Hugging Face (Twitter)
RT @HaihaoShen: 🥳Qwen3-Coder-30B-A3B INT4 & INT2 GGUF models are available now -
https://huggingface.co/Intel/Qwen3-Coder-30B-A3B-Instruct-int4-AutoRound
https://huggingface.co/Intel/Qwen3-Coder-30B-A3B-Instruct-gguf-q2ks-mixed-AutoRound
#intel #int4 #autoround #huggingface
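For anyone wanting to try the INT4 export locally, here is a minimal sketch, assuming the repo loads through transformers with the auto-round/accelerate stack installed and that a single GPU has enough memory; the prompt and generation settings are illustrative, and the q2ks GGUF file is intended for llama.cpp-style runtimes instead.

```python
# Minimal sketch: load the INT4 AutoRound export with transformers.
# Assumes `pip install transformers accelerate auto-round`; the exact
# quantization backend the repo expects may differ, so check the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Intel/Qwen3-Coder-30B-A3B-Instruct-int4-AutoRound"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```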
Hugging Face (Twitter)
RT @Alibaba_Qwen: 🚀 Meet Qwen-Image — a 20B MMDiT model for next-gen text-to-image generation. Especially strong at creating stunning graphic posters with native text. Now open-source.
🔍 Key Highlights:
🔹 SOTA text rendering — rivals GPT-4o in English, best-in-class for Chinese
🔹 In-pixel text generation — no overlays, fully integrated
🔹 Bilingual support, diverse fonts, complex layouts
🎨 Also excels at general image generation — from photorealistic to anime, impressionist to minimalist. A true creative powerhouse.
Blog: https://qwenlm.github.io/blog/qwen-image/
Hugging Face: https://huggingface.co/Qwen/Qwen-Image
ModelScope: https://modelscope.cn/models/Qwen/Qwen-Image
GitHub: github.com/QwenLM/Qwen-Image
Technical report: https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/Qwen_Image.pdf
Demo: https://modelscope.cn/aigc/imageGeneration?tab=advanced
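A minimal sketch of generating an image from the released weights through Diffusers, assuming a recent diffusers release with Qwen-Image support and a large-VRAM GPU; the prompt, dtype, and step count are illustrative.

```python
# Minimal sketch: text-to-image with Qwen-Image via Diffusers.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = 'A minimalist conference poster with the title "Open Models, Open Science" in bold type'
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("qwen_image_poster.png")
```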
Hugging Face (Twitter)
RT @_fracapuano: We shipped @LeRobotHF to its first major release, on PyPI and GitHub.
Alongside the team at @huggingface, we're making robotics more accessible and collaborative, and we hope this release makes contributing easier and better.
Links in 🧵
Hugging Face (Twitter)
RT @jandotai: Hugging Face 🤝 Jan
You can now use Hugging Face as a remote model provider in Jan.
Go to Settings -> Model Providers -> add your Hugging Face API key. Then open a new chat and pick a model from @huggingface.
Works with any model on Hugging Face, right inside Jan.
Hugging Face (Twitter)
RT @abidlabs: New Gradio component: 🥳 gr.Dialogue:
• As an output, it can be used to show diarized speech transcription
• As input, it's perfect for multispeaker TTS models, as it also supports auto-complete tags 🪄
Try it out in Gradio 5.40!
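A quick sketch of gr.Dialogue used as an output for a diarized transcript; the value format (a list of speaker/text dicts) and the speakers= argument are assumptions based on the announcement, so check the Gradio 5.40 docs for the exact API.

```python
# Minimal sketch: showing a diarized transcript in the new gr.Dialogue component.
import gradio as gr

def fake_diarize(audio_path):
    # Stand-in for a real ASR + speaker-diarization pipeline.
    return [
        {"speaker": "Speaker 1", "text": "Welcome to the show."},
        {"speaker": "Speaker 2", "text": "Thanks for having me!"},
    ]

demo = gr.Interface(
    fn=fake_diarize,
    inputs=gr.Audio(type="filepath"),
    outputs=gr.Dialogue(speakers=["Speaker 1", "Speaker 2"]),  # speakers= assumed from the announcement
)

if __name__ == "__main__":
    demo.launch()
```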
Hugging Face (Twitter)
RT @jackvial89: I've created a @LeRobotHF @huggingface dataset for the screwdriver robot. This dataset contains 391 human demonstrations of attaching a part with a screw in 3 positions: left, right, center. Currently training a few different models on this dataset!
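A minimal sketch of loading such a dataset with LeRobot; the repo id below is hypothetical (the tweet doesn't name the dataset) and the import path may differ between LeRobot versions.

```python
# Minimal sketch: load a LeRobot dataset from the Hub and inspect one frame.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # path may vary by version

dataset = LeRobotDataset("jackvial/screwdriver-demos")  # hypothetical repo id
print(f"{len(dataset)} frames")

sample = dataset[0]  # dict of camera frames, robot state, and action tensors
print(sample.keys())
```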
Hugging Face (Twitter)
RT @RisingSayak: Wait is over 🤯
An Apache 2.0 DiT-based image generation model from @Alibaba_Qwen -- Qwen-Image 🔥
Supported in Diffusers. Training script PR is up and should be merged soon.
Go, fire!
Hugging Face (Twitter)
RT @_lewtun: One line of code is all it takes to fine-tune the gpt-oss models from @OpenAI 🔥
> Support to target the MoE expert layers with PEFT
> Kernels for FlashAttention3 & MegaBlocks
> Fast inference with MXFP4 quantization format
In our testing, these models are extremely efficient to tune and can be adapted to new domains with just a few hundred samples 🤯
Download the models: huggingface.co/openai
Training & inference recipes: https://github.com/huggingface/gpt-oss-recipes/tree/main
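A minimal sketch of the fine-tuning flow with TRL + PEFT; the dataset, LoRA hyperparameters, and target_modules choice are illustrative, and the linked recipes show how to target the MoE expert layers specifically and use the MXFP4/kernel optimizations.

```python
# Minimal sketch: LoRA fine-tuning gpt-oss-20b with TRL's SFTTrainer.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")  # example dataset

peft_config = LoraConfig(r=8, lora_alpha=16, target_modules="all-linear")

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="gpt-oss-20b-sft", per_device_train_batch_size=1, num_train_epochs=1),
)
trainer.train()
```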
Hugging Face (Twitter)
RT @mervenoyann: gpt-oss @OpenAI is here! 🔥
> two MoEs with 21B/3.6B and 117B/5.1B total/active params, efficient reasoning models 🤯
> use & fine-tune with transformers & TRL 🛠️
> inference powered by @huggingface Inference Providers 🫡
> apache 2.0 license 💗
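A minimal sketch of local inference with transformers, assuming a recent release with gpt-oss support and enough GPU memory for the 20B MoE; dtype and device settings are illustrative.

```python
# Minimal sketch: chat with gpt-oss-20b through the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation", model="openai/gpt-oss-20b", torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}]
output = generator(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])  # last turn is the assistant reply
```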
Hugging Face (Twitter)
RT @multimodalart: the gpt-oss model is really easy to tune!
get started with customizing/fine-tuning to make gpt-oss your own with the @OpenAI + @huggingface cookbook 🤝
https://cookbook.openai.com/articles/gpt-oss/fine-tune-transfomers
Hugging Face (Twitter)
RT @reach_vb: OpenAI COOKED! That's an Apache 2.0 licensed 120B model competing with OpenAI o3 🤯
> 120B and 20B models
> 128K context
> First open model to be able to tool call in CoT
> Released with optimised kernels
Apache 2.0 license! What a landmark release - Kudos @OpenAIDevs 🤗
Hugging Face (Twitter)
RT @reach_vb: The best open model currently available on Inference Providers, blazing fast! Powered by @CerebrasSystems 🔥
Try it out today! https://twitter.com/reach_vb/status/1952782804023988557#m
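A minimal sketch of calling the model through Inference Providers with huggingface_hub, assuming the tweet refers to gpt-oss-120b and that the Cerebras provider serves it; it needs an HF token in the environment.

```python
# Minimal sketch: chat completion via Inference Providers (Cerebras backend).
from huggingface_hub import InferenceClient

client = InferenceClient(provider="cerebras")  # reads HF_TOKEN from the environment
response = client.chat_completion(
    model="openai/gpt-oss-120b",  # assumed from the thread context
    messages=[{"role": "user", "content": "Summarize the gpt-oss release in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```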
Hugging Face (Twitter)
RT @ClementDelangue: When @sama told me at the AI summit in Paris that they were serious about releasing open-source models & asked what would be useful, I couldn’t believe it.
But six months of collaboration later, here it is: Welcome to OSS-GPT on @huggingface! It comes in two sizes: one for maximum reasoning capabilities and a cheaper, faster, on-device option, all Apache 2.0. It's integrated with our inference partners that power the official demo.
This open-source release is critically important & timely, because as @WhiteHouse emphasized in the US Action plan, we need stronger American open-source AI foundations. And who could do that better than the very startup that has been pioneering and leading the field in so many ways.
Feels like a plot twist.
Feels like a comeback.
Feels like the beginning of something big, let’s go open-source AI 🔥🔥🔥
Hugging Face (Twitter)
RT @romainhuet: Today’s a big day! We have something really exciting to share with the open-source community.
We’re launching two open-weight language models: gpt-oss-120b and gpt-oss-20b.
They’re incredible models, built for developers, trained for reasoning, efficiency, and real-world use.🧵
Hugging Face (Twitter)
RT @dylan_ebert_: OpenAI just released GPT-OSS: An Open Source Language Model on Hugging Face
Open source meaning:
💸 Free
🔒 Private
🔧 Customizable
Hugging Face (Twitter)
RT @romainhuet: We built a gpt-oss developer playground so you can try the models right away:
• Choose your model and set the reasoning effort 🎛️
• See the model’s raw chain-of-thought for debugging and research 🧠
• Get a handful of free messages, and sign in with @huggingface for more 🤗