Hugging Face (Twitter)
RT @ClementDelangue: It's out! and you can already run inference on the HF model page thanks to @hyperbolic_labs! https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct https://twitter.com/ClementDelangue/status/1947753298879975771#m
Hugging Face (Twitter)
RT @itsPaulAi: Wait so Alibaba Qwen has just released ANOTHER model??
Qwen3-Coder is simply one of the best coding models we've ever seen.
→ Still 100% open source
→ Up to 1M context window 🔥
→ 35B active parameters
→ Same performance as Sonnet 4
They're releasing a CLI tool as well ↓
Hugging Face (Twitter)
RT @AdinaYakup: Qwen3-Coder 💻 agentic code model by @Alibaba_Qwen
https://huggingface.co/collections/Qwen/qwen3-coder-687fc861e53c939e52d52d10
✨ 480B total, 35B activated MoE
✨ Agentic Coding + Browser Use → Top code model performance
✨ 256K context (up to 1M via Yarn) for repo-scale understanding
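The 256K→1M context figures quoted above imply a specific stretch factor for a RoPE-scaling scheme like YaRN. A tiny illustrative calculation (the context sizes come from the post; the factor is just arithmetic, not Qwen's published configuration):

```python
# Illustrative arithmetic only: the scaling factor a RoPE-extension
# method such as YaRN would need to stretch a native 256K
# (262,144-token) window to the 1M-token figure quoted above.
native_context = 256 * 1024      # 262,144 tokens, native window
target_context = 1_000_000       # extended window via YaRN
scaling_factor = target_context / native_context
print(f"required scaling factor: ~{scaling_factor:.2f}x")
```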
Hugging Face (Twitter)
RT @cline: Live in Cline: Qwen3-235B-A22B-2507
Open source and with a 262k context window, the latest from Qwen is impressive on the benchmarks, but we're looking forward to its real world performance.
fyi, here's the model name in Cline:
qwen/qwen3-235b-a22b-07-25 https://twitter.com/Alibaba_Qwen/status/1947344511988076547#m
Hugging Face (Twitter)
RT @deedydas: The best open-source AI model just dropped a detailed report on how it was trained, a rare resource for students given that no frontier lab is publishing one!
Kimi K2's estimated total training cost is ~$20-30M, roughly in line with its pricing: $0.6/M input tokens, $2.5/M output tokens.
10 highlights:
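As a back-of-envelope check on "roughly in line with pricing": the token volume at which the quoted output price would match the midpoint of the training-cost estimate. Purely illustrative arithmetic using the figures from the post:

```python
# Output tokens needed, at the quoted $2.5 per million rate, to match
# the ~$25M midpoint of the $20-30M training-cost estimate.
training_cost_usd = 25_000_000       # midpoint of the $20-30M estimate
price_out_per_million = 2.5          # USD per million output tokens
tokens_to_match = training_cost_usd / price_out_per_million * 1_000_000
print(f"{tokens_to_match:.0e} output tokens")  # on the order of 10 trillion
```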
Hugging Face (Twitter)
RT @Alibaba_Qwen: >>> Qwen3-Coder is here! ✅
We’re releasing Qwen3-Coder-480B-A35B-Instruct, our most powerful open agentic code model to date. This 480B-parameter Mixture-of-Experts model (35B active) natively supports 256K context and scales to 1M context with extrapolation. It achieves top-tier performance across multiple agentic coding benchmarks among open models, including SWE-bench-Verified!!! 🚀
Alongside the model, we're also open-sourcing a command-line tool for agentic coding: Qwen Code. Forked from Gemini CLI, it includes custom prompts and function call protocols to fully unlock Qwen3-Coder’s capabilities. Qwen3-Coder works seamlessly with the community’s best developer tools. As a foundation model, we hope it can be used anywhere across the digital world — Agentic Coding in the World!
💬 Chat: chat.qwen.ai/
📚 Blog: https://qwenlm.github.io/blog/qwen3-coder/
🤗 Model: https://hf.co/Qwen/Qwen3-Coder-480B-A35B-Instruct
🤖 Qwen Code: github.com/QwenLM/qwen-code
Hugging Face (Twitter)
RT @itsPaulAi: Alibaba Qwen has just released a non-thinking model even more powerful than Kimi K2...
And even better than Claude Opus 4 🤯
→ 100% open source
→ Only 22B active parameters
→ Available for free in Qwen Chat
All the links below
Hugging Face (Twitter)
RT @reach_vb: Qwen on a rampage 🔥 - Qwen3 Coder 480B (35B Active), beats Kimi K2 AND Claude Sonnet 4 - Apache 2.0 licensed!
> Up to 1M context
> Non-Reasoning
> Supports agentic and browser mode
So so excited for the smaller models in the series 🤗
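A quick sketch of the "480B total, 35B active" arithmetic quoted above, illustrating why a sparse MoE of this size is cheaper to serve than a dense model of equal size; the figures come from the post, the fraction is simple division:

```python
# In a Mixture-of-Experts model, only the routed experts' weights
# participate in each token's forward pass, so per-token compute
# tracks active parameters, not total parameters.
total_params = 480e9       # 480B total parameters
active_params = 35e9       # 35B activated per token
active_fraction = active_params / total_params
print(f"~{active_fraction:.1%} of parameters active per token")
```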
Hugging Face (Twitter)
RT @reach_vb: NEW: Higgs Audio V2 from @boson_ai open, unified TTS model w/ voice cloning, beats GPT 4o mini tts and ElevenLabs v2 🔥
> Trained on 10M hours (speech, music, events)
> Built on top of Llama 3.2 3B
> Works real-time and on edge
> Beats GPT-4o-mini-tts, ElevenLabs v2 in prosody & emotion
> Multi-speaker dialog
> Zero-shot voice cloning 🤩
> Available on Hugging Face
Kudos to the folks at Boson AI for releasing such brilliant work and all the details around the model! 🤗
Hugging Face (Twitter)
RT @multimodalart: this is all you need to use the newest Qwen3-Coder-480B + cli using @huggingface Inference Providers
on benchmarks, it's competitive with Claude Code, now it's vibe check time ✨
Hugging Face (Twitter)
RT @ClementDelangue: It’s time for the American AI community to wake up, drop the "open is not safe" bullshit, and return to its roots: open science and open-source AI, powered by an unmatched community of frontier labs, big tech, startups, universities, and non‑profits.
If we don’t, we’ll be forced to build on foreign foundations and risk losing the race in AI altogether by lack of local innovation and competition.
Let’s go! https://twitter.com/ClementDelangue/status/1948028133640146960#m
Hugging Face (Twitter)
RT @ClementDelangue: DeepSite crossed 10k likes and is now the third most popular space ever. Create a website with natural language, free (or almost), thanks to open-source AI models like Qwen3 Coder, Kimi K2, or DeepSeek...
Mind-blowing how this whole topic of AI-powered website creation has been exploding! https://huggingface.co/spaces/enzostvs/deepsite
Hugging Face (Twitter)
RT @charles_irl: ICYMI, open models for transcription are very good now. In just the last few months, we've gotten @nvidia Parakeet and Canary, @kyutai_labs STT, and @MistralAI Voxtral.
Running your own transcription at scale is now 100x faster and 100x cheaper than using a proprietary API.
Hugging Face (Twitter)
RT @lucataco93: You can now run Kontext LoRAs via Huggingface 🤗
https://huggingface.co/fofr/kontext-make-person-real
Hugging Face (Twitter)
RT @victormustar: New TTS bomb dropped on Hugging Face. It has multispeaker support and the output quality looks amazing 🤯
Hugging Face (Twitter)
RT @jetbrains: Not every developer task requires a general-purpose LLM.
We’re betting on specialized focal LLMs – smaller, faster, and focused.
Join @jetbrains and @huggingface for a livestream on how focal models like Mellum will shape the industry.
📅 July 29, 6 pm CET
👉 Save your spot: jb.gg/45n7t8
Hugging Face (Twitter)
RT @Alibaba_Qwen: 🚀 Introducing Qwen3-MT – our most powerful translation model yet!
Trained on trillions of multilingual tokens, it supports 92+ languages—covering 95%+ of the world’s population. 🌍✨
🔑 Why Qwen3-MT?
✅ Top-tier translation quality
✅ Customizable: terminology control, domain prompts, TM
✅ Ultra-fast & cost-effective: from $0.5/million tokens (MoE)
✅ Built for scale: low latency, high concurrency
Enhanced with reinforcement learning for unmatched fluency & accuracy.
Now available via the Qwen API – start breaking language barriers today! 💬🌐
Hugging Face Demo: https://huggingface.co/spaces/Qwen/Qwen3-MT-Demo
ModelScope Demo: https://modelscope.cn/studios/Qwen/Qwen3-MT-demo
API Doc: https://www.alibabacloud.com/help/en/model-studio/translation-abilities
Blog: https://qwenlm.github.io/blog/qwen-mt/
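Since the post says the model is available through the Qwen API, here is a hypothetical sketch of assembling a translation request for an OpenAI-compatible chat endpoint. The model id (`qwen-mt-turbo`) and the prompt format are illustrative assumptions, not the documented interface; consult the API doc linked above for the real parameters (including the terminology-control and translation-memory options mentioned in the post):

```python
# Hypothetical request builder for an OpenAI-compatible chat endpoint;
# model id and prompt wording are assumptions for illustration only.
def build_translation_request(text: str, source_lang: str, target_lang: str) -> dict:
    """Assemble a chat-style payload asking the model to translate text."""
    return {
        "model": "qwen-mt-turbo",  # assumed model id
        "messages": [
            {
                "role": "user",
                "content": (
                    f"Translate the following {source_lang} text "
                    f"to {target_lang}:\n{text}"
                ),
            }
        ],
    }

req = build_translation_request("Hello, world!", "English", "French")
print(req["messages"][0]["content"])
```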
Hugging Face (Twitter)
RT @PyTorch: SmolLM3-3B-8da4w: With #TorchAO & optimum-executorch, quantizing and exporting for mobile is a breeze.
Now ready for on-device deployment with #ExecuTorch, running at 15 tokens/sec on Galaxy S22. 🔗 Model card with recipes + checkpoints: hubs.la/Q03yGyTN0
#EdgeAI #PyTorch
Hugging Face (Twitter)
RT @novita_labs: ⚡️Qwen3-235B-A22B-2507, supported by Novita, is also live on Hugging Face!
☑️ Function Call
☑️ Structured Output
Play with it 👇
Hugging Face (Twitter)
RT @pollenrobotics: New Unity package available: Reachy 2's digital twin!
- Gives immersive 3D experience through AR/VR
- Fully controllable via Reachy 2 stack
- Perfect for robotics courses & HRI research
Explore robotics without the physical robot!
https://github.com/pollen-robotics/Reachy2-UnityDigitalTwin