Hugging Face (Twitter)
RT @nic_o_martin: Beyond happy to announce that I'm joining 🤗 @huggingface as a #MachineLearningEngineer focused on #WebML!
Hugging Face (Twitter)
RT @ClementDelangue: Now number one trending dataset on @huggingface, out of almost half a million! huggingface.co/datasets https://twitter.com/NousResearch/status/1945181587600982450#m
Hugging Face (Twitter)
RT @MaziyarPanahi: Perfect Sunday: I just used Kimi-K2 by @Kimi_Moonshot to vibe code a @Gradio app! 🔥
You can use the "Anycoder" Space by @_akhaliq, hosted on @huggingface, for free. It was super quick! 🤗
PS: I am aware that I used Gradio to vibe code another Gradio app! Pun very much intended here!
Hugging Face (Twitter)
RT @AdinaYakup: From paper to project page in one click.
AnyCoder 🔥 turns research PDFs into structured, shareable project pages in seconds!
https://huggingface.co/spaces/akhaliq/anycoder
Powered by 8 SoTA open models on @huggingface
Hugging Face (Twitter)
RT @vitrupo: Jack Dorsey says AI must be permissionless because constraint kills innovation.
Five CEOs shouldn't dictate what brings humanity forward.
Open source is the answer.
To protect ourselves, we have to race ahead. Eliminating single points of failure before they become civilization's choke points.
Hugging Face (Twitter)
RT @yagilb: I'm not sure how HF is paying for all those TBs going in and out, but at least now we're chipping in a little bit. Thanks @huggingface for being the great library of AI models for us all.
Hugging Face (Twitter)
RT @HaihaoShen: INT4 model for the updated Qwen3-235B-A22B:
vLLM MoE support doesn't seem to be working well yet, but HF Transformers can run it pretty well.
huggingface.co
Intel/Qwen3-235B-A22B-Instruct-2507-int4-mixed-rtn-AutoRound-inc · Hugging Face
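For anyone who wants to try the checkpoint above the way the tweet suggests, here is a minimal sketch of loading the AutoRound INT4 model with plain Transformers. It assumes a multi-GPU host with enough memory for a 4-bit 235B MoE and that the auto-round backend is installed so the repo's quantization config is picked up; the prompt is illustrative.

```python
# Hedged sketch: running the Intel AutoRound INT4 checkpoint with Transformers.
# Assumes enough GPU memory for a 4-bit 235B MoE and `pip install auto-round`
# so the quantization config stored in the repo can be loaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Intel/Qwen3-235B-A22B-Instruct-2507-int4-mixed-rtn-AutoRound-inc"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard experts across available GPUs
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)

messages = [{"role": "user", "content": "Write a haiku about mixture-of-experts."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```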
Hugging Face (Twitter)
RT @intrstllrninja: today I'm releasing a 50k-row tool-use reasoning dataset compilation on huggingface
includes the following BFCL scenarios:
- single turn tool-use
- multiturn tool-use
- multistep tool-use
- relevance reasoning
https://huggingface.co/datasets/interstellarninja/hermes_reasoning_tool_use
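A quick way to poke at the dataset above is the `datasets` library; a minimal sketch follows. The split name and column access are assumptions, so check `ds.features` for the real schema.

```python
# Hedged sketch: inspecting the tool-use reasoning dataset with `datasets`.
from datasets import load_dataset

# Assumes a "train" split exists; adjust after checking the dataset card.
ds = load_dataset("interstellarninja/hermes_reasoning_tool_use", split="train")

print(ds)           # row count and column names
print(ds.features)  # schema (the exact columns are not documented in the tweet)
print(ds[0])        # first example, e.g. a single-turn tool-use conversation
```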
Hugging Face (Twitter)
RT @vllm_project: The @huggingface Transformers + @vllm_project integration just leveled up: Vision-Language Models are now supported out of the box!
If the model is integrated into Transformers, you can now run it directly with vLLM.
https://github.com/vllm-project/vllm/pull/20543
Great work @RTurganbay!
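As a rough illustration of the integration above, the sketch below asks vLLM to serve a vision-language model via its Transformers backend. The model id, image path, and prompt string are placeholders rather than part of the announcement; real code should render the prompt from the model's own chat template.

```python
# Hedged sketch: running a Transformers-integrated VLM through vLLM.
# model_impl="transformers" tells vLLM to use the Transformers modeling code
# instead of a native vLLM implementation.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(
    model="Qwen/Qwen2.5-VL-3B-Instruct",  # placeholder: any VLM supported in Transformers
    model_impl="transformers",
)

image = Image.open("example.jpg")  # placeholder image
# Placeholder prompt; in practice, build it from the model's chat template.
prompt = "USER: <image>\nDescribe this picture.\nASSISTANT:"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```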
Hugging Face (Twitter)
RT @ClementDelangue: It's out! And you can already run inference on the HF model page thanks to @hyperbolic_labs! https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct https://twitter.com/ClementDelangue/status/1947753298879975771#m
Hugging Face (Twitter)
RT @itsPaulAi: Wait, so Alibaba Qwen has just released ANOTHER model??
Qwen3-Coder is simply one of the best coding models we've ever seen.
✅ Still 100% open source
✅ Up to 1M context window 🔥
✅ 35B active parameters
✅ Same performance as Sonnet 4
They're releasing a CLI tool as well.
Hugging Face (Twitter)
RT @AdinaYakup: Qwen3-Coder 💻 agentic code model by @Alibaba_Qwen
https://huggingface.co/collections/Qwen/qwen3-coder-687fc861e53c939e52d52d10
✨ 480B total parameters, 35B activated (MoE)
✨ Agentic coding + browser use → top code-model performance
✨ 256K context (up to 1M via YaRN) for repo-scale understanding
Hugging Face (Twitter)
RT @cline: Live in Cline: Qwen3-235B-A22B-2507
Open source and with a 262k context window, the latest from Qwen is impressive on the benchmarks, but we're looking forward to its real world performance.
fyi, here's the model name in Cline:
qwen/qwen3-235b-a22b-07-25 https://twitter.com/Alibaba_Qwen/status/1947344511988076547#m
Hugging Face (Twitter)
RT @deedydas: The best open-source AI model just dropped a detailed report on how it was trained, a rare resource for students given that no frontier lab is publishing this kind of detail!
Kimi K2's estimated total cost of training is ~$20-30M, roughly in line with its pricing: $0.6/M input tokens, $2.5/M output tokens.
10 highlights:
Hugging Face (Twitter)
RT @Alibaba_Qwen: >>> Qwen3-Coder is here!
We're releasing Qwen3-Coder-480B-A35B-Instruct, our most powerful open agentic code model to date. This 480B-parameter Mixture-of-Experts model (35B active) natively supports 256K context and scales to 1M context with extrapolation. It achieves top-tier performance across multiple agentic coding benchmarks among open models, including SWE-bench Verified!
Alongside the model, we're also open-sourcing a command-line tool for agentic coding: Qwen Code. Forked from Gemini Code, it includes custom prompts and function-call protocols to fully unlock Qwen3-Coder's capabilities. Qwen3-Coder works seamlessly with the community's best developer tools. As a foundation model, we hope it can be used anywhere across the digital world: Agentic Coding in the World!
Chat: chat.qwen.ai/
Blog: https://qwenlm.github.io/blog/qwen3-coder/
Model: https://hf.co/Qwen/Qwen3-Coder-480B-A35B-Instruct
Qwen Code: github.com/QwenLM/qwen-code
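Since few people can host a 480B-parameter checkpoint locally, here is a hedged sketch of calling Qwen3-Coder through Hugging Face's hosted inference, the same route the model page uses. The token and prompt are placeholders; provider routing and availability may differ.

```python
# Hedged sketch: querying Qwen3-Coder-480B-A35B-Instruct via Hugging Face
# hosted inference instead of self-hosting the 480B checkpoint.
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_...")  # placeholder HF access token

completion = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-480B-A35B-Instruct",
    messages=[
        {"role": "user", "content": "Write a Python function that parses a CSV header."}
    ],
    max_tokens=512,
)
print(completion.choices[0].message.content)
```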
Hugging Face (Twitter)
RT @itsPaulAi: Alibaba Qwen has just released a non-thinking model even more powerful than Kimi K2...
And even better than Claude Opus 4 🤯
✅ 100% open source
✅ Only 22B active parameters
✅ Available for free in Qwen Chat
All the links below
Hugging Face (Twitter)
RT @reach_vb: Qwen on a rampage 🔥 - Qwen3 Coder 480B (35B active) beats Kimi K2 AND Claude Sonnet 4 - Apache 2.0 licensed!
> Up to 1M context
> Non-reasoning
> Supports agentic and browser mode
So, so excited for the smaller models in the series 🤗
Hugging Face (Twitter)
RT @reach_vb: NEW: Higgs Audio V2 from @boson_ai, an open, unified TTS model w/ voice cloning that beats GPT-4o-mini-tts and ElevenLabs v2 🔥
> Trained on 10M hours (speech, music, events)
> Built on top of Llama 3.2 3B
> Works real-time and on edge
> Beats GPT-4o-mini-tts and ElevenLabs v2 in prosody & emotion
> Multi-speaker dialog
> Zero-shot voice cloning 🤩
> Available on Hugging Face
Kudos to the folks at Boson AI for releasing such brilliant work and all the details around the model! 🤗