Hugging Face (Twitter)
RT @ngxson: A long-awaited feature has dropped! You can now edit GGUF metadata directly on Hugging Face, without having to download the model locally 🔥
Huge kudos to @mishig25 for implementing this! ❤️
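For context, here's a minimal sketch of the local workflow this replaces: inspecting a GGUF file's metadata with the `gguf` Python package from the llama.cpp project. The repo and file names are placeholders, not a real model; the new Hub feature does all of this in the browser.

```python
# Hedged sketch: inspect GGUF metadata locally (pip install gguf huggingface_hub).
# Repo id and filename below are placeholders, not a real model.
from huggingface_hub import hf_hub_download
from gguf import GGUFReader

path = hf_hub_download(
    repo_id="someuser/some-model-GGUF",  # placeholder repo
    filename="model-Q4_K_M.gguf",        # placeholder file
)

reader = GGUFReader(path)
# Each field is a key/value metadata entry, e.g. general.name or the chat template.
# Decoding a field's value depends on its type, so we just list the keys here.
for field in reader.fields.values():
    print(field.name)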
Hugging Face (Twitter)
RT @ArtificialAnlys: Recent open-weights releases are narrowing the gap to proprietary frontier models on agentic workflows
On the Terminal-Bench Hard evaluation for agentic coding and terminal use, open-weights models such as DeepSeek V3.2 Exp, Kimi K2 0905, and GLM-4.6 have made large strides, with DeepSeek surpassing Gemini 2.5 Pro. These advances reflect significantly higher capability for use in coding and other agent use cases, and developers have a wider range of model options than ever for these applications.
See below for our analysis of the price and performance of providers to help you make use of these leading models 👇
Hugging Face (Twitter)
RT @TheZachMueller: Smol MoEs are here https://huggingface.co/LiquidAI/LFM2-8B-A1B
Hugging Face (Twitter)
RT @maximelabonne: LFM2-8B-A1B just dropped on @huggingface!
8.3B params with only 1.5B active/token 🚀
> Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF
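A minimal sketch of trying the model with 🤗 Transformers. Generation settings here are illustrative assumptions; the model card at the linked repo has the recommended setup, and a sufficiently recent transformers version is needed for the LFM2 architecture.

```python
# Hedged sketch: load LiquidAI/LFM2-8B-A1B and generate.
# Exact requirements (transformers version, dtype, chat template) may differ;
# check the model card for the recommended configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-8B-A1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Explain mixture-of-experts in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The 1.5B-active budget is what makes the phone/laptop claim plausible: only a fraction of the 8.3B weights participate in each token, so per-token compute is closer to a small dense model.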
Hugging Face (Twitter)
RT @vanstriendaniel: DoTS.ocr from @xiaohongshu just got native @vllm_project support!
I built a UV script so you can run SOTA multilingual OCR in seconds with zero setup using @huggingface Jobs
Tested on 1800s library cards - works great ✨
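A rough sketch of the vLLM side of this. The repo id and the image-placeholder prompt format are assumptions (vision-language models each expect their own placeholder tokens; check the model card), and the actual uv script wires this up to run remotely via HF Jobs.

```python
# Hedged sketch: run dots.ocr with vLLM's offline multimodal API.
# Repo id and prompt format are assumptions; see the model card for specifics.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(model="rednote-hilab/dots.ocr", trust_remote_code=True)

image = Image.open("library_card.png")
# The "<image>" placeholder is model-specific; substitute the tokens the
# model's chat template actually expects.
outputs = llm.generate(
    {
        "prompt": "<image>\nTranscribe all text on this card.",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=1024, temperature=0.0),
)
print(outputs[0].outputs[0].text)
```

The zero-setup part presumably comes from running the uv script on Hugging Face's infrastructure (e.g. via the `hf jobs` CLI) rather than provisioning a GPU yourself.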
Hugging Face (Twitter)
RT @ClementDelangue: The community added 1 million new repos (models, datasets, spaces) on @huggingface in the past 90 days! For context, it took six years to get to the first million repositories.
That's a new repository created on HF every 8 seconds.
What's cool is that:
- 100% are now powered by Xet, our technology for faster, cheaper, more efficient data transfer. Lots of exciting features will be unlocked by this, like the in-browser GGUF editing we just announced
- 40% are private repositories, which shows that people are increasingly using the Hub internally within their organizations to share weights, datasets, and demos. Enterprise Hub subscriptions are our fastest-growing line of revenue.
Next milestone is to reach 10 million total repositories! Ultimately, thanks to open source, there will be more AI repositories than code repositories, so that everyone can build AI.
Let's go!
Hugging Face (Twitter)
RT @xenovacom: Introducing Granite Docling WebGPU 🐣 State-of-the-art document parsing 100% locally in your browser! 🤯
🔐 No data sent to a server (private & secure)
💰 Completely free... forever!
🔂 Docling ecosystem enables conversion to HTML, Markdown, JSON, and more!
Try out the demo! 👇
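The demo runs entirely in-browser on WebGPU. For a rough server-side equivalent, here's a hedged sketch with 🤗 Transformers; the repo id and prompt are assumptions based on the announcement, so verify them against the demo and model card.

```python
# Hedged sketch: run Granite Docling server-side with Transformers.
# (The demo above runs the same family of model in-browser via WebGPU instead.)
# Repo id and prompt are assumptions; check the model card.
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image

model_id = "ibm-granite/granite-docling-258M"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("page.png")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Convert this page to docling."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=1024)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```

The model emits a structured document representation, which the Docling toolchain then converts to HTML, Markdown, JSON, and other formats.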
Hugging Face (Twitter)
RT @ClementDelangue: Very cool paper! You can discuss with the author here: https://huggingface.co/papers/2510.04871
Hugging Face (Twitter)
RT @wjb_mattingly: I'm a huge fan of hf jobs. I'm working on a way to do the same thing on Yale's HPC; it should work for most HPCs using Slurm. It handles dataset creation, job creation, job submission, ssh, etc. This is in no small part thanks to @vanstriendaniel's great work and the team at @huggingface.
Hugging Face (Twitter)
RT @ClementDelangue: The @LeRobotHF team is studying the @UnitreeRobotics G1 today in case you have any questions or fun stuff you want us to try!
Hugging Face (Twitter)
RT @AntLingAGI: 🚀 Ling-1T — Trillion-Scale Efficient Reasoner
Introducing Ling-1T, the first flagship non-thinking model in the Ling 2.0 series —
1 trillion total parameters with ≈50B active per token, trained on 20T+ reasoning-dense tokens.
Highlights
→ Evo-CoT curriculum + Linguistics-Unit RL for scalable reasoning
→ Strong efficiency–accuracy balance on complex reasoning tasks
→ Advanced visual understanding + front-end code generation via Syntax–Function–Aesthetics reward
→ Emergent tool-use ability (≈70%) with minimal instruction tuning
→ FP8 mixed-precision + Ling Scaling Law → efficient trillion-scale training
Efficient Thinking · Precise Reasoning
Ling-1T extends the Pareto frontier of reasoning accuracy vs. cost —
a new milestone in open-source trillion-scale intelligence.
Hugging Face (Twitter)
RT @LeRobotHF: 🚀 Big update for LeRobot!
We've launched a new plugin system to support third-party hardware. Now you can integrate any robot, camera, or teleoperator with a simple 'pip install'; no need to modify the core library.
This makes open robotics development more extensible, scalable, and community-friendly.
Learn how to create your own plugin: https://huggingface.co/docs/lerobot/integrate_hardware#using-your-own-lerobot-devices-
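LeRobot's actual integration API is in the linked docs. As a generic illustration of how pip-installable plugins are usually discovered in Python, here's a minimal entry-points sketch; all names below are hypothetical and are NOT LeRobot's real interface.

```python
# Generic sketch of pip-installable plugin discovery via Python entry points
# (requires Python 3.10+ for the `group=` keyword).
# All names are hypothetical illustrations, NOT LeRobot's real API.
from importlib.metadata import entry_points

# A third-party package would declare in its pyproject.toml:
# [project.entry-points."myframework.robots"]
# my_arm = "my_plugin_pkg:MyArmDriver"

def discover_robots() -> dict:
    """Collect every robot driver that installed packages advertise."""
    return {ep.name: ep.load() for ep in entry_points(group="myframework.robots")}

drivers = discover_robots()
print("available robot drivers:", sorted(drivers))
```

With this pattern, 'pip install some-plugin' is all it takes for the framework to see new hardware at runtime, which is what makes the approach extensible without touching the core library.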
Hugging Face (Twitter)
RT @Thom_Wolf: LeRobot becoming an easy-to-install alternative to ROS (Robot Operating System) https://twitter.com/LeRobotHF/status/1975930970575397332#m
Hugging Face (Twitter)
RT @mervenoyann: we're celebrating halloween with @togethercompute at @huggingface 🤗🎃
join us in this fine-tuning workshop at our Paris office 🇫🇷
we'll have speakers from Together and our own @SergioPaniego to talk about fine-tuning & alignment 🛠️
find the detailed agenda on the next one ⤵️
Hugging Face (Twitter)
RT @reach_vb: The Hugging Face Hub team is on a tear recently:
> You can create custom apps with their own domains on Spaces
> Edit GGUF metadata on the fly
> 100% of the Hub is powered by Xet - faster, more efficient
> Responses API support for ALL Inference Providers
> MCP-UI support for HF MCP Server
> Search papers based on the Org
> Showcase repository size on the UI
and a lot more - excited for the coming weeks/months as we continue to improve the overall UX! 🤗