Hugging Face (Twitter)
RT @maximelabonne: LFM2-8B-A1B just dropped on @huggingface!
8.3B params with only 1.5B active/token 🚀
> Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF
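For context, a minimal sketch of loading the model with transformers. The repo ID LiquidAI/LFM2-8B-A1B, dtype, and transformers support are assumptions taken from the announcement, not verified here; the llama.cpp / vLLM on-device path mentioned in the tweet is not shown.

```python
# Hedged sketch: load LFM2-8B-A1B with transformers (repo ID assumed from the announcement).
# May require a recent transformers release that includes the LFM2 MoE architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-8B-A1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # only ~1.5B params are active per token, but all 8.3B must fit in memory
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a one-line Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```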
Hugging Face (Twitter)
RT @vanstriendaniel: DoTS.ocr from @xiaohongshu just got native @vllm_project support!
I built a UV script so you can run SOTA multilingual OCR in seconds with zero setup using @huggingface Jobs
Tested on 1800s library cards - works great ✨
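The tweet's zero-setup path is a UV script on Hugging Face Jobs, which isn't reproduced here. As a rough local equivalent, here is a hedged sketch of serving the model with vLLM's OpenAI-compatible server and sending it a scanned card; the repo ID rednote-hilab/dots.ocr, the serve flags, and the prompt are assumptions.

```python
# Hedged sketch: query dots.ocr through vLLM's OpenAI-compatible server.
# Launch the server first (shell, model ID assumed):
#   vllm serve rednote-hilab/dots.ocr --trust-remote-code
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("library_card.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="rednote-hilab/dots.ocr",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            {"type": "text", "text": "Extract all text from this document."},
        ],
    }],
)
print(resp.choices[0].message.content)
```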
Hugging Face (Twitter)
RT @ClementDelangue: The community added 1 million new repos (models, datasets, spaces) on @huggingface in the past 90 days! For context, it took six years to get to the first million repositories.
That's now a new repository created on HF every 8 seconds.
What's cool is that:
- 100% are now powered by Xet, our technology for faster, cheaper, more efficient data transfer. Lots of exciting features will be unlocked by this, like the in-browser GGUF editing we just announced.
- 40% are private repositories, which shows that people are increasingly using the Hub internally within their organizations to share weights, datasets and demos. Enterprise Hub subscriptions are our fastest-growing line of revenue.
The next milestone is to reach 10 million total repositories! Ultimately there will be more AI repositories than code repositories, so that everyone can build AI thanks to open source.
Let's go!
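A quick back-of-the-envelope check of the cadence claimed above (1 million new repos over 90 days):

```python
# Sanity-check the "one new repo every ~8 seconds" figure.
new_repos = 1_000_000
days = 90
seconds_per_repo = days * 24 * 60 * 60 / new_repos
print(f"{seconds_per_repo:.1f} s per new repository")  # ~7.8 s, i.e. roughly one every 8 seconds
```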
Hugging Face (Twitter)
RT @xenovacom: Introducing Granite Docling WebGPU 🐣 State-of-the-art document parsing 100% locally in your browser! 🤯
🔐 No data sent to a server (private & secure)
💰 Completely free... forever!
🔂 Docling ecosystem enables conversion to HTML, Markdown, JSON, and more!
Try out the demo! 👇
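The demo itself runs entirely in the browser via WebGPU. For the Docling-ecosystem conversion point above, here is a hedged server-side sketch using the Python docling package; the method names follow the docling docs as I recall them and should be treated as an assumption.

```python
# Hedged sketch: convert a document with the Python `docling` package
# (same ecosystem as the browser demo, run server-side instead).
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("report.pdf")        # local path or URL

print(result.document.export_to_markdown())     # Markdown
print(result.document.export_to_html())         # HTML
print(result.document.export_to_dict())         # JSON-serializable dict
```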
Hugging Face (Twitter)
RT @ClementDelangue: Very cool paper! You can discuss with the author here: https://huggingface.co/papers/2510.04871
Hugging Face (Twitter)
RT @wjb_mattingly: I'm a huge fan of hf jobs. Working on a way to do the same thing on Yale's HPC. Should work for most HPCs using Slurm. It handles dataset creation, job creation, job submission, ssh, etc. This is in no small part thanks to @vanstriendaniel's great work and the team at @huggingface.
Hugging Face (Twitter)
RT @ClementDelangue: The @LeRobotHF team is studying the @UnitreeRobotics G1 today in case you have any questions or fun stuff you want us to try!
Hugging Face (Twitter)
RT @AntLingAGI: 🚀 Ling-1T — Trillion-Scale Efficient Reasoner
Introducing Ling-1T, the first flagship non-thinking model in the Ling 2.0 series —
1 trillion total parameters with ≈50B active per token, trained on 20T+ reasoning-dense tokens.
Highlights
→ Evo-CoT curriculum + Linguistics-Unit RL for scalable reasoning
→ Strong efficiency–accuracy balance on complex reasoning tasks
→ Advanced visual understanding + front-end code generation via Syntax–Function–Aesthetics reward
→ Emergent tool-use ability (≈ 70 %) with minimal instruction tuning
→ FP8 mixed-precision + Ling Scaling Law → efficient trillion-scale training
Efficient Thinking · Precise Reasoning
Ling-1T extends the Pareto frontier of reasoning accuracy vs. cost —
a new milestone in open-source trillion-scale intelligence.
Hugging Face (Twitter)
RT @LeRobotHF: 🚀 Big update for LeRobot!
We've launched a new plugin system to support third-party hardware. Now you can integrate any robot, camera, or teleoperator with a simple 'pip install', no need to modify the core library.
This makes open robotics development more extensible, scalable, and community-friendly.
Learn how to create your own plugin: https://huggingface.co/docs/lerobot/integrate_hardware#using-your-own-lerobot-devices-
Hugging Face (Twitter)
RT @Thom_Wolf: LeRobot becoming an easy-to-install alternative to ROS (Robot Operating System) https://twitter.com/LeRobotHF/status/1975930970575397332#m
Hugging Face (Twitter)
RT @mervenoyann: we're celebrating halloween with @togethercompute at @huggingface 🤗🎃
join us in this fine-tuning workshop at our Paris office 🇫🇷
we'll have speakers from Together and our own @SergioPaniego to talk about fine-tuning & alignment 🛠️
find detailed agenda on the next one ⤵️
Hugging Face (Twitter)
RT @reach_vb: The Hugging Face Hub team is on a tear recently:
> You can create custom apps with domains on spaces
> Edit GGUF metadata on the fly
> 100% of the Hub is powered by Xet - faster, more efficient
> Responses API support for ALL Inference Providers
> MCP-UI support for HF MCP Server
> Search papers based on the Org
> Showcase repository size on the UI
and a lot more - excited for the coming weeks/months as we continue to improve the overall UX! 🤗
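On the "Responses API support for ALL Inference Providers" item above, a hedged sketch of what that looks like from the OpenAI Python client pointed at the Hugging Face router; the base URL, endpoint, and model ID are assumptions, not taken from the tweet.

```python
# Hedged sketch: OpenAI Responses API against Hugging Face Inference Providers.
# Base URL and model ID are assumptions; requires an HF token in the HF_TOKEN env var.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",   # assumed OpenAI-compatible router endpoint
    api_key=os.environ["HF_TOKEN"],
)

response = client.responses.create(
    model="openai/gpt-oss-20b",                    # any model served by an Inference Provider
    input="Summarize what the Xet storage backend does in one sentence.",
)
print(response.output_text)
```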
Hugging Face (Twitter)
RT @victormustar: Microsoft did something interesting here 👀
“Unlike typical LLMs that are trained to play the role of the "assistant" in conversation, we trained UserLM-8b to simulate the “user” role in conversation”
https://huggingface.co/microsoft/UserLM-8b
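A hedged sketch of what "simulating the user role" could look like with transformers: you describe the user's underlying intent and let the model produce the next user turn. The prompting convention below is an assumption, not the model card's documented format; check the card before relying on it.

```python
# Hedged sketch: generate the *user* side of a conversation with microsoft/UserLM-8b.
# The system-prompt convention here is an assumption; see the model card for the documented format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/UserLM-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a user who wants help debugging a flaky pytest test."},
    {"role": "assistant", "content": "Hi! What can I help you with today?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```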
Hugging Face (Twitter)
RT @ClementDelangue: Refreshing to see @neuphonicspeech, a London-based seed startup that raised just a few million, top the most trending models on @huggingface today. They manage to stand out amongst 2M public models & giant corporations from the US and China.
Good example that everyone can contribute meaningfully to open-source (and get great visibility and credibility thanks to it) no matter their size, location or compute budgets. We need more of this!
Hugging Face (Twitter)
RT @ClementDelangue: So proud to see Reachy Mini named one of the Best Inventions of 2025 by @TIME!
Huge credit to the @pollenrobotics and @huggingface teams, turning a concept into thousands of units sold and shipped in under 6 months.
We might not be as slick as some other robotics companies (we sure don't do such good marketing videos and demos), but if we hit 100,000 Reachy Minis next year and 1 million by 2027, we’ll have a real shot at transforming robotics and AI through open-source and collaboration.
We’re just getting started 🦾🦾🦾
Hugging Face (Twitter)
RT @xeophon_: nvidia is the western qwen in terms of open releases but yall are not ready for this conversation