Hugging Face (Twitter)
RT @ClementDelangue: Xet by Hugging Face is the most important AI technology that nobody is talking about!
Under the hood, it now powers 5M Xet-enabled AI models & datasets on HF, which see hundreds of terabytes of uploads and downloads every single day.
What makes it super powerful is that it massively speeds up & reduces costs of data transfer thanks to methods like content-defined chunking (CDC). Instead of treating a file as an indivisible unit, CDC breaks files down into variable-sized chunks, using the data to define boundaries.
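To make the CDC idea concrete, here is a minimal hedged sketch (a Gear-style rolling hash with min/avg/max chunk sizes; an illustration of the technique, not Xet's actual algorithm):

```python
# A minimal sketch of content-defined chunking (CDC) with a Gear-style
# rolling hash -- illustrative only, not Xet's implementation. Because
# boundaries depend on the bytes themselves, inserting data near the
# start of a file only reshuffles nearby chunks; later chunks still dedupe.
import hashlib
import random

MIN_CHUNK, AVG_CHUNK, MAX_CHUNK = 2048, 8192, 65536
MASK = AVG_CHUNK - 1  # boundary when the low bits of the hash are all zero

rng = random.Random(42)  # fixed seed: boundaries reproducible across runs
GEAR = [rng.getrandbits(64) for _ in range(256)]

def chunks(data: bytes):
    """Yield variable-sized chunks whose boundaries the content defines."""
    start, h = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFFFFFFFFFF
        size = i + 1 - start
        if (size >= MIN_CHUNK and (h & MASK) == 0) or size >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

def dedup_upload(data: bytes, store: dict) -> list[str]:
    """'Upload' only chunks the store has never seen; return their keys."""
    keys = []
    for c in chunks(data):
        k = hashlib.sha256(c).hexdigest()
        store.setdefault(k, c)  # transfer happens only for new keys
        keys.append(k)
    return keys
```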
That's what allows @huggingface to offer a platform for 10 million AI builders in open-source at a fraction of the cost.
Thanks @xetdata team!
Hugging Face (Twitter)
RT @ClementDelangue: Granite Docling by @IBM is #3 trending on @huggingface.
This is a multimodal Image-Text-to-Text model engineered for efficient document conversion. It preserves Docling's core features while integrating seamlessly with DoclingDocuments for full compatibility.
It builds upon the IDEFICS3 architecture but introduces two key modifications: it replaces the vision encoder with siglip2-base-patch16-512 and substitutes the language model with a Granite 165M LLM. Try out our Granite-Docling-258M demo today.
License: Apache 2.0
Granite-Docling-258M is fully integrated into the Docling pipelines, carrying over existing features and introducing several powerful new ones, including:
🔢 Enhanced Equation Recognition: More accurate detection and formatting of mathematical formulas
🧩 Flexible Inference Modes: Choose between full-page inference and bbox-guided region inference
🧘 Improved Stability: Tends to avoid...
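Based on the architecture description above, here is a hedged sketch of running the model through transformers' Vision2Seq Auto classes; the repo id and the conversion prompt are assumptions from this post, so check the model card for exact usage:

```python
# Hedged sketch of document conversion with an Image-Text-to-Text model
# like Granite-Docling; repo id and prompt wording are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "ibm-granite/granite-docling-258M"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = Image.open("page.png")  # a scanned document page
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Convert this page to docling."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=1024)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```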
Hugging Face (Twitter)
RT @Alibaba_Qwen: 🚀 Introducing Qwen3-Omni — the first natively end-to-end omni-modal AI unifying text, image, audio & video in one model — no modality trade-offs!
🏆 SOTA on 22/36 audio & AV benchmarks
🌍 119 languages for text / 19 for speech input / 10 for speech output
⚡ 211ms latency | 🎧 30-min audio understanding
🎨 Fully customizable via system prompts
🔗 Built-in tool calling
🎤 Open-source Captioner model (low-hallucination!)
🌟 What’s Open-Sourced?
We’ve open-sourced Qwen3-Omni-30B-A3B-Instruct, Qwen3-Omni-30B-A3B-Thinking, and Qwen3-Omni-30B-A3B-Captioner to empower developers to explore a variety of applications, from instruction-following to creative tasks.
Try it now 👇
💬 Qwen Chat: https://chat.qwen.ai/?models=qwen3-omni-flash
💻 GitHub: github.com/QwenLM/Qwen3-Omni
🤗 HF Models: https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe
🤖 MS Models: https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f
🎬 Demo: https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo
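The system-prompt customization mentioned above can be sketched with huggingface_hub's OpenAI-style chat API; whether this exact repo id is served by an inference provider is an assumption, and Qwen Chat plus the demo Space linked above are the confirmed entry points:

```python
# Minimal hedged sketch: steering the model with a system prompt through
# huggingface_hub's chat API; the serving id is an assumption.
from huggingface_hub import InferenceClient

client = InferenceClient()  # reads HF_TOKEN from the environment
resp = client.chat_completion(
    model="Qwen/Qwen3-Omni-30B-A3B-Instruct",  # assumed serving id
    messages=[
        {"role": "system", "content": "Answer in exactly one sentence."},
        {"role": "user", "content": "What does 'natively end-to-end omni-modal' mean?"},
    ],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```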
Hugging Face (Twitter)
RT @AdinaYakup: 3 releases in one day 🤯 just before Alibaba Cloud’s annual conference! @Alibaba_Qwen is on fire 🔥
huggingface.co/Qwen
✨ Qwen3 Omni: End-to-end omni model
✨ Qwen3 TTS: Supports CN/EN/IT/FR + 10 langs
✨ Qwen-Image-Edit-2509: Big upgrade from the previous version
Excited to see what’s coming in the next 3 days 👀
Hugging Face (Twitter)
RT @amir_mahla: LET’S GOOO 🔥 Just released Smol2Operator, a full open-source recipe for turning a 2.2B model into an agentic GUI coder, and all the tools you need to build your own 🫡
Hugging Face (Twitter)
RT @ClementDelangue: Now #1 - with the holy trinity of trending artefacts that @IBM either led (Docling) or contributed to (finepdfs).
DocumentAI is back in fashion! https://twitter.com/ClementDelangue/status/1970225167939879088#m
Hugging Face (Twitter)
RT @ben_burtenshaw: too much new learning material! we're releasing a few chapters of hard study on post-training AI models. they cover all the major aspects, with more to come.
- Evaluating Large Language Models on benchmarks and custom use cases
- Preference Alignment with DPO (see the sketch below)
- Fine-tuning Vision Language Models for tasks like DocQA and browser control
- Parameter-Efficient Fine-Tuning
- Supervised Fine-Tuning of LLMs (released two weeks ago)
All this material is completely free. It all runs on Colab or the Hub. And you can get certificates for each chapter of the course!
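For the DPO chapter mentioned in the list above, here is a hedged sketch of the preference-alignment step using TRL's DPOTrainer; the model and dataset ids are illustrative picks, not necessarily the course's own notebooks:

```python
# Hedged sketch of DPO preference alignment with TRL; ids are illustrative.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # small enough for a free Colab GPU
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO needs preference pairs: a prompt plus "chosen" and "rejected" answers.
train = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:1000]")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-demo", beta=0.1, per_device_train_batch_size=2),
    train_dataset=train,
    processing_class=tokenizer,
)
trainer.train()  # the reference model is cloned internally by default
```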
Hugging Face (Twitter)
RT @alonsosilva: VB @reach_vb from @huggingface presenting the State of Open Weights LLMs - 2025 at @aiDotEngineer #AIEParis
Hugging Face (Twitter)
RT @LeRobotHF: ✨ New in LeRobot ✨
We now officially support LIBERO, one of the largest open benchmarks for Vision-Language-Action (VLA) policies, with 130+ tasks 🤯
Why this matters:
🧩 Unified benchmark: evaluate any VLA policy under a common setup.
🛠️ Easy integration: just install lerobot and you're ready to run LIBERO tasks (see the sketch below).
📊 Baseline condition: LIBERO is now the default benchmark for adding new VLAs to LeRobot.
🔗 Dataset: https://huggingface.co/datasets/HuggingFaceVLA/libero
📚 Docs: https://huggingface.co/docs/lerobot/en/libero
This is a huge step toward building the go-to evaluation hub for VLAs.
Let’s make robot learning as open and reproducible as NLP & CV. 💪
👉 Try it out today, share your runs, and let’s push forward the frontier of embodied AI together!
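As referenced in the integration bullet above, a hedged sketch of loading the LIBERO dataset with LeRobot's dataset class; the import path follows recent LeRobot docs and may differ across versions:

```python
# Hedged sketch: loading the LIBERO dataset linked above. The import
# path may differ between lerobot versions (older releases nest it
# under lerobot.common.datasets).
from lerobot.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("HuggingFaceVLA/libero")  # repo id from the post
print(ds.num_episodes, ds.num_frames)
frame = ds[0]  # a dict of tensors: camera images, robot state, action
print(list(frame.keys()))
```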
Hugging Face (Twitter)
RT @alexandr_wang: new research from Meta FAIR: Code World Model (CWM), a 32B research model
we encourage the research community to experiment with this open-weight model!
pass@1 evals, for the curious:
65.8% on SWE-bench Verified
68.6% on LiveCodeBench
96.6% on Math-500
76.0% on AIME 2024
🧵
Hugging Face (Twitter)
RT @dylan_ebert_: 🎨 Mesh Palettizer
I made a simple tool that converts textured 3D models into solid-colored ones that share a single color atlas
🤗 Free and open source: https://huggingface.co/spaces/dylanebert/MeshPalettizer
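A hedged sketch of the palettize idea described above: quantize a texture down to a handful of solid colors so every texel snaps to a small shared palette. The actual Space may implement this differently (e.g., per-face colors remapped onto a tiny atlas image):

```python
# Hedged sketch: snap a model's texture to a small shared palette.
from PIL import Image

tex = Image.open("texture.png").convert("RGB")
palettized = tex.quantize(colors=16)  # median-cut to a 16-color palette
palettized.convert("RGB").save("texture_palettized.png")
```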
Hugging Face (Twitter)
RT @osanseviero: We just released SimpleQA Verified on Hugging Face 👀
A 1,000-prompt factuality benchmark designed to evaluate LLM knowledge, with balanced topics, removed bias, more challenging questions, and verified ground truths
https://huggingface.co/datasets/google/simpleqa-verified
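Loading the benchmark from the repo linked above takes two lines with 🤗 datasets; the split and column names are whatever the repo exposes, so the sketch prints them rather than assuming:

```python
# Load the benchmark and inspect its splits and columns.
from datasets import load_dataset

ds = load_dataset("google/simpleqa-verified")
print(ds)  # shows the available splits and their columns
```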
Hugging Face (Twitter)
RT @alexandr_wang: Please check out the technical report, model weights, and code:
➡️ Read the technical report: https://t.co/01M0PDhLSp
➡️ Download the open weights: huggingface.co/facebook/cwm
➡️ Download the code: https://github.com/facebookresearch/cwm
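Grabbing the open weights from the repo linked above is one call with huggingface_hub; this assumes any license gating on the model page has already been accepted for your token:

```python
# Fetch the weights locally; repo id is taken from this post.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("facebook/cwm")
print(local_dir)
```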
Hugging Face (Twitter)
RT @Xianbao_QIAN: FireRedTTS2 from Rednote, the most underrated open-source TTS, now has an official demo on @huggingface to try out.
Model & Demo link below:
Hugging Face (Twitter)
RT @ClementDelangue: We need better agent evaluations! Glad to have collaborated with @Meta Super Intelligence Lab to release Gaia2 and ARE!
GPT-5 (high) from @OpenAI is leading on execution, search, ambiguity, adaptability, and noise.
Kimi-K2 from @Kimi_Moonshot is the leading open-weight model.
Full blogpost: huggingface.co/blog/gaia2
Hugging Face (Twitter)
RT @NVIDIAAIDev: 🎊1M reasons to celebrate.👏
Our developer community has taken NVIDIA Cosmos Reason to more than 1M downloads on @huggingface & the top spot on the Physical Reasoning Leaderboard.
Join developers using Cosmos Reason to teach AI agents and robots to think like humans:
⚡ Get started with Cosmos Reason 1 NIM, an easy-to-use microservice for AI model deployment: https://catalog.ngc.nvidia.com/orgs/nim/teams/nvidia/containers/cosmos-reason1-7b?version=1
📈 See the leaderboard: https://huggingface.co/spaces/facebook/physical_reasoning_leaderboard
Hugging Face (Twitter)
RT @maximelabonne: We're releasing a collection of tiny task-specific models 🥳
Want to do data extraction, translation, RAG, tool use, or math on a Raspberry Pi? We got you covered! ✅
Here are a few examples ↓
Hugging Face (Twitter)
RT @ClementDelangue: Really cool! Evaluation dataset on HF of course: https://huggingface.co/datasets/openai/gdpval Would be interesting to create a leaderboard on hf.co/spaces as well! https://twitter.com/OpenAI/status/1971249374077518226#m
Hugging Face (Twitter)
RT @linguist_cat: I have a new blog post about the so-called “tokenizer-free” approach to language modeling and why it’s not tokenizer-free at all. I also talk about why people hate tokenizers so much!