Hugging Face (Twitter)
RT @Ali_TongyiLab: 1/7 We're launching Tongyi DeepResearch, the first fully open-source Web Agent to achieve performance on par with OpenAI's Deep Research with only 30B (Activated 3B) parameters! Tongyi DeepResearch agent demonstrates state-of-the-art results, scoring 32.9 on Humanity's Last Exam, 45.3 on BrowseComp, and 75.0 on the xbench-DeepSearch benchmark.
Hugging Face (Twitter)
RT @nathanhabib1011: 🚀 Just updated lighteval’s readme—can’t believe we’ve grown to cover ~7,000 tasks 😳
with top-tier multilingual support 🌍
llm as judge 🤖
multiturn evals 🗣️
coding benchmarks 🧑‍💻
Hugging Face (Twitter)
RT @MaziyarPanahi: Introducing 90+ open-source, state‑of‑the‑art biomedical and clinical zero‑shot NER models on @HuggingFace by @OpenMed_AI
Apache‑2.0 licensed and ready to use
Built on GLiNER and covering 12+ biomedical datasets
🧵 (1/6)
Hugging Face (Twitter)
RT @_fracapuano: We're releasing an updated dataset format for @LeRobotHF, and it is built for scale. LeRobotDataset:v3 supports multi-million episode datasets and streaming, enabling better performance across the board
Learn more:
huggingface.co
`LeRobotDataset:v3.0`: Bringing large-scale datasets to `lerobot`
Hugging Face (Twitter)
RT @Weyaxi: The @huggingface followers leaderboard is BACK after a month 🚀
Top users:
1. @TheBlokeAI
2. @lvminzhang
3. @mervenoyann
4. @bartowski1182
5. @akhaliq
6. @ylecun
7. @fffiloni
8. @xenovacom
9. @Teknium1
10. @maximelabonne
11. @TheEricHartford
Fixed thanks to @charlesbben 🙌
Hugging Face (Twitter)
RT @mshuaibii: Excited to present the FAIR Chemistry Leaderboard - a centralized space for our team’s community benchmark efforts. We’re kicking things off today with the OMol25 leaderboard!
📊Leaderboard: https://huggingface.co/spaces/facebook/fairchem_leaderboard
🖥️Code: https://github.com/facebookresearch/fairchem
Hugging Face (Twitter)
RT @mervenoyann: since I last asked this question, we listened to you and shipped a ton of it, so here it goes again:
what can we improve to make it easier to build with @huggingface Hub and open-source libraries? ✨
your opinions matter a ton!
Hugging Face (Twitter)
RT @Tu7uruu: 🚀 New dataset drop for speech & NLP folks!
OleSpeech-IV-2025-EN-AR-100 (100h)
🎤 Real, unprompted English convos
🗂️ Human transcripts + speaker turns
🔎 Overlaps & timestamps included
📂 Raw, uncompressed audio
Perfect for ASR, diarization & convo modeling 👌
Hugging Face (Twitter)
RT @ClementDelangue: Unitree's first open-source world-model on @huggingface!
UnifoLM-WMA-0 is Unitree's first open-source world-model–action architecture spanning multiple types of robotic embodiments, designed specifically for general-purpose robot learning.
Its core component is a world-model capable of understanding the physical interactions between robots and their environments.
This world-model provides two key functions: (a) Simulation Engine – operates as an interactive simulator to generate synthetic data for robot learning; (b) Policy Enhancement – connects with an action head and, by predicting future interaction processes with the world-model, further optimizes decision-making performance.
Link to the model: https://huggingface.co/unitreerobotics/UnifoLM-WMA-0
Hugging Face (Twitter)
RT @Xianbao_QIAN: @AntLingAGI Ling-Flash-2.0 from Ant Finance just dropped on @huggingface
- 100B MoE, 6.1B active (4.8B non-embedding)
- 128k context length
- Trained on 20T+ tokens
- Base model available too
- Great performance on reasoning tasks
- MIT license
Hugging Face (Twitter)
RT @bfl_ml: FLUX.1 Kontext [dev] Hackathon is live!
$10K+ in prizes, open worldwide. 7 days to experiment and surprise us. Create LoRAs, build workflows, or try something totally unexpected.
Run it locally or through our partners @NVIDIA_AI_PC @fal @huggingface
Registration link below 👇
Hugging Face (Twitter)
RT @DecartAI: We are building “Open Source Nano Banana for Video” - here is open source demo v0.1
We are open sourcing Lucy Edit, the first foundation model for text-guided video editing!
Get the model on @huggingface 🤗, API on @FAL, and nodes on @ComfyUI 🧵
Hugging Face (Twitter)
RT @Xianbao_QIAN: WAN 2.2 animate model & demo is now officially released on @huggingface
Hugging Face (Twitter)
RT @ariG23498: The new kid on the block for experiment tracking is trackio.
And here you have @abidlabs talk about it.
https://www.youtube.com/watch?v=BdS8FgBqNOM&si=bbrQ89X7677rontC
YouTube
Trackio: A DROP-IN Replacement for W&B that is open-source and 💯 free
This video provides an overview and demo of Trackio, a free experiment-tracking library that Hugging Face just released.
Install Trackio: pip install trackio
Documentation: https://huggingface.co/docs/trackio/index
Hugging Face (Twitter)
RT @abidlabs: BOOM! A new, free experiment tracking library with syntax identical to wandb, making it a trivial drop-in replacement
Hugging Face (Twitter)
RT @_akhaliq: moondream3-preview is out on Hugging Face
vision language model with a mixture-of-experts architecture (9B total parameters, 2B active)
delivering sota visual reasoning while still being efficient and deployment-friendly
vibe coded a quick app for it in anycoder
Hugging Face (Twitter)
RT @AdinaYakup: MiMo-Audio 🔊 Open audio model released by @Xiaomi
https://huggingface.co/collections/XiaomiMiMo/mimo-audio-68cc7202692c27dae881cce0
✨ 7B base & instruct - MIT license
✨ Pretrained on 100M+ hours
✨ Few-shot across speech & audio tasks
Hugging Face (Twitter)
RT @adibvafa: CodonTransformer, our open-source model on @huggingface that optimizes genes for protein expression, has passed 250,000+ downloads!
Hugging Face (Twitter)
RT @XiaomiMiMo: 👋 Say Hi to MiMo-Audio!
Our BREAKTHROUGH in general-purpose audio intelligence.
🎯 Scaling pretraining to 100M+ hours leads to EMERGENCE of few-shot generalization across diverse audio tasks!
🔥 Post-trained MiMo-Audio-7B-Instruct:
• crushes benchmarks: SOTA on MMSU, MMAU, MMAR, MMAU-Pro
• outperforms Gemini-2.5-Flash on audio understanding
• beats GPT-4o-Audio on complex reasoning tasks
💎 The best part? It's 100% OPEN-SOURCE
Everything from tokenizer to model to evaluations!
🤗 Try it in HF Space: https://huggingface.co/spaces/XiaomiMiMo/mimo_audio_chat
📝 Tech Blog: https://xiaomimimo.github.io/MiMo-Audio-Demo/