Hugging Face (Twitter)
RT @ClementDelangue: Beautiful work from @sundarpichai @demishassabis and team with open weights on HF:
I'm so excited about the application of AI for biology and chemistry, especially in the open like this for all to benefit! https://twitter.com/sundarpichai/status/1978507110477332582#m
huggingface.co
vandijklab/C2S-Scale-Gemma-2-27B · Hugging Face
Hugging Face (Twitter)
RT @HuggingPapers: Facebook just dropped HoneyBee, a massive new dataset for vision-language reasoning, on Hugging Face!
It contains 2.5M high-quality examples with chain-of-thought solutions, pushing VLM performance to new SOTA.
Hugging Face (Twitter)
RT @shreyasgite: Data collection is a high-value task. Even the Chancellor of Germany has to do his part. Friedrich Merz with @LeRobotHF SO100.
Hugging Face (Twitter)
RT @Xianbao_QIAN: How far has embodied AI gone?
Check out this first real-world VLA manipulation evaluation, run tirelessly on ARX, Franka, UR5, and Aloha arms.
The tasks in Table 30 are trivial for humans but still very difficult for robots. pi0.5 is leading but still scores <50%.
A long way to go! Link below:
Hugging Face (Twitter)
RT @Xianbao_QIAN: PaddleOCR-VL-0.9B is mind blowing and it supports 109 languages!
Check it out on HF demo:
Hugging Face (Twitter)
RT @reach_vb: BOOM: We've just re-launched HuggingChat v2 💬 - 115 open-source models in a single interface, stronger than ChatGPT 🔥
Introducing: HuggingChat Omni 💫
> Select the best model for every prompt automatically 🚀
> Automatic model selection for your queries
> 115 models available across 15 providers including @GroqInc, @CerebrasSystems, @togethercompute, @novita_labs, and more
Powered by HF Inference Providers — access hundreds of AI models using only world-class inference providers
Omni uses a policy-based approach to model selection (after experimenting with different methods). Credits to @katanemo_ for their small routing model: katanemo/Arch-Router-1.5B
Coming next:
• MCP support with web search
• File support
• Omni routing selection improvements
• Customizable policies
Try it out today at hf.co/chat 🤗
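The "policy-based approach to model selection" mentioned above can be illustrated with a minimal sketch. Everything below is hypothetical: the policy predicates and model names are placeholders, not the actual Omni configuration or the Arch-Router model.

```python
# Hypothetical sketch of policy-based routing: each policy is a predicate
# over the prompt plus the model it routes to. Model ids are placeholders.

POLICIES = [
    (lambda p: "def " in p or "```" in p, "example-org/code-model"),
    (lambda p: any(w in p.lower() for w in ("prove", "integral", "equation")),
     "example-org/math-model"),
]
DEFAULT_MODEL = "example-org/general-model"

def route(prompt: str) -> str:
    """Return the first model whose policy matches, else the default."""
    for predicate, model in POLICIES:
        if predicate(prompt):
            return model
    return DEFAULT_MODEL

print(route("Solve this integral for x"))      # matches the math policy
print(route("What's the capital of France?"))  # falls through to the default
```

A real router like Arch-Router-1.5B replaces the hand-written predicates with a small language model that classifies the prompt against natural-language policy descriptions, but the routing table structure is the same idea.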
Hugging Face (Twitter)
RT @vanstriendaniel: Not enough people know about/use PRs for datasets on @huggingface. For many dynamic datasets, this can be a good workflow for versioning datasets and improving them over time.
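For reference, a hedged sketch of that PR workflow using the `huggingface-cli` tool; the repo id and file paths below are placeholders.

```shell
# Propose a change to a dataset repo as a pull request instead of
# pushing directly to main (repo id and paths are placeholders).
huggingface-cli upload my-org/my-dataset ./data/train.jsonl data/train.jsonl \
    --repo-type dataset \
    --create-pr
```

The same thing is available in Python via `huggingface_hub.upload_file(..., repo_type="dataset", create_pr=True)`, which makes it easy to open versioned improvement PRs from a script.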
Hugging Face (Twitter)
RT @LeRobotHF: 🚀 New in LeRobot: Multi-GPU training is now supported!
We’ve integrated 🤗 Accelerate into our training pipeline, making it simple to scale your experiments across multiple GPUs with just one command.
Whether you’re fine-tuning policies or running large-scale robot learning, LeRobot now handles distributed training easily.
👉 PR: https://github.com/huggingface/lerobot/pull/2154
Let’s accelerate robot learning together ⚙️🤖
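Accelerate's standard launcher workflow looks roughly like this; a sketch only, since the script name and training flags below are illustrative placeholders, not LeRobot's exact CLI.

```shell
# One-time interactive setup of the distributed environment.
accelerate config

# Launch the same training script across multiple GPUs with one command.
# (train.py and its flags are placeholders for your actual entry point.)
accelerate launch --multi_gpu --num_processes 2 \
    train.py --batch_size 8
```

`--multi_gpu` and `--num_processes` are standard `accelerate launch` flags; the point of the integration is that the training script itself stays unchanged between single- and multi-GPU runs.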
Hugging Face (Twitter)
RT @HuggingPapers: ByteDance just released Sa2VA on Hugging Face.
This MLLM marries SAM2 with LLaVA for dense grounded understanding of images & videos,
offering SOTA performance in segmentation, grounding, and QA.
https://huggingface.co/ByteDance/Sa2VA-InternVL3-14B
Hugging Face (Twitter)
RT @abidlabs: Why did we build yet another experiment tracking library?
We built @TrackioApp because experiment tracking shouldn’t be complicated. Most tools are cloud-heavy, bloated, or hard to customize. Trackio is different: it’s lightweight, local-first, and free.
Run it on your machine, store logs in SQLite, visualize experiments instantly with a clean dashboard, or deploy online if you want. Embed dashboards anywhere, from blogs to internal docs. The API mirrors popular logging libraries, so you can switch without rewriting your code.
At under 5,000 lines of Python, Trackio is small, open-source, and designed for extensibility. Fork it, tweak it, add what matters to you. No limits, no lock-in, just fast, flexible experiment tracking for ML developers who want control.
Give it a star ⭐:
GitHub
GitHub - gradio-app/trackio: A lightweight, local-first, and 🆓 experiment tracking library from Hugging Face 🤗
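The "local-first, logs in SQLite" pattern described above can be sketched with the standard library. This is a toy illustration of the idea, not Trackio's actual API (Trackio itself mirrors the familiar init/log interface of popular trackers).

```python
# Toy local-first experiment tracker: metrics go straight into SQLite,
# so runs are queryable with plain SQL and need no server or cloud account.
import json
import sqlite3

class ToyTracker:
    def __init__(self, db_path=":memory:", run="default"):
        self.run = run
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS metrics "
            "(run TEXT, step INTEGER, payload TEXT)"
        )

    def log(self, metrics: dict, step: int):
        # Store each logged dict as a JSON payload keyed by run and step.
        self.conn.execute(
            "INSERT INTO metrics VALUES (?, ?, ?)",
            (self.run, step, json.dumps(metrics)),
        )
        self.conn.commit()

    def history(self):
        rows = self.conn.execute(
            "SELECT step, payload FROM metrics WHERE run = ? ORDER BY step",
            (self.run,),
        )
        return [(step, json.loads(payload)) for step, payload in rows]

tracker = ToyTracker(run="demo")
tracker.log({"loss": 0.9}, step=1)
tracker.log({"loss": 0.5}, step=2)
print(tracker.history())
```

Because everything lives in one SQLite file, a dashboard (or a blog embed) only needs read access to that file, which is the design choice the post is highlighting.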
Hugging Face (Twitter)
RT @engineerrprompt: Some interesting insights on open models/repos
- 1 million new open-source AI repos landed on @huggingface in only 90 days.
- @nvidia, historically a hardware vendor, is now the single largest contributor of open AI models (Nemotron, Cosmos, Gr00t, BioNeMo, Canary).
- Chinese labs have moved from followers to co-leaders: Alibaba's @Alibaba_Qwen, @deepseek_ai, Baidu, Tencent, MiniMax, Z.AI, @ByteDanceOSS, @Kimi_Moonshot, and Zhipu all ship updates that rival or beat Western models on public leaderboards.
- DeepSeek alone has >100k Hugging Face followers and is pushing iterative V3 drops.
- Fine-tuning is democratized: hundreds of LoRA adapters appear daily, letting individuals tune foundation models with only hundreds of samples.
- Europe's footprint is shrinking: outside @MistralAI's Magistral and Stability's image models, almost no EU players are visible in the open-source explosion.
- Daily download counts for top repos now...
Hugging Face (Twitter)
RT @Meituan_LongCat: 🎉 LongCat-Audio-Codec is officially OPEN SOURCED! 🚀
An audio codec solution optimized specifically for Speech LLMs.
Key Breakthroughs:
1. Dual Tokens: Semantic and acoustic tokens are extracted in parallel at a low frame rate (16.7 Hz / 60 ms). This ensures both efficient modeling and full information integrity.
2. Ultra-Efficiency: LongCat-Audio-Codec maintains high intelligibility even at an extremely low bitrate, such as 0.43 kbps.
3. Real-Time Ready: Features a low-latency streaming decoder architecture. Latency is controlled to the hundred-millisecond level for real-time interaction.
The integration of super-resolution in the decoder further enhances audio quality without extra models! This solution lowers technical barriers and optimizes resource efficiency for mobile/embedded Speech LLM deployment.
🔗 Code:
Github: https://github.com/meituan-longcat/LongCat-Audio-Codec
Huggingface: https://huggingface.co/meituan-longcat/LongCat-Audio-Codec
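The headline numbers above are internally consistent, which a quick back-of-the-envelope check shows. The assumptions here are mine: that "0.43 kbps" means 430 bits per second and that the 16.7 Hz frame rate applies to the token stream.

```python
# Sanity-check the LongCat-Audio-Codec numbers (assumed interpretation:
# 0.43 kbps = 430 bits/s, frames emitted at 16.7 Hz).
bitrate_bps = 430
frame_rate_hz = 16.7

frame_period_ms = 1000 / frame_rate_hz   # should match the stated 60 ms
bits_per_frame = bitrate_bps / frame_rate_hz

print(f"frame period: {frame_period_ms:.1f} ms")
print(f"bits per frame: {bits_per_frame:.1f}")
```

So each 60 ms frame carries only about 26 bits across both token streams, which is what makes the intelligibility claim at that bitrate notable.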
Hugging Face (Twitter)
RT @ClementDelangue: The main breakthrough of GPT-5 was to route your messages between a couple of different models to give you the best, cheapest & fastest answer possible.
This is cool but imagine if you could do this not only for a couple of models but hundreds of them, big and small, fast and slow, in any language or specialized for any task - all at inference time. This is what we're introducing with HuggingChat Omni, powered by over 100 open-source models including gpt-oss, deepseek, qwen, kimi, smolLM, gemma, aya and many more already!
And this is just the beginning: there are over 2 million open models, not only for text but for image, audio, video, biology, chemistry, time-series, and more on @huggingface!
Hugging Face (Twitter)
RT @victormustar: Introducing: HuggingChat Omni 💫
Select the best model for every prompt automatically 🚀
- Automatic model selection for your queries
- 115 models available across 15 providers
Available now to all Hugging Face users. 100% open source.
Hugging Face (Twitter)
RT @jadechoghari: Stay tuned with @NVIDIARobotics folks, we’re expanding @LeRobotHF’s sim capabilities! I can train & teleop my SO-101 from real → sim, drop custom assets, and collect data from home (or HF office). finally making progress on the robotics dataset problem. Project launches soon 👀
Hugging Face (Twitter)
RT @NVIDIAAIDev: This is what 3 million downloads looks like. 🥳
We owe a huge thank you to the AI community for making Llama Nemotron Nano VL 8B a favorite.
🤗 Try now on @huggingface: nvda.ws/4nWmwbV
Hugging Face (Twitter)
RT @MaziyarPanahi: the top two trending models on @huggingface are both for OCR!
document processing is a hot topic, kids! 😈
Hugging Face (Twitter)
RT @natolambert: Another roundup of the latest models with @xeophon_ !
Fun parts:
1. Methods for accurately monitoring HF 🤗downloads
2. GPT-OSS is mostly fixed and loved now
3. The perils of hybrid reasoning models
4. The continued degradation of open datasets
& usual surprises from China
Interconnects (@interconnectsai)
Latest open models (#15): It’s Qwen's world and we get to live in it, on CAISI's report, & GPT-OSS update
After a quiet month, Qwen is back in full force.
https://www.interconnects.ai/p/latest-open-models-15-its-qwens-world