Hugging Face (Twitter)
RT @Xianbao_QIAN: The new @TencentHunyuan image 2.1 model is really cool.
It reminds me of @Zai_org GLM 4.1. I love how these researchers are being humble and calling a great improvement "0.1".
Both model & demo released on @huggingface
Hugging Face (Twitter)
RT @tomaarsen: ModernBERT goes MULTILINGUAL!
One of the most requested models I've seen, @jhuclsp has trained state-of-the-art massively multilingual encoders using the ModernBERT architecture: mmBERT.
Stronger than existing models at their sizes, while also much faster!
Details in 🧵
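A quick way to kick the tires, as a hedged sketch: load mmBERT through the transformers fill-mask pipeline. The checkpoint id "jhu-clsp/mmBERT-base" is an assumption based on the announcement, so verify the exact name on the Hub.

```python
from transformers import pipeline

# Smoke-test the multilingual encoder via masked-language modelling.
# NOTE: the model id is an assumed name, not confirmed by the tweet.
fill = pipeline("fill-mask", model="jhu-clsp/mmBERT-base")
print(fill("Paris is the [MASK] of France."))
```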
Hugging Face (Twitter)
RT @adrgrondin: I gave SmolLM3 by @huggingface a voice 🗣️
Here's a demo of me talking with the model hands-free on iPhone, thanks to built-in voice activity detection
Everything runs fully on-device, powered by Apple MLX
Hugging Face (Twitter)
RT @daftengine: aaaaand we're live on @huggingface documentation! Thank you to @lhoestq, @vanstriendaniel and the Hugging Face team for all their help pushing this through and excited for our continued collaboration!
na2.hubs.ly/H010TDt0
#Daft #HuggingFace #Multimodal #OpenSource
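For context, a minimal sketch of what the documented integration enables: reading a Hub-hosted dataset straight into a Daft dataframe over the hf:// protocol. The dataset path below is a placeholder, not a real repo.

```python
import daft

# Read a Hugging Face dataset into a Daft dataframe via the hf:// protocol.
# The repo path is a placeholder for illustration only.
df = daft.read_parquet("hf://datasets/username/my_dataset")
df.show()
```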
Hugging Face (Twitter)
RT @vanstriendaniel: Visual-TableQA: Complex Table Reasoning Benchmark
- 2.5K tables with 6K QA pairs
- Multi-step reasoning over visual structures
- 92% human validation agreement
- Under $100 generation cost
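A hedged sketch for pulling the benchmark with the datasets library; the tweet doesn't give the repo id, so the one below is a placeholder to replace with the actual id from the Hub.

```python
from datasets import load_dataset

# Placeholder repo id: look up the real Visual-TableQA dataset id on the Hub.
ds = load_dataset("org-name/Visual-TableQA")
print(ds)  # splits and features for the 6K QA pairs over 2.5K tables
```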
Hugging Face (Twitter)
Our new experiment tracking library now supports logging images, videos, tables, and of course metrics. https://twitter.com/abidlabs/status/1965828375681142903#m
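Assuming the library referenced here is trackio (the linked post is from its author), a minimal sketch of its wandb-style logging loop; the metric names are made up for illustration.

```python
import trackio

# Minimal metric-logging loop using trackio's wandb-style API
# (init / log / finish). Images, videos, and tables go through the same
# log() call; the exact media helper types aren't named in the post,
# so they are omitted here.
trackio.init(project="demo-run")
for step in range(5):
    trackio.log({"step": step, "loss": 1.0 / (step + 1)})
trackio.finish()
```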
Hugging Face (Twitter)
RT @ClementDelangue: Super excited to bring hundreds of state-of-the-art open models (Kimi K2, Qwen3 Next, gpt-oss, Aya, GLM 4.5, Deepseek 3.1, Hermes 4, and dozens new ones every day) directly into @code & @Copilot, thanks to @huggingface inference providers!
This is powered by our amazing partners @CerebrasSystems, @FireworksAI_HQ, @Cohere_Labs, @GroqInc, @novita_labs, @togethercompute, and others who make this possible. 💪
Here's why this is different than other APIs:
Open weights - models you can truly own, so they'll never get nerfed or taken away from you
⚡ Multiple providers - automatically routing to get you the best speed, latency, and reliability
💸 Fair pricing - competitive rates with generous free tiers to experiment and build
Seamless switching - swap models on the fly without touching your code
🧩 Full transparency - know exactly what's running and customize it however you want
The future of AI copilots is open and this is a big first step!
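Outside the editor, the same providers are reachable from plain code: Inference Providers expose an OpenAI-compatible endpoint, so any OpenAI client can call the routed open models. A hedged sketch (the model id is one of the names mentioned above; per-provider availability may vary):

```python
import os
from openai import OpenAI

# Point an OpenAI client at the Hugging Face Inference Providers router.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],  # a Hugging Face access token
)
resp = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",  # one of the models named above
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(resp.choices[0].message.content)
```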
Hugging Face (Twitter)
RT @_akhaliq: Qwen3-Next-80B-A3B is out
80B params, but only 3B activated per token – 10x cheaper training, 10x faster inference than Qwen3-32B (esp. @ 32K+ context!)
Qwen3-Next-80B-A3B-Instruct approaches our 235B flagship.
Qwen3-Next-80B-A3B-Thinking outperforms Gemini-2.5-Flash-Thinking
both now available in anycoder for vibe coding
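As a hedged loading sketch: the MoE checkpoint loads like any causal LM in transformers, though memory is governed by the 80B total parameters, not the 3B active ones. This assumes a transformers version recent enough to include Qwen3-Next support.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard causal-LM loading; only ~3B params are active per token,
# but all 80B must fit in (sharded) memory.
model_id = "Qwen/Qwen3-Next-80B-A3B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
```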
Hugging Face (Twitter)
RT @reach_vb: You DO NOT want to miss this - All the tricks and optimisations used to make gpt-oss blazingly fast, all of it - in a blogpost (with benchmarks)! 🔥
We cover details ranging from MXFP4 quantisation to pre-built kernels, Tensor/Expert Parallelism, Continuous Batching, and much more
Bonus: We add extensive benchmarks (along with reproducible scripts)! ⚡
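For readers who want to try it before reading the post, a minimal sketch of running gpt-oss locally with transformers; it assumes a recent release where the MXFP4-quantized weights mentioned above load automatically on supported GPUs.

```python
from transformers import pipeline

# Load the 20B gpt-oss checkpoint; MXFP4 weights are used when supported
# (assumption: a recent transformers release).
pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)
out = pipe(
    [{"role": "user", "content": "One sentence on continuous batching, please."}],
    max_new_tokens=64,
)
print(out[0]["generated_text"])
```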
Hugging Face (Twitter)
RT @reach_vb: BOOM! Starting today you can use open source frontier LLMs in @code with HF Inference Providers! 🔥
Use your inference credits on SoTA LLMs like GLM 4.5, Qwen3 Coder, DeepSeek 3.1 and more
All of it packaged in one simple extension - try it out today 🤗
Hugging Face (Twitter)
RT @hanouticelina: Starting today, you can use Hugging Face Inference Providers directly in GitHub Copilot Chat on @code! 🔥
which means you can access frontier open-source LLMs like Qwen3-Coder, gpt-oss and GLM-4.5 directly in VS Code, powered by our world-class inference partners - @CerebrasSystems, @Cohere_Labs, @FireworksAI_HQ, @GroqInc, @novita_labs, @togethercompute & more!
give it a try today! 🧵
Hugging Face (Twitter)
RT @GroqInc: You can now access Groq models directly in VS @code with @huggingface.
Just BYOK.
Hugging Face (Twitter)
RT @art_zucker: Big news: we're moving towards the v5 release of transformers!
After months of teasing, it's finally happening
What to expect in v5:
✨ Cutting-edge stack – fast models, with fast kernels
✨ Smarter defaults – better out-of-the-box experience
✨ Cleaner codebase – warnings & legacy bits removed
The goal? To make transformers the most robust, modern, and developer-friendly ML library out there.
Stay tuned – it's going to be huge. 🔥
Hugging Face (Twitter)
RT @LucSGeorges: we've been pushing commits to transformers discreetly; time to talk about what we've been cooking the last few months:
⚡️ Continuous Batching is in transformers ⚡️
this will simplify, most notably, evaluation and your training loop: no need for extra dependencies or infra to get fast inference, and no need for convoluted code to update your weights
note that speed is currently not on par with the best inference frameworks and servers out there and probably never will be
the goal is *not* to become as fast: we want to complement the existing landscape with features like these, aiming for transformers to be the toolbox for tinkering with and building models
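A rough sketch of what this can look like in user code. The `generate_batch` entry point, its signature, and its return format are assumptions based on recent transformers releases, so treat this as illustrative rather than the confirmed API.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Illustrative only: the method name and signature are assumptions.
model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # any causal LM checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompts = ["Hello!", "One line on continuous batching, please."]
batch = [tok(p).input_ids for p in prompts]

# Requests are scheduled continuously instead of being padded into one
# static batch, so each request finishes as soon as it is done.
outputs = model.generate_batch(
    inputs=batch,
    generation_config=GenerationConfig(max_new_tokens=32, do_sample=False),
)
print(outputs)
```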
Hugging Face (Twitter)
RT @laurentsifre: We've been cooking this summer: Holo1.5 is here! SOTA UI localization + QA, 3× gains vs Qwen-2.5 VL 😳
Now up to 72B 🔥 – a strong base for computer-use agents like Surfer.
• Open weights on HuggingFace 🤗 https://huggingface.co/Hcompany/Holo1.5-7B
• Blog post: hcompany.ai/blog/holo-1-5
(1/n 🧵)
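A hedged loading sketch for the linked checkpoint: Holo1.5 is a vision-language model built on Qwen-2.5 VL, so the image-text-to-text auto class below is an assumption to confirm against the model card.

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

# Load the released 7B checkpoint; the auto class choice is an assumption.
model_id = "Hcompany/Holo1.5-7B"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
```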
Hugging Face (Twitter)
RT @reach_vb: Talking about the state of Open Source LLMs at @aiDotEngineer next week! 🔥
Quite excited for the talk and meeting everyone - let's goo! 🤗