Hugging Face
Hugging Face (Twitter)

RT @Xianbao_QIAN: The new @TencentHunyuan image 2.1 model is really cool.

It reminds me of @Zai_org GLM 4.1. I love how humble these researchers are, calling a great improvement just a 0.1 version bump

Both model & demo released on @huggingface
Hugging Face (Twitter)

RT @tomaarsen: ModernBERT goes MULTILINGUAL!

One of the most requested models I've seen, @jhuclsp has trained state-of-the-art massively multilingual encoders using the ModernBERT architecture: mmBERT.

Stronger than existing models at their sizes, while also much faster!

Details in 🧡
Hugging Face (Twitter)

RT @adrgrondin: I gave SmolLM3 by @huggingface a voice 🗣️

Here's a demo of me talking with the model hands-free on iPhone, thanks to built-in voice activity detection

Everything runs fully on-device, powered by Apple MLX
Hugging Face (Twitter)

RT @vanstriendaniel: Visual-TableQA: Complex Table Reasoning Benchmark

- 2.5K tables with 6K QA pairs
- Multi-step reasoning over visual structures
- 92% human validation agreement
- Under $100 generation cost
Hugging Face (Twitter)

Our 𝒻𝓇ℯℯ new experiment tracking library now supports logging images, videos, tables, and of course metrics. https://twitter.com/abidlabs/status/1965828375681142903#m
Hugging Face (Twitter)

RT @ClementDelangue: Super excited to bring hundreds of state-of-the-art open models (Kimi K2, Qwen3 Next, gpt-oss, Aya, GLM 4.5, Deepseek 3.1, Hermes 4, and dozens new ones every day) directly into @code & @Copilot, thanks to @huggingface inference providers!

This is powered by our amazing partners @CerebrasSystems, @FireworksAI_HQ, @Cohere_Labs, @GroqInc, @novita_labs, @togethercompute, and others who make this possible. 💪

Here's why this is different than other APIs:
🧠 Open weights - models you can truly own, so they'll never get nerfed or taken away from you
⚡ Multiple providers - automatically routing to get you the best speed, latency, and reliability
💸 Fair pricing - competitive rates with generous free tiers to experiment and build
🔄 Seamless switching - swap models on the fly without touching your code
🧩 Full transparency - know exactly what's running and customize it however you want

The future of AI copilots is open and this is a big first step! 🚀
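The multi-provider routing described above boils down to a simple policy: rank providers, try the best one, fall back on failure. Here is a hypothetical, self-contained sketch of that idea (provider names and latencies are made up; this is not the actual Inference Providers router):

```python
def route(providers, call):
    """providers: list of (name, avg_latency_ms); call: name -> response or raises.
    Try providers fastest-first; fall back to the next one on failure."""
    for name, _ in sorted(providers, key=lambda p: p[1]):
        try:
            return name, call(name)
        except RuntimeError:
            continue  # provider down or rate-limited: try the next one
    raise RuntimeError("no provider available")

providers = [("groq", 120), ("cerebras", 90), ("together", 200)]

def fake_call(name):
    if name == "cerebras":          # pretend the fastest provider is down
        raise RuntimeError("503")
    return f"{name}: ok"

chosen, resp = route(providers, fake_call)
# falls back to the next-fastest provider after "cerebras" fails
```

The point of the sketch is the automatic fallback: the caller never changes its code when a provider degrades, which is the "seamless switching" property the thread describes.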
Hugging Face (Twitter)

RT @_akhaliq: Qwen3-Next-80B-A3B is out

80B params, but only 3B activated per token → 10x cheaper training, 10x faster inference than Qwen3-32B (esp. at 32K+ context!)

Qwen3-Next-80B-A3B-Instruct approaches our 235B flagship.

Qwen3-Next-80B-A3B-Thinking outperforms Gemini-2.5-Flash-Thinking

both now available in anycoder for vibe coding
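The 80B-total / 3B-active split comes from Mixture-of-Experts sparsity: a router sends each token to only a few experts, so most parameters sit idle per token. A minimal illustrative sketch of top-k expert routing (toy dimensions, not the actual Qwen3-Next code):

```python
import numpy as np

def topk_moe(x, gate_w, experts, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x:       (tokens, dim) input activations
    gate_w:  (dim, n_experts) router weights
    experts: list of (dim, dim) expert weight matrices
    """
    logits = x @ gate_w                        # (tokens, n_experts) router scores
    top = np.argsort(logits, axis=-1)[:, -k:]  # k best experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, top[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()               # softmax over selected experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])  # only k experts ever run
    return out

rng = np.random.default_rng(0)
dim, n_experts = 8, 16
x = rng.normal(size=(4, dim))
gate_w = rng.normal(size=(dim, n_experts))
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
y = topk_moe(x, gate_w, experts, k=2)
# 2 of 16 experts run per token: only ~1/8 of expert parameters are active,
# the same idea behind 3B active out of 80B total.
```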
Hugging Face (Twitter)

RT @reach_vb: You DO NOT want to miss this - All the tricks and optimisations used to make gpt-oss blazingly fast, all of it - in a blogpost (with benchmarks)! 🔥

We cover details ranging from MXFP4 quantisation to pre-built kernels, Tensor/Expert Parallelism, Continuous Batching and much more

Bonus: We add extensive benchmarks (along with reproducible scripts)! ⚡
Hugging Face (Twitter)

RT @reach_vb: BOOM! Starting today you can use open source frontier LLMs in @code with HF Inference Providers! 🔥

Use your inference credits on SoTA LLMs like GLM 4.5, Qwen3 Coder, DeepSeek 3.1 and more

All of it packaged in one simple extension - try it out today 🤗
Hugging Face (Twitter)

RT @hanouticelina: Starting today, you can use Hugging Face Inference Providers directly in GitHub Copilot Chat on @code! 🔥

which means you can access frontier open-source LLMs like Qwen3-Coder, gpt-oss and GLM-4.5 directly in VS Code, powered by our world-class inference partners - @CerebrasSystems, @Cohere_Labs, @FireworksAI_HQ, @GroqInc, @novita_labs, @togethercompute & more!

give it a try today! 🧵👇
Hugging Face (Twitter)

RT @GroqInc: You can now access Groq models directly in VS @code with @huggingface.

Just BYOK. 🔑
Hugging Face (Twitter)

RT @art_zucker: 🚀 Big news: we're moving towards the v5 release of transformers!

After months of teasing, it's finally happening 🎉

What to expect in v5:
✨ Cutting-edge stack - fast models, with fast kernels
✨ Smarter defaults - better out-of-the-box experience
✨ Cleaner codebase - warnings & legacy bits removed

The goal? To make transformers the most robust, modern, and developer-friendly ML library out there.

Stay tuned - it's going to be huge. 🔥
Hugging Face (Twitter)

RT @LucSGeorges: we've been pushing commits to transformers discreetly, time to talk about what we've been cooking the last few months:

⚡️ Continuous Batching is in transformers ⚡️

this will simplify, most notably, evaluation and your training loop: no need for extra dependencies or infra to get fast inference, and no need for convoluted code to update your weights

note that speed is currently not on par with the best inference frameworks and servers out there and probably never will be

the goal is *not* to become as fast: we want to complement the existing landscape with features like these, aiming for transformers to be the toolbox for tinkering with and building models
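The scheduling idea behind continuous batching: instead of waiting for the whole batch to drain, a slot freed by a finished request is refilled from the queue after every decode step. A toy scheduler sketch (hypothetical, not the transformers implementation):

```python
from collections import deque

def continuous_batching(requests, max_batch=4):
    """requests: list of (request_id, n_tokens_to_generate).
    Returns the decode step at which each request finishes."""
    queue = deque(requests)
    active = {}              # request_id -> tokens still to generate
    finished = {}
    step = 0
    while queue or active:
        # admit new requests into free slots (the "continuous" part)
        while queue and len(active) < max_batch:
            rid, n = queue.popleft()
            active[rid] = n
        # one decode step: every active request generates one token
        step += 1
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                finished[rid] = step   # slot is freed immediately
                del active[rid]
    return finished

done = continuous_batching([("a", 2), ("b", 5), ("c", 1), ("d", 3), ("e", 2)])
# "e" enters as soon as "c" finishes, rather than waiting for the whole batch
```

Static batching would make "e" wait five steps for the first batch to drain; here it starts at step 2, which is exactly the throughput win the tweet describes for evaluation and training loops.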
Hugging Face (Twitter)

RT @laurentsifre: We've been cooking this summer: Holo1.5 is here! SOTA UI localization + QA, 3× gains vs Qwen-2.5 VL 🍳
Now up to 72B 💥 - a strong base for computer-use agents like Surfer.
• Open weights on HuggingFace 🤗 https://huggingface.co/Hcompany/Holo1.5-7B
• Blog post 📝 hcompany.ai/blog/holo-1-5
(1/n 🧡)
Hugging Face (Twitter)

RT @reach_vb: Talking about the state of Open Source LLMs at @aiDotEngineer next week! 🔥
Quite excited for the talk and meeting everyone - let's goo! 🤗
Quite excited for the talk and meeting everyone - let's goo! πŸ€—