Hugging Face
Hugging Face (Twitter)

RT @vanstriendaniel: Visual-TableQA: Complex Table Reasoning Benchmark

- 2.5K tables with 6K QA pairs
- Multi-step reasoning over visual structures
- 92% human validation agreement
- Under $100 generation cost
Hugging Face (Twitter)

Our free new experiment tracking library now supports logging images, videos, tables, and of course metrics. https://twitter.com/abidlabs/status/1965828375681142903#m
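The library in question appears to be trackio; below is a minimal sketch of its wandb-style logging flow, assuming the init/log/finish API plus a trackio.Image media wrapper as described in the announcement:

```python
# Minimal sketch of wandb-style experiment tracking with trackio (API assumed per the announcement).
import numpy as np
import trackio

trackio.init(project="demo-experiment", config={"lr": 3e-4, "epochs": 2})

for step in range(10):
    loss = 1.0 / (step + 1)
    trackio.log({"train/loss": loss, "step": step})  # scalar metrics, wandb-style

# Media logging is what the announcement adds; trackio.Image is assumed here
# to mirror wandb.Image (numpy array or PIL image plus an optional caption).
frame = (np.random.rand(64, 64, 3) * 255).astype("uint8")
trackio.log({"samples/frame": trackio.Image(frame, caption="random frame")})

trackio.finish()
```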
Hugging Face (Twitter)

RT @ClementDelangue: Super excited to bring hundreds of state-of-the-art open models (Kimi K2, Qwen3 Next, gpt-oss, Aya, GLM 4.5, Deepseek 3.1, Hermes 4, and dozens new ones every day) directly into @code & @Copilot, thanks to @huggingface inference providers!

This is powered by our amazing partners @CerebrasSystems, @FireworksAI_HQ, @Cohere_Labs, @GroqInc, @novita_labs, @togethercompute, and others who make this possible. 💪

Here’s why this is different than other APIs:
🧠 Open weights - models you can truly own, so they’ll never get nerfed or taken away from you
Multiple providers - automatically routing to get you the best speed, latency, and reliability
💸 Fair pricing - competitive rates with generous free tiers to experiment and build
🔁 Seamless switching - swap models on the fly without touching your code
🧩 Full transparency - know exactly what’s running and customize it however you want

The future of AI copilots is open and this is a big first step! 🚀
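Outside the VS Code integration, the same Inference Providers routing is reachable from the huggingface_hub Python client. A minimal sketch, assuming a recent huggingface_hub release and an HF_TOKEN set in the environment:

```python
# Sketch: calling an open model through Hugging Face Inference Providers.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="auto",  # route to whichever partner provider currently serves the model
    api_key=os.environ["HF_TOKEN"],
)

response = client.chat_completion(
    model="openai/gpt-oss-120b",  # any model served by Inference Providers works here
    messages=[{"role": "user", "content": "In two sentences, why do open weights matter?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Swapping models is just a change of the `model` string; the routing and billing stay the same.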
Hugging Face (Twitter)

RT @_akhaliq: Qwen3-Next-80B-A3B is out

80B params, but only 3B activated per token → 10x cheaper training, 10x faster inference than Qwen3-32B (esp. @ 32K+ context!)

Qwen3-Next-80B-A3B-Instruct approaches our 235B flagship.

Qwen3-Next-80B-A3B-Thinking outperforms Gemini-2.5-Flash-Thinking

both now available in anycoder for vibe coding
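For local experimentation, a sketch of loading the Instruct variant with the standard transformers text-generation pipeline, assuming a transformers version recent enough to include the Qwen3-Next architecture and hardware that can hold all 80B parameters (only ~3B are active per token, but the full weights still have to fit in memory):

```python
# Sketch: running Qwen3-Next-80B-A3B-Instruct with the transformers pipeline.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen3-Next-80B-A3B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])  # last turn of the returned chat
```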
Hugging Face (Twitter)

RT @reach_vb: You DO NOT want to miss this - All the tricks and optimisations used to make gpt-oss blazingly fast, all of it - in a blogpost (with benchmarks)! 🔥

We cover details ranging from MXFP4 quantisation to pre-built kernels, Tensor/Expert Parallelism, Continuous Batching, and much more

Bonus: We add extensive benchmarks (along with reproducible scripts)!
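As a companion to the blogpost, a minimal sketch of loading gpt-oss with transformers; whether the fast MXFP4 path is actually used depends on your transformers version, installed kernels, and GPU support, otherwise the weights are dequantized at load time:

```python
# Sketch: loading gpt-oss with transformers (MXFP4 kernel usage depends on your setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Give one sentence on continuous batching."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```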
Hugging Face (Twitter)

RT @reach_vb: BOOM! Starting today you can use open source frontier LLMs in @code with HF Inference Providers! 🔥

Use your inference credits on SoTA llms like GLM 4.5, Qwen3 Coder, DeepSeek 3.1 and more

All of it packaged in one simple extension - try it out today 🤗
Hugging Face (Twitter)

RT @hanouticelina: Starting today, you can use Hugging Face Inference Providers directly in GitHub Copilot Chat on @code! 🔥

which means you can access frontier open-source LLMs like Qwen3-Coder, gpt-oss and GLM-4.5 directly in VS Code, powered by our world-class inference partners - @CerebrasSystems, @Cohere_Labs, @FireworksAI_HQ, @GroqInc, @novita_labs, @togethercompute & more!

give it a try today! 🧵👇
Hugging Face (Twitter)

RT @art_zucker: 🚀 Big news: we’re moving towards the v5 release of transformers!

After months of teasing, it’s finally happening 🎉

What to expect in v5:
Cutting-edge stack — fast models, with fast kernels
Smarter defaults — better out-of-the-box experience
Cleaner codebase — warnings & legacy bits removed

The goal? To make transformers the most robust, modern, and developer-friendly ML library out there.

Stay tuned — it’s going to be huge. 🔥
Hugging Face (Twitter)

RT @LucSGeorges: we've been pushing commits to transformers discreetly, time to talk about what we've been cooking the last few months:

⚡️ Continuous Batching is in transformers ⚡️

this will simplify, most notably, evaluation and your training loop: no need for extra dependencies or infra to get fast inference, and no need for convoluted code to update your weights

note that speed is currently not on par with the best inference frameworks and servers out there and probably never will be

the goal is *not* to become as fast: we want to complement the existing landscape with features like these, aiming for transformers to be the toolbox for tinkering with and building models
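To make the idea concrete, here is a small, self-contained illustration of the continuous-batching scheduling pattern itself (a toy scheduler, not the transformers API): finished sequences leave the batch immediately and waiting requests are admitted mid-flight, so the batch stays full instead of idling until the slowest sequence ends.

```python
# Conceptual illustration of continuous batching (toy scheduler, not the transformers API).
# A dummy "decode step" produces one token per active request; requests finish after a
# per-request number of tokens, and new requests are admitted as soon as a slot frees up.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    rid: int
    tokens_needed: int
    generated: list = field(default_factory=list)

def continuous_batching(requests, max_batch_size=4):
    waiting = deque(requests)
    active, finished, step = [], [], 0
    while waiting or active:
        # Admit new requests whenever a slot is free (the key difference vs. static batching).
        while waiting and len(active) < max_batch_size:
            active.append(waiting.popleft())
        step += 1
        for req in active:
            req.generated.append(f"tok{step}")  # stand-in for one decode step of a real model
        # Retire finished sequences immediately instead of waiting for the whole batch.
        still_running = []
        for req in active:
            (finished if len(req.generated) >= req.tokens_needed else still_running).append(req)
        active = still_running
    return finished, step

done, steps = continuous_batching([Request(i, n) for i, n in enumerate([3, 9, 2, 5, 4, 7])])
print(f"served {len(done)} requests in {steps} decode steps")
```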
Hugging Face (Twitter)

RT @laurentsifre: We’ve been cooking this summer: Holo1.5 is here! SOTA UI localization + QA, 3× gains vs Qwen-2.5 VL 🍳
Now up to 72B 💥 — a strong base for computer-use agents like Surfer.
• Open weights on HuggingFace 🤗 https://huggingface.co/Hcompany/Holo1.5-7B
• Blog post 📝 hcompany.ai/blog/holo-1-5
(1/n 🧵)
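The weights are a plain Hub repository, so a minimal sketch of fetching them with huggingface_hub (loading them into a VLM class afterwards depends on the model card's instructions and is not shown):

```python
# Sketch: pulling the open Holo1.5-7B weights from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("Hcompany/Holo1.5-7B")
print(f"Holo1.5-7B weights downloaded to {local_dir}")
# Loading the checkpoint afterwards goes through the VLM class documented on the model card.
```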
Hugging Face (Twitter)

RT @reach_vb: Talking about the state of Open Source LLMs at @aiDotEngineer next week! 🔥

Quite excited for the talk and meeting everyone - let's goo! 🤗
Hugging Face (Twitter)

RT @Ali_TongyiLab: 1/7 We're launching Tongyi DeepResearch, the first fully open-source Web Agent to achieve performance on par with OpenAI's Deep Research with only 30B parameters (3B activated)! The Tongyi DeepResearch agent demonstrates state-of-the-art results, scoring 32.9 on Humanity's Last Exam, 45.3 on BrowseComp, and 75.0 on the xbench-DeepSearch benchmark.
Hugging Face (Twitter)

RT @nathanhabib1011: 🚀 Just updated lighteval’s readme—can’t believe we’ve grown to cover ~7,000 tasks 😳

with top-tier multilingual support 🌍
llm as judge 🤖
multiturn evals 🗣️
coding benchmarks 🧑‍💻
Hugging Face (Twitter)

RT @MaziyarPanahi: Introducing 90+ open-source, state‑of‑the‑art biomedical and clinical zero‑shot NER models on @HuggingFace by @OpenMed_AI

Apache‑2.0 licensed and ready to use

Built on GLiNER and covering 12+ biomedical datasets

🧵 (1/6)
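GLiNER models are zero-shot in the sense that entity labels are supplied as free text at inference time. A sketch using the gliner package with a known general-purpose checkpoint; the OpenMed model ids live on the Hub and can be swapped in:

```python
# Sketch: zero-shot NER with a GLiNER checkpoint.
from gliner import GLiNER

# General-purpose base shown so the example runs as-is; substitute one of the
# OpenMed biomedical/clinical checkpoints for domain-tuned results.
model = GLiNER.from_pretrained("urchade/gliner_medium-v2.1")

text = "The patient was started on 5 mg of amlodipine for hypertension."
labels = ["drug", "dosage", "disease"]  # zero-shot: labels are free text, no retraining

for ent in model.predict_entities(text, labels, threshold=0.5):
    print(f'{ent["label"]:>8} -> {ent["text"]}')
```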
Hugging Face (Twitter)

RT @_fracapuano: We're releasing an updated dataset format for @LeRobotHF, and it is built for scale. LeRobotDataset:v3 supports multi-million episode datasets and streaming, enabling better performance across the board

Learn more:
Hugging Face (Twitter)

RT @Weyaxi: The @huggingface followers leaderboard is BACK after a month 🚀

Top users:

1. @TheBlokeAI
2. @lvminzhang
3. @mervenoyann
4. @bartowski1182
5. @akhaliq
6. @ylecun
7. @fffiloni
8. @xenovacom
9. @Teknium1
10. @maximelabonne
11. @TheEricHartford

Fixed thanks to @charlesbben 🙌
Hugging Face (Twitter)

RT @mshuaibii: Excited to present the FAIR Chemistry Leaderboard - a centralized space for our team’s community benchmark efforts. We’re kicking things off today with the OMol25 leaderboard!

📊Leaderboard: https://huggingface.co/spaces/facebook/fairchem_leaderboard
🖥️Code: https://github.com/facebookresearch/fairchem
Hugging Face (Twitter)

RT @mervenoyann: since the last time I asked this question, we listened to you and shipped a ton of what you asked for, so here it goes again:

what can we improve to make it easier to build with @huggingface Hub and open-source libraries?

your opinions matter a ton!
Hugging Face (Twitter)

RT @Tu7uruu: 🚀 New dataset drop for speech & NLP folks!

OleSpeech-IV-2025-EN-AR-100 (100h)

🎤 Real, unprompted English convos
🗂️ Human transcripts + speaker turns
🔎 Overlaps & timestamps included
📂 Raw, uncompressed audio

Perfect for ASR, diarization & convo modeling 👌
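Assuming the corpus is published as a standard audio dataset on the Hub, it should be loadable with the datasets library. A sketch with a placeholder repo id (the owning org isn't given above), using streaming to avoid pulling 100 hours of raw audio up front:

```python
# Sketch: streaming an audio dataset from the Hub with the datasets library.
from datasets import load_dataset

# The repo id below is a placeholder; use the actual OleSpeech-IV-2025-EN-AR-100 repo on the Hub.
ds = load_dataset("<org>/OleSpeech-IV-2025-EN-AR-100", split="train", streaming=True)

for example in ds.take(2):
    # Exact column names (audio, transcript, speaker turns, timestamps) follow the dataset card.
    print({key: type(value).__name__ for key, value in example.items()})
```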