Hugging Face
Hugging Face (Twitter)

RT @MaziyarPanahi: need your help! list your top 5 datasets on @huggingface for rl training with verified answers.

- math
- code
- everyday stuff
Hugging Face (Twitter)

RT @MaziyarPanahi: 1/ shipping two synthetic med qa sets from @OpenMed_AI community, made by @mkurman88 (core contributor):

• med-synth qwen3-235b-a22b (2507)
• med-synth gemma 3 (27b-it)

datasets on @huggingface 👇
Hugging Face (Twitter)

RT @reach_vb: BOOM! Microsoft just released an upgraded VibeVoice Large ~10B Text to Speech model - MIT licensed 🔥

> Generate multi-speaker podcasts in minutes
> Works blazingly fast on ZeroGPU with H200 (FREE)

Try it out today! https://twitter.com/reach_vb/status/1960064616278417826#m
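For scripting the demo rather than clicking through it, here is a minimal sketch using gradio_client; the Space id, endpoint name, and argument order are assumptions to check against the Space's "Use via API" page.

```python
# Hedged sketch: calling a VibeVoice demo Space from Python.
# The Space id and api_name are placeholders, not confirmed values.
from gradio_client import Client

client = Client("microsoft/VibeVoice")  # hypothetical Space id
audio_path = client.predict(
    "Speaker 1: Welcome to the show!\nSpeaker 2: Happy to be here.",  # multi-speaker script
    api_name="/generate",  # hypothetical endpoint name
)
print(audio_path)  # path to the generated audio file
```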
Hugging Face (Twitter)

RT @ClementDelangue: If you think @Apple is not doing much in AI, you're getting blindsided by the chatbot hype and not paying enough attention!

They just released FastVLM and MobileCLIP2 on @huggingface. The models are up to 85x faster and 3.4x smaller than previous work, enabling real-time vision language model (VLM) applications! FastVLM can even do live video captioning 100% locally in your browser 🤯🤯🤯
Hugging Face (Twitter)

RT @eliebakouch: Super excited to announce that our research team at @huggingface will be doing an AMA on r/LocalLLaMA.

Come ask any questions to the team behind SmolLM, FineWeb and more! And who knows, maybe there’ll be a shiny new release to talk about?

Thursday 4th September, 8AM-11AM PST 🤗
Hugging Face (Twitter)

RT @reach_vb: 🎬 One prompt → a full video

GPT-5 + open models, stitched together with @OpenAI Codex + HF MCP Server 🤯
Hugging Face (Twitter)

RT @RisingSayak: ZeroGPU on 🤗 HF Spaces enables anyone to build delightful ML demos, benefiting from powerful compute. But because of its serverless nature, these demos are hard to optimize.

That CHANGES today 🪖

Use AoT compilation to melt our ZeroGPU servers 🔥

Details ⬇️
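A minimal sketch of what ahead-of-time compilation in a ZeroGPU Space can look like, assuming a PyTorch >= 2.5 AOTInductor workflow and the `spaces` package's `@spaces.GPU` decorator; the exact helpers shipped for ZeroGPU may differ.

```python
# Hedged sketch: AoT-compile once at startup, then serve requests without JIT warmup.
import torch
import spaces

# Stand-in for the real demo model.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
).eval()

compiled = None  # populated once inside a GPU-backed call

@spaces.GPU  # ZeroGPU attaches a GPU only while a decorated call runs
def warmup():
    global compiled
    gpu_model = model.cuda()
    example = (torch.randn(8, 1024, device="cuda"),)
    exported = torch.export.export(gpu_model, example)            # export the graph
    package = torch._inductor.aoti_compile_and_package(exported)  # AoT compile
    compiled = torch._inductor.aoti_load_package(package)         # load compiled artifact

@spaces.GPU
def infer(x: torch.Tensor) -> torch.Tensor:
    return compiled(x.cuda())

warmup()  # compile ahead of time, before any user request arrives
```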
Hugging Face (Twitter)

RT @LoubnaBenAllal1: Our science team at @huggingface will be doing an AMA on r/LocalLLaMA tomorrow at 8AM PST (5PM CET). The team members behind SmolLM, SmolVLM, FineWeb, and more will be present to answer all your questions!
Hugging Face (Twitter)

RT @Xianbao_QIAN: I'm very glad to see that the new translation model from @TencentHunyuan is now ranked 3rd. It's a reminder that small, domain-tuned models are more valuable than they appear.

An agentic stack needs both large and small models. Large models can handle planning and delegate particular tasks to sub-agents built on lean models. Small models are cheap, fast, and fine-tunable. They're not the opposite of large models but a complement to them.
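A rough sketch of the planner/sub-agent split described above, using huggingface_hub's InferenceClient; the model ids are placeholders, not an endorsement of specific checkpoints or providers.

```python
# Hedged sketch: a large model plans, a small domain-tuned model executes
# a narrow sub-task (translation). Model ids below are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient()

def plan(task: str) -> str:
    # "Large" planner model: break the task into steps.
    resp = client.chat_completion(
        model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder large model
        messages=[{"role": "user", "content": f"Plan the steps for: {task}"}],
    )
    return resp.choices[0].message.content

def translate(text: str) -> str:
    # "Small" sub-agent: cheap, fast, and fine-tunable for one job.
    resp = client.chat_completion(
        model="tencent/Hunyuan-MT-7B",  # placeholder small translation model
        messages=[{"role": "user", "content": f"Translate to English: {text}"}],
    )
    return resp.choices[0].message.content
```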
Hugging Face (Twitter)

RT @multimodalart: we hacked Wan 2.2 and discovered that it does first and last frame filling, works out of the box on 🧨 diffusers

i've built an app for it on @huggingface Spaces (which is powering our nano banana video mode too 🍌 🎬)
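A rough sketch of calling such a first/last-frame Space from Python with gradio_client; the Space id, endpoint name, and argument order are assumptions to verify on the Space's API page.

```python
# Hedged sketch: driving a Wan 2.2 first/last-frame Space via gradio_client.
# Space id and api_name are placeholders, not confirmed values.
from gradio_client import Client, handle_file

client = Client("multimodalart/wan-2-2-first-last-frame")  # hypothetical Space id
video_path = client.predict(
    handle_file("first_frame.png"),   # first frame
    handle_file("last_frame.png"),    # last frame
    "a smooth camera move between the two frames",  # prompt
    api_name="/generate",  # hypothetical endpoint name
)
print(video_path)  # path to the generated video
```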
Hugging Face (Twitter)

RT @QGallouedec: sept 4
8-11 am pst
@huggingface science team AMA
reddit r/LocalLlama
👽
Hugging Face (Twitter)

RT @moby763canary21: I'm really glad that people are using my @huggingface model. It's really cool to contribute to Open ML!

#ai #machinelearning #huggingface @ClementDelangue
Hugging Face (Twitter)

RT @lhoestq: "we made uploads to @huggingface using @ApacheSpark much faster than to any other cloud storage"

Spark is faster with Xet on Hugging Face for editing & publishing AI datasets 🔥

I explained how it works here👇

PS: it's 🤯
PS2: thumb up and sub👍🙏🤗🤗🤗
https://www.youtube.com/watch?v=vmwxVfye8fA?si=hp6Z3a28N0-bmZHF&t=2179
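For context, here is a minimal sketch of the Spark-to-Hub workflow, assuming the pyspark_huggingface package (which registers a "huggingface" Spark Data Source backed by Xet storage) is installed and HF_TOKEN is set; dataset ids are examples only.

```python
# Hedged sketch: read, edit, and publish an AI dataset with Spark + Hugging Face.
from pyspark.sql import SparkSession
import pyspark_huggingface  # noqa: F401  registers the "huggingface" data source

spark = SparkSession.builder.appName("hf-xet-upload").getOrCreate()

# Read a Hub dataset as a Spark DataFrame (example dataset id).
df = spark.read.format("huggingface").load("stanfordnlp/imdb")

# Edit: keep only positive reviews.
edited = df.filter(df.label == 1)

# Publish back to the Hub; Xet's chunk-level deduplication makes re-publishing
# edited data much faster than re-uploading whole files.
edited.write.format("huggingface").mode("overwrite").save("your-username/imdb-positive")
```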
Hugging Face (Twitter)

RT @lvwerra: The Hugging Face research team is doing an AMA on r/LocalLLaMA tomorrow! 🚀

Join if you are interested in:

> How did we get into the field? We cover a broad range of backgrounds and paths!
> How can you do impactful things while being more limited in resources than other labs?
> How do we decide which projects to work on when so many things are exciting?
> How does a fully remote team in a high velocity field even work?
> What's the most exciting thing coming in the next few months?
> What's your favourite optimizer and why is it Adam?
> How does Hugging Face make money?🤫

Or whatever else you want to ask - it's an AMA!
Hugging Face (Twitter)

RT @victormustar: Wan 2.2: First frame → Last frame: Upload both as images to get excellent results.

Amazing what open-source AI video can do now 😍

⬇️ Demo available on Hugging Face
Hugging Face (Twitter)

RT @dylan_ebert_: HunyuanWorld-Voyager - Explorable 3D World Generation

📹 World-consistent video diffusion
🌎 Long-range world exploration
⚙️ Scalable data engine

available on Hugging Face
Hugging Face (Twitter)

RT @LeRobotHF: 🤗 New arrivals at Hugging Face LeRobot! 🤗

We just got two fresh Unitree robots 🤖🐕, which means more robots will be added to the library 👀!

👉 Which additions would you like to see in LeRobot?
Hugging Face (Twitter)

RT @RisingSayak: You can now use flash-attention 3 through 🤗 `kernels`, skipping its long build times entirely 🔥

Comes with full `torch.compile` support with fullgraph traceability.

Time to melt those hoppers!
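A minimal sketch of what this can look like, assuming the kernels package's get_kernel entry point; the Hub repo id and the attention function's exact name, signature, and return value are assumptions to check on the kernel's model card.

```python
# Hedged sketch: fetch a prebuilt FlashAttention-3 kernel from the Hub instead
# of building it locally. Repo id and function signature are assumptions.
import torch
from kernels import get_kernel

flash_attn3 = get_kernel("kernels-community/flash-attn3")  # hypothetical repo id

q = torch.randn(1, 2048, 16, 128, dtype=torch.bfloat16, device="cuda")
k = torch.randn(1, 2048, 16, 128, dtype=torch.bfloat16, device="cuda")
v = torch.randn(1, 2048, 16, 128, dtype=torch.bfloat16, device="cuda")

# Assumed entry point; the actual kernel may expose a different name or also
# return auxiliary outputs (e.g. the log-sum-exp).
out = flash_attn3.flash_attn_func(q, k, v, causal=True)
```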