Hugging Face (Twitter)
RT @MaziyarPanahi: need your help! list your top 5 datasets on @huggingface for rl training with verified answers.
- math
- code
- everyday stuff
Hugging Face (Twitter)
RT @MaziyarPanahi: 1/ shipping two synthetic med qa sets from @OpenMed_AI community, made by @mkurman88 (core contributor):
• med-synth qwen3-235b-a22b (2507)
• med-synth gemma 3 (27b-it)
datasets on @huggingface 👇
Hugging Face (Twitter)
RT @reach_vb: BOOM! Microsoft just released an upgraded VibeVoice Large ~10B Text to Speech model - MIT licensed 🔥
> Generate multi-speaker podcasts in minutes ⚡
> Works blazingly fast on ZeroGPU with H200 (FREE)
Try it out today! https://twitter.com/reach_vb/status/1960064616278417826#m
Hugging Face (Twitter)
RT @ClementDelangue: If you think @Apple is not doing much in AI, you're getting blindsided by the chatbot hype and not paying enough attention!
They just released FastVLM and MobileCLIP2 on @huggingface. The models are up to 85x faster and 3.4x smaller than previous work, enabling real-time vision language model (VLM) applications! It can even do live video captioning 100% locally in your browser 🤯🤯🤯
Hugging Face (Twitter)
RT @eliebakouch: Super excited to announce that our research team at @huggingface will be doing an AMA on r/LocalLLaMA.
Come ask any questions to the team behind SmolLM, FineWeb and more! And who knows, maybe there’ll be a shiny new release to talk about?
Thursday 4th September, 8AM-11AM PST 🤗
Hugging Face (Twitter)
RT @reach_vb: 🎬 One prompt → a full video
GPT-5 + open models, stitched together with @OpenAI Codex + HF MCP Server 🤯
Hugging Face (Twitter)
RT @RisingSayak: ZeroGPU on 🤗 HF Spaces enables anyone to build delightful ML demos, benefitting from powerful compute. But, due to its serverless nature, it is hard to optimize these demos.
That CHANGES today 🪖
Use AoT compilation to melt our ZeroGPU servers 🔥
Details ⬇️
Hugging Face (Twitter)
RT @LoubnaBenAllal1: Our science team at @huggingface will be doing an AMA on r/LocalLLaMA tomorrow at 8AM PST (5PM CET). The team members behind SmolLM, SmolVLM, FineWeb, and more will be present to answer all your questions!
Hugging Face (Twitter)
RT @Xianbao_QIAN: I'm very glad to see that the new translation model from @TencentHunyuan is now ranked 3rd. It's a reminder that small, domain-tuned models are more valuable than they appear.
An agentic stack needs both large and small models. Large models can handle planning and leverage sub-agents built on lean models to perform a particular task. Small models are cheap, fast, and fine-tunable. They're not the opposite of large models but a complement to them.
Hugging Face (Twitter)
RT @multimodalart: we hacked Wan 2.2 and discovered that it does first and last frame filling, works out of the box on 🧨 diffusers
i've built an app for it on @huggingface Spaces (which is powering our nano banana video mode too 🍌 🎬)
Hugging Face (Twitter)
RT @QGallouedec: sept 4
8-11 am pst
@huggingface science team AMA
reddit r/LocalLlama
👽
Hugging Face (Twitter)
RT @moby763canary21: I'm really glad that people are using my @huggingface model. It's really cool to contribute to Open ML!
#ai #machinelearning #huggingface @ClementDelangue
Hugging Face (Twitter)
RT @lhoestq: "we made uploads to @huggingface using @ApacheSpark much faster than to any other cloud storage"
Spark is faster with Xet on Hugging Face for editing & publishing AI datasets 🔥
I explained how it works here👇
PS: it's 🤯
PS2: thumbs up and subscribe 👍🙏🤗🤗🤗
https://www.youtube.com/watch?v=vmwxVfye8fA&si=hp6Z3a28N0-bmZHF&t=2179
Hugging Face (Twitter)
RT @lvwerra: The Hugging Face research team is doing an AMA on r/LocalLLaMA tomorrow! 🚀
Join if you are interested in:
> How did we get into the field? We cover a broad range of backgrounds and paths!
> How can you do impactful things while being more limited in resources than other labs?
> How do we decide which projects to work on when so many things are exciting?
> How does a fully remote team in a high velocity field even work?
> What's the most exciting thing coming in the next few months?
> What's your favourite optimizer and why is it Adam?
> How does Hugging Face make money?🤫
Or whatever else you want to ask - it's an AMA!
Hugging Face (Twitter)
RT @victormustar: Wan 2.2: First frame → Last frame: Upload both as images to get excellent results.
Amazing what open-source AI video can do now 😍
⬇️ Demo available on Hugging Face
Hugging Face (Twitter)
RT @dylan_ebert_: HunyuanWorld-Voyager - Explorable 3D World Generation
📹 World-consistent video diffusion
🌎 Long-range world exploration
⚙️ Scalable data engine
available on Hugging Face
Hugging Face (Twitter)
RT @LeRobotHF: 🤗 New arrivals at Hugging Face LeRobot! 🤗
We just got two fresh Unitree robots 🤖🐕, which means more robots will be added to the library 👀!
👉 Which additions would you like to see in LeRobot?
Hugging Face (Twitter)
RT @natolambert: Pretty big vibe shift coming from a predominantly AI Safety oriented org to say "gatekeeping access to general-purpose technology is not a sustainable or proportionate response to low-confidence evidence of serious risk."
Pretty much my point for a few years.
AI Frontiers (@aif_media)
Precaution Shouldn't Keep Open-Source AI Behind the Frontier
Invoking speculative risks to keep our most capable models behind paywalls could create a new form of digital feudalism.
Ben Brooks — August 31, 2025
This article originally appeared in AI Frontiers.…
Hugging Face (Twitter)
RT @RisingSayak: You can now use flash-attention 3 through 🤗 `kernels`, skipping its long build times entirely 🔥
Comes with full `torch.compile` support with fullgraph traceability.
Time to melt those hoppers!