Hugging Face (Twitter)
RT @dylan_ebert_: This week, the race for world models began.
Matrix-Game 2.0: An Open-Source, Real-Time, and Streaming Interactive World Model
- 25 FPS live video synthesis
- Real-time inputs
- Trained on ~1,200 hours of video data
🤗 available now on Hugging Face
Hugging Face (Twitter)
RT @steren: Cloud Run 🤝 @Gradio
"gcloud run deploy" on a Gradio app now just works:
Hugging Face (Twitter)
RT @RisingSayak: If you have been terribly annoyed by the long cold starts when using `torch.compile` on Qwen-Image, please use regional compilation!
It cuts cold-start time by ~2x while retaining full compilation benefits 🔥
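The idea behind regional compilation can be sketched in a few lines: instead of calling `torch.compile` on the whole model (one large graph, hence the long cold start), compile only the small repeated block, so the compiler traces one region and reuses it across all identical layers. The block class and sizes below are illustrative stand-ins, not Qwen-Image's architecture, and `backend="eager"` is used only to keep the sketch dependency-light; the default inductor backend is what delivers the real speedups.

```python
# Sketch of regional compilation: compile the repeated block, not the full model.
import torch
import torch.nn as nn

class Block(nn.Module):
    """Illustrative stand-in for a repeated transformer block."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.ff(x)

model = nn.Sequential(*[Block() for _ in range(4)])

# Regional compilation: wrap each repeated block individually. Identical block
# structure lets the compiled artifact be reused instead of retraced per layer.
for i, block in enumerate(model):
    model[i] = torch.compile(block, backend="eager")

out = model(torch.randn(2, 32))
print(tuple(out.shape))  # → (2, 32)
```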
Hugging Face (Twitter)
RT @osanseviero: Some fun things people may have missed from Gemma 3 270M:
1. Out of 270M params, 170M are embedding params and 100M are transformer blocks. BERT from 2018 was larger 🤯
2. The vocabulary is quite large (262,144 tokens). This makes Gemma 3 270M a very good model to hyper-specialize for a task or a specific language, as it will work well even with less common tokens.
3. We released both a pre-trained and an instruct model, enabling you to fine-tune for your needs.
4. We collaborated closely with the developer ecosystem to get this out, allowing you to use Hugging Face transformers and transformers.js, Ollama, Kaggle, LM Studio, Docker, LiteRT, Vertex, llama.cpp, Keras, MLX, Gemma.cpp, UnSloth, JAX, Cloud Run, and more.
https://huggingface.co/google/gemma-3-270m
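The split in point 1 can be sanity-checked from the vocabulary size alone. Assuming a hidden width of 640 for this checkpoint (an assumption here, not stated in the post), the embedding table accounts for essentially all of the quoted ~170M:

```python
# Back-of-the-envelope check of the Gemma 3 270M parameter split:
# embedding parameters = vocabulary size x hidden width.
vocab_size = 262_144   # vocabulary size from the post
hidden_dim = 640       # assumed hidden width for this checkpoint
embedding_params = vocab_size * hidden_dim
print(f"{embedding_params:,}")  # → 167,772,160, i.e. the quoted ~170M
```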
Hugging Face (Twitter)
RT @abhinavexists_: Finally done deploying my upscaling model on @huggingface.
> implemented Multi-Recurrent Branches from scratch
> currently upscaling with a PSNR of 34.2 dB; will be improving it
hf deployment:
https://huggingface.co/Abhinavexists/SeeSharp
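For context, PSNR (the quality metric quoted above) is just log-scaled mean squared error against a ground-truth image. A minimal pure-Python version for 8-bit images; as it happens, a uniform 5-level pixel error lands right at the quoted figure:

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    flat lists of pixel intensities (8-bit scale by default)."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# A reconstruction that is off by 5 intensity levels at every pixel:
ref = [100, 120, 140, 160]
out = [105, 125, 145, 165]
print(round(psnr(ref, out), 1))  # → 34.2
```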
Hugging Face (Twitter)
RT @TencentHunyuan: We've heard the community! 📣
Following the open-source release of our Hunyuan 3D World Model 1.0, we're excited to introduce the new 1.0-Lite version, optimized for consumer-grade GPUs!
This is the first open-source, explorable world generation model compatible with CG pipelines, now more accessible than ever.
Key Technical Optimizations:
🔹Dynamic FP8 Quantization: We've cut VRAM requirements by 35%, from 26GB to under 17GB, making it easy to run on consumer GPUs without compromising performance.
🔹SageAttention Quantization: Our method quantizes the Q, K, and V matrices in the Transformer to INT8, combined with dynamic smoothing and hardware optimizations, to achieve an inference speedup of over 3x with less than 1% precision loss.
🔹Cache Algorithm Acceleration: By optimizing redundant time steps, we've significantly improved inference efficiency for a smoother user experience.
Now, developers can run a complex world model...
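The per-tensor idea behind an INT8 step like the one described above can be sketched in a few lines: scale a tensor by its own maximum into the signed 8-bit range, round, and rescale on the way back. This toy version (plain Python, with none of the smoothing or hardware tricks the post mentions) just shows why the precision loss stays small:

```python
# Toy dynamic (per-tensor) INT8 quantization: values are mapped to [-127, 127]
# with a scale derived from the tensor's own max, then dequantized for use.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # guard all-zero input
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.8, -1.2, 0.05, 2.4, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_rel_err = max(abs(a - b) for a, b in zip(weights, restored)) / max(abs(v) for v in weights)
print(f"max relative error: {max_rel_err:.2%}")  # well under the ~1% loss quoted
```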
Hugging Face (Twitter)
RT @xenovacom: Google just released their smallest Gemma model ever: Gemma 3 270M! 🤯
- Highly compact & efficient
- Strong instruction-following capabilities
- Perfect candidate for fine-tuning
It's so tiny that it can even run 100% locally in your browser with Transformers.js! 🤗
Hugging Face (Twitter)
RT @QGallouedec: 🚨 Big news! We decided that @huggingface's post-training library, TRL, will natively support training Vision Language Models 🖼️
This builds on our recent VLM support in SFTTrainer, and we're not stopping until TRL is the #1 VLM training library 🔥
More here: hf.co/blog/trl-vlm-alignment
Huge thanks to @mervenoyann, @SergioPaniego, and @ariG23498 🔥
Hugging Face (Twitter)
RT @Tu7uruu: Big update: Open ASR goes multilingual!
We're kicking off with German, French, Italian, Spanish & Portuguese.
English ASR has reached a strong level of maturity, so we're exploring new languages.
More languages coming soon... Which one should we add next?
Hugging Face (Twitter)
RT @Zai_org: Just saw GLM-4.5V is trending #2 on Hugging Face
https://huggingface.co/zai-org/GLM-4.5V
Hugging Face (Twitter)
RT @kadirnardev: We have released our LFM2-350M-based TTS model as open source. We have also released many different fine-tuned models.
GPU Platform: @hyperbolic_labs
Data: Emilia + Emilia Yodas (EN)
LLM Model: LFM2-350M @LiquidAI_
Disk and Space: @huggingface
I'm very happy to have released this model as open source. Many thanks to @VyvoSmartChain
#opensource #speech #tts #huggingface #lfm #gpu
Hugging Face (Twitter)
RT @jetbrains: We didnโt just build Mellum for us.
We open-sourced it for everyone.
Props to @huggingface for helping us get it out there.
Find out more about Mellum here: jb.gg/mbz8bq
Hugging Face (Twitter)
RT @mervenoyann: how does DINOv3 perceive objects? 👀
I dropped a mini visualizer: you can upload images, click on objects, and check
> patch similarities
> object boundaries
> most similar other objects
live on @huggingface Spaces
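Under the hood, a "patch similarities" view like this reduces to cosine similarity between the clicked patch's embedding and every other patch's. A toy sketch; the vectors here are illustrative stand-ins, where real ones would come from DINOv3's per-patch features:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy per-patch embeddings (stand-ins for DINOv3 features).
patches = [
    [1.0, 0.1, 0.0],   # patch 0: the clicked patch
    [0.9, 0.2, 0.1],   # patch 1: same object, similar features
    [0.0, 0.0, 1.0],   # patch 2: a different object
]
clicked = 0
sims = [cosine(patches[clicked], p) for p in patches]
# The most similar *other* patch is what the visualizer highlights.
best_other = max((i for i in range(len(patches)) if i != clicked), key=sims.__getitem__)
print(best_other)  # → 1
```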
Hugging Face (Twitter)
RT @reach_vb: NVIDIA ON A ROLL! Canary 1B and Parakeet TDT (0.6B) SoTA ASR models - Multilingual, Open Source 🔥
- 1B and 600M parameters
- 25 languages
- automatic language detection and translation
- word and sentence timestamps
- transcribe up to 3 hours of audio in one go
- trained on 1 million hours of data
- SoTA on the Open ASR Leaderboard
- CC-BY licensed 🔥
Available on Hugging Face, go check them out today!
Hugging Face (Twitter)
RT @Xianbao_QIAN: ToonComposer: you can now efficiently make cartoons on @huggingface for free
- Input: sketch-based keyframes + a color reference frame
- This @Alibaba_Wan-based model combines in-betweening & colorization
- The model can also imagine areas left blank with a prompt
- Result: saves up to 70% of manual work
Huge thanks to B&T Studio and Gudong Animation Studio for their permission to use their animation content (Big Fish & Begonia and Mr. Miao) for academic illustration.
Hugging Face (Twitter)
RT @reach_vb: BEST PART: they released the entire 1 MILLION hours of data publicly on Hugging Face 🤯 https://twitter.com/reach_vb/status/1957148807562723809#m
Hugging Face (Twitter)
RT @dylan_ebert_: I automated my research discovery.
Claude Code + Hugging Face MCP + Research MCP (my server)
It makes discovering and keeping track of all related research artifacts MUCH faster and easier.
here's how it works:
Hugging Face (Twitter)
RT @arundhati1504: Introducing Granary: a 1M-hour, open multilingual speech dataset, plus new #opensource ASR models.
🤗 Now on HuggingFace: nvda.ws/3Jg0BwV
Learn more: nvda.ws/41DVP2s bit.ly/4mGyMMA