Hugging Face (Twitter)
RT @TencentHunyuan: We've heard the community! 📣📣📣
Following the open-source release of our Hunyuan 3D World Model 1.0, we're excited to introduce the new 1.0-Lite version, optimized for consumer-grade GPUs!
This is the first open-source, explorable world generation model compatible with CG pipelines, now more accessible than ever.
Key Technical Optimizations:
🔹Dynamic FP8 Quantization: We’ve cut VRAM requirements by 35%—from 26GB to under 17GB—making it easy to run on consumer GPUs without compromising performance.
🔹SageAttention Quantization: Our method quantizes the Q, K, and V matrices in the Transformer to INT8, combined with dynamic smoothing and hardware optimizations, to achieve an inference speedup of over 3x with less than 1% precision loss.
🔹Cache Algorithm Acceleration: By optimizing redundant time steps, we've significantly improved inference efficiency for a smoother user experience.
Now, developers can run a complex world model...
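The quantization idea behind those numbers can be sketched in a few lines. Below is a minimal, illustrative take on symmetric per-tensor INT8 quantization (the `quantize_int8`/`dequantize` names are ours, and this toy omits what SageAttention actually adds: per-block scales, dynamic smoothing of K, and fused INT8 kernels):

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: map floats to [-127, 127]
    using a single scale derived from the tensor's max magnitude."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map INT8 codes back to approximate float values."""
    return [v * scale for v in q]

# Toy "attention row": quantize, dequantize, and measure relative error.
row = [0.81, -1.92, 0.44, 2.55, -0.13, 1.07]
q, s = quantize_int8(row)
restored = dequantize(q, s)
rel_err = max(abs(a - b) for a, b in zip(row, restored)) / max(abs(v) for v in row)
print(f"max relative error: {rel_err:.4f}")  # well under 1% for this range
```

The rounding error per element is at most half a quantization step (scale/2), which stays small relative to the tensor's dynamic range; that is the intuition behind the "less than 1% precision loss" claim.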
Hugging Face (Twitter)
RT @xenovacom: Google just released their smallest Gemma model ever: Gemma 3 270M! 🤯
🤏 Highly compact & efficient
🤖 Strong instruction-following capabilities
🔧 Perfect candidate for fine-tuning
It's so tiny that it can even run 100% locally in your browser with Transformers.js! 🤗
Hugging Face (Twitter)
RT @QGallouedec: 🚨 Big news! We decided that @huggingface’s post-training library, TRL, will natively support training Vision Language Models 🖼️
This builds on our recent VLM support in SFTTrainer — and we’re not stopping until TRL is the #1 VLM training library 🥇
More here 👉 hf.co/blog/trl-vlm-alignment
Huge thanks to @mervenoyann , @SergioPaniego , and @ariG23498 🔥
Hugging Face (Twitter)
RT @Tu7uruu: 🚀 Big update: Open ASR goes multilingual!
We’re kicking off with 🇩🇪🇫🇷🇮🇹🇪🇸🇵🇹 — German, French, Italian, Spanish & Portuguese.
English ASR has reached a strong level of maturity, so we’re exploring new languages 🌍
More languages coming soon... Which one should we add next?
Hugging Face (Twitter)
RT @Zai_org: Just saw GLM-4.5V is trending #2 on Hugging Face
https://huggingface.co/zai-org/GLM-4.5V
Hugging Face (Twitter)
RT @kadirnardev: We have released our LFM2-350M-based TTS model as open source 🚀 We have also released many different fine-tuned (FT) models.
GPU Platform: @hyperbolic_labs
Data: Emilia + Emilia Yodas(EN)
LLM Model: LFM2-350M @LiquidAI_
Disk and Space: @huggingface
I'm very happy to have released this model as open source. Many thanks to @VyvoSmartChain
#opensource #speech #tts #huggingface #lfm #gpu
Hugging Face (Twitter)
RT @jetbrains: We didn’t just build Mellum for us.
We open-sourced it for everyone.
Props to @huggingface for helping us get it out there 👌
Find out more about Mellum here: jb.gg/mbz8bq
Hugging Face (Twitter)
RT @mervenoyann: how does DINOv3 perceive objects? 👀
I dropped a mini visualizer: you can upload images, click on objects and check
> patch similarities
> object boundaries
> most similar other objects 🤗
live on @huggingface Spaces
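Under the hood, a patch-similarity view like this boils down to cosine similarity between the clicked patch's embedding and every other patch token from the ViT. A minimal pure-Python sketch (toy 3-dim embeddings stand in for real DINOv3 patch features; the function names are ours):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def patch_similarities(patches, clicked_idx):
    """Rank all patch indices by similarity to the clicked patch."""
    anchor = patches[clicked_idx]
    sims = [cosine(anchor, p) for p in patches]
    return sorted(range(len(patches)), key=lambda i: -sims[i])

# Toy 4-patch "feature map" (3-dim embeddings stand in for ViT patch tokens).
patches = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [0.0, 1.0, 0.0], [0.1, 0.9, 0.2]]
print(patch_similarities(patches, 0))  # clicked patch ranks itself first
```

Thresholding the same similarity map is one simple way to get rough object boundaries, since patches on the same object tend to share similar features.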
Hugging Face (Twitter)
RT @reach_vb: NVIDIA ON A ROLL! Canary 1B and Parakeet TDT (0.6B) SoTA ASR models - Multilingual, Open Source 🔥
- 1B and 600M parameters
- 25 languages
- automatic language detection and translation
- word and sentence timestamps
- transcribe up to 3 hours of audio in one go
- trained on 1 Million hours of data
- SoTA on Open ASR Leaderboard
- CC-BY licensed 💥
Available on Hugging Face, go check them out today!
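Word-level timestamps in ASR are typically produced by merging subword tokens (each carrying frame-aligned start/end times) into word spans. A toy sketch of that merge step, assuming a space-prefix convention for word starts (an illustration only; Canary/Parakeet expose their own alignment outputs through NeMo):

```python
def word_timestamps(tokens):
    """Merge (text, start, end) subword tokens into word-level spans.
    A token starting with a space begins a new word (assumed convention)."""
    words = []
    for text, start, end in tokens:
        if text.startswith(" ") or not words:
            words.append([text.strip(), start, end])
        else:
            words[-1][0] += text          # extend the current word's text
            words[-1][2] = end            # push its end time forward
    return [tuple(w) for w in words]

tokens = [("hel", 0.00, 0.12), ("lo", 0.12, 0.20),
          (" wor", 0.35, 0.50), ("ld", 0.50, 0.61)]
print(word_timestamps(tokens))
# [('hello', 0.0, 0.2), ('world', 0.35, 0.61)]
```

Sentence timestamps fall out of the same idea, grouping words up to punctuation instead of subwords up to spaces.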