Hugging Face (Twitter)
RT @abhinavexists_: Finally done deploying my upscaling model on @huggingface.
> implemented Multi-Recurrent Branches from scratch.
> currently upscaling at a PSNR of 34.2 dB; will be improving it.
hf deployment:
https://huggingface.co/Abhinavexists/SeeSharp
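For reference, PSNR is the fidelity metric quoted above. A minimal NumPy sketch of how it is computed for an upscaled image against its ground truth (the arrays here are illustrative, not from the SeeSharp repo):

```python
import numpy as np

def psnr(reference: np.ndarray, output: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Illustrative check: Gaussian noise with sigma ~5 on an 8-bit image lands near 34 dB.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (256, 256, 3)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 5.0, ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```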
Hugging Face (Twitter)
RT @TencentHunyuan: We've heard the community! 📣📣📣
Following the open-source release of our Hunyuan 3D World Model 1.0, we're excited to introduce the new 1.0-Lite version, optimized for consumer-grade GPUs!
This is the first open-source, explorable world generation model compatible with CG pipelines, now more accessible than ever.
Key Technical Optimizations:
🔹Dynamic FP8 Quantization: We’ve cut VRAM requirements by 35%—from 26GB to under 17GB—making it easy to run on consumer GPUs without compromising performance.
🔹SageAttention Quantization: Our method quantizes the Q, K, and V matrices in the Transformer to INT8, combined with dynamic smoothing and hardware optimizations, to achieve an inference speedup of over 3x with less than 1% precision loss.
🔹Cache Algorithm Acceleration: By optimizing redundant time steps, we've significantly improved inference efficiency for a smoother user experience.
Now, developers can run a complex world model...
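The SageAttention point above quantizes the attention inputs to INT8 with dynamic smoothing. A minimal PyTorch sketch of that idea (per-tensor dynamic symmetric quantization plus mean-subtraction smoothing of K); this illustrates the technique only and is not Hunyuan's actual kernel:

```python
import torch

def dynamic_int8(x: torch.Tensor):
    """Per-tensor dynamic symmetric INT8 quantization: int8 values + fp scale."""
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.clamp((x / scale).round(), -127, 127).to(torch.int8)
    return q, scale

B, H, T, D = 1, 8, 256, 64
Q, K = torch.randn(B, H, T, D), torch.randn(B, H, T, D)

# "Smoothing": subtract K's per-channel mean over tokens before quantizing, which
# shrinks outliers; softmax cancels the resulting per-row constant in the scores.
K = K - K.mean(dim=-2, keepdim=True)

qQ, sQ = dynamic_int8(Q)
qK, sK = dynamic_int8(K)

# Real kernels run the matmul on INT8 tensor cores; emulated here in fp32.
approx = (qQ.float() * sQ) @ (qK.float() * sK).transpose(-1, -2) / D ** 0.5
exact = Q @ K.transpose(-1, -2) / D ** 0.5
print(f"max abs error: {(approx - exact).abs().max():.4f}")  # small quantization error
```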
Hugging Face (Twitter)
RT @xenovacom: Google just released their smallest Gemma model ever: Gemma 3 270M! 🤯
🤏 Highly compact & efficient
🤖 Strong instruction-following capabilities
🔧 Perfect candidate for fine-tuning
It's so tiny that it can even run 100% locally in your browser with Transformers.js! 🤗
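The tweet highlights in-browser inference with Transformers.js; for a quick server-side smoke test, here is a minimal sketch with the Python transformers library instead. The model id is an assumption based on the announcement (an instruction-tuned 270M Gemma 3); verify it on the Hub:

```python
from transformers import pipeline

# Model id assumed from the announcement; check the Hub for the exact name.
generator = pipeline("text-generation", model="google/gemma-3-270m-it")

messages = [{"role": "user", "content": "Write a one-line haiku about tiny models."}]
out = generator(messages, max_new_tokens=48)
print(out[0]["generated_text"][-1]["content"])
```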
Hugging Face (Twitter)
RT @QGallouedec: 🚨 Big news! We decided that @huggingface’s post-training library, TRL, will natively support training Vision Language Models 🖼️
This builds on our recent VLM support in SFTTrainer — and we’re not stopping until TRL is the #1 VLM training library 🥇
More here 👉 hf.co/blog/trl-vlm-alignment
Huge thanks to @mervenoyann , @SergioPaniego , and @ariG23498 🔥
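For context, a minimal TRL sketch of the kind of VLM fine-tune the post refers to. The model and dataset ids are placeholders, and VLM runs may need extra arguments (e.g. a processor) covered in the linked blog post:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset/model ids; see hf.co/blog/trl-vlm-alignment for working recipes.
dataset = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-VL-3B-Instruct",  # any VLM checkpoint TRL supports
    args=SFTConfig(output_dir="vlm-sft", per_device_train_batch_size=1),
    train_dataset=dataset,
)
trainer.train()
```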
Hugging Face (Twitter)
RT @Tu7uruu: 🚀 Big update: Open ASR goes multilingual!
We’re kicking off with 🇩🇪🇫🇷🇮🇹🇪🇸🇵🇹 — German, French, Italian, Spanish & Portuguese.
English ASR has reached a strong level of maturity, so we’re exploring new languages 🌍
More languages coming soon... Which one should we add next?
Hugging Face (Twitter)
RT @Zai_org: Just saw GLM-4.5V is trending #2 on Hugging Face
https://huggingface.co/zai-org/GLM-4.5V
Hugging Face (Twitter)
RT @kadirnardev: We have released our LFM2-350M-based TTS model as open source 🚀 We have also released many different fine-tuned (FT) models.
GPU Platform: @hyperbolic_labs
Data: Emilia + Emilia Yodas(EN)
LLM Model: LFM2-350M @LiquidAI_
Disk and Space: @huggingface
I'm very happy to have released this model as open source. Many thanks to @VyvoSmartChain
#opensource #speech #tts #huggingface #lfm #gpu
Hugging Face (Twitter)
RT @jetbrains: We didn’t just build Mellum for us.
We open-sourced it for everyone.
Props to @huggingface for helping us get it out there 👌
Find out more about Mellum here: jb.gg/mbz8bq
Hugging Face (Twitter)
RT @mervenoyann: how does DINOv3 perceive objects? 👀
I dropped a mini visualizer: you can upload images, click on objects and check
> patch similarities
> object boundaries
> most similar other objects 🤗
live on @huggingface Spaces
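A minimal sketch of the patch-similarity idea behind such a visualizer, using a DINOv3 backbone through transformers. The checkpoint name and the register-token count are assumptions; any DINOv3 checkpoint with patch tokens works the same way:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Checkpoint name is an assumption; pick any DINOv3 backbone from the Hub.
name = "facebook/dinov3-vits16-pretrain-lvd1689m"
processor = AutoImageProcessor.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

inputs = processor(images=Image.open("input.jpg"), return_tensors="pt")
with torch.no_grad():
    tokens = model(**inputs).last_hidden_state[0]  # [cls + registers + patches, dim]

# Drop 1 CLS + 4 register tokens (count is checkpoint-dependent), keep patch tokens.
patches = torch.nn.functional.normalize(tokens[5:], dim=-1)
sim = patches @ patches.T   # cosine similarity between every pair of patches
clicked = 0                 # index of the "clicked" patch
heatmap = sim[clicked]      # similarity of all patches to the clicked one
print(heatmap.shape)
```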
Hugging Face (Twitter)
RT @reach_vb: NVIDIA ON A ROLL! Canary 1B and Parakeet TDT (0.6B) SoTA ASR models - Multilingual, Open Source 🔥
- 1B and 600M parameters
- 25 languages
- automatic language detection and translation
- word and sentence timestamps
- transcribe up to 3 hours of audio in one go
- trained on 1 Million hours of data
- SoTA on Open ASR Leaderboard
- CC-BY licensed 💥
Available on Hugging Face, go check them out today!
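A minimal NeMo sketch for transcribing with one of these checkpoints. The model id is assumed from the announcement; confirm the exact multilingual Canary/Parakeet names on the Hub:

```python
import nemo.collections.asr as nemo_asr

# Model id assumed; verify the exact checkpoint name on Hugging Face.
model = nemo_asr.models.ASRModel.from_pretrained("nvidia/canary-1b-v2")

# transcribe() takes a list of audio file paths; word/sentence timestamps are
# enabled via flags documented in the model card (not shown here).
hypotheses = model.transcribe(["sample.wav"])
print(hypotheses[0])
```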
Hugging Face (Twitter)
RT @Xianbao_QIAN: ToonComposer: You can now efficiently make cartoons on @huggingface for free
- Input: sketch-based keyframes + a color reference frame
- This @Alibaba_Wan-based model combines in-betweening & colorization
- Model can also imagine areas left blank with a prompt
- Result: save up to 70% of manual work.
Huge thanks to B&T Studio and Gudong Animation Studio for their permission to use their animation content (Big Fish & Begonia and Mr. Miao) for academic illustration.
Hugging Face (Twitter)
RT @reach_vb: BEST PART: they released the entire 1 MILLION hours of data publicly on Hugging Face 🤯 https://twitter.com/reach_vb/status/1957148807562723809#m
Hugging Face (Twitter)
RT @dylan_ebert_: I automated my research discovery.
Claude Code + Hugging Face MCP + Research MCP (my server)
It makes discovering and keeping track of all related research artifacts MUCH faster and easier
here's how it works 👇
Hugging Face (Twitter)
RT @arundhati1504: 🎉 Introducing Granary — a 1M-hour, open multilingual speech dataset — plus new #opensource ASR models. 🌍
🤗 Now on HuggingFace: nvda.ws/3Jg0BwV
🔗 Learn more: nvda.ws/41DVP2s bit.ly/4mGyMMA
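A hedged sketch of sampling such a large speech corpus with the datasets library. The dataset id and config name here are assumptions (check the links above for the real ones); streaming avoids materializing a 1M-hour corpus locally:

```python
from datasets import load_dataset

# Dataset id and config are assumptions; verify on the Hub before use.
ds = load_dataset("nvidia/Granary", "de", split="train", streaming=True)
print(next(iter(ds)).keys())  # typically an audio column plus transcription fields
```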
Hugging Face (Twitter)
RT @AdinaYakup: Before my vacation: Qwen releasing.
When I came back: Qwen still releasing
Respect!!🫡
Meet Qwen Image Edit 🔥 the image editing version of Qwen-Image by @Alibaba_Qwen
https://huggingface.co/Qwen/Qwen-Image-Edit
✨ Apache 2.0
✨ Semantic + Appearance Editing: rotate, restyle, add/remove 🎨
✨ Precise Text Editing → edit CN/EN text, keep style
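A minimal diffusers sketch for trying the model. The pipeline class follows the model card at the time of writing; treat the exact arguments as assumptions and verify against your installed diffusers version:

```python
import torch
from PIL import Image
from diffusers import QwenImageEditPipeline

# Pipeline class/arguments per the model card; verify against current diffusers docs.
pipe = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = Image.open("input.png").convert("RGB")
result = pipe(image=image, prompt="rotate the cup 90 degrees clockwise")
result.images[0].save("edited.png")
```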
Hugging Face (Twitter)
RT @VentureBeat: Nvidia releases a new small, open model Nemotron-Nano-9B-v2 with toggle on/off reasoning
Hugging Face (Twitter)
RT @ClementDelangue: Do we have an org or posts with all the cool Japanese releases on HF? Which ones are the most interesting ones?
あるふ (@alfredplpl)
Hugging Face really ought to catch on more in Japan. I guess I'll just have to build Hugging Face Japan myself.
Hugging Face (Twitter)
RT @gm8xx8: NVIDIA Nemotron-Nano v2
Models: 12B Base, 9B Reasoning, 9B Base
- Arch: Hybrid Mamba2–Transformer (128K ctx, 4 attn layers)
- Training: 10.6T tokens (3.5T synthetic from DeepSeek, Qwen, Nemotron-4, phi-4, etc.)
- 15 natural languages + 43 programming languages
- Datasets: Nemotron-CC v2 + Nemotron-CC-Math (133B tokens, 5.5× FineMath)
Benchmarks
- Math: 91.4 GSM8K CoT, 63.6 MATH L5, +30→56.7 AIME
- Code: 58.5 HumanEval+, 58.9 MBPP+
- Commonsense: 90.7 ARC, 79.9 HellaSwag
- Long-context: 82.2 RULER-128K
Highlights
- Nemotron-CC-Math: First scalable pipeline using Lynx + LLM cleanup to preserve LaTeX + code in web data. Delivers SOTA boosts (+12.6 MATH, +14.3 MBPP+) vs prior open math sets
- Efficiency: Distilled 12B→9B (480B tokens), ~1.5e24 FLOPs, ~724 MWh disclosed
- Deployment: Hugging Face, NGC, NeMo, TRT-LLM, vLLM | GPU-optimized
- Open: Models, datasets, and full extraction pipelines released
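A hedged transformers sketch for loading the 9B reasoning model. The model id is assumed from the announcement, and the reasoning on/off toggle is shown here as a system-prompt switch, which is an assumption to verify against NVIDIA's model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id assumed; the hybrid Mamba2-Transformer blocks may require
# trust_remote_code depending on your transformers version.
name = "nvidia/NVIDIA-Nemotron-Nano-9B-v2"
tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# Reasoning toggle assumed to be a system-prompt switch ("/think" vs "/no_think");
# check the model card for the exact control string.
messages = [
    {"role": "system", "content": "/think"},
    {"role": "user", "content": "What is 17 * 24?"},
]
ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(ids, max_new_tokens=256)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```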