Hugging Face
62 subscribers
651 photos
231 videos
1.12K links
Hugging Face (Twitter)

RT @xenovacom: Google just released their smallest Gemma model ever: Gemma 3 270M! 🤯
🤏 Highly compact & efficient
🤖 Strong instruction-following capabilities
🔧 Perfect candidate for fine-tuning

It's so tiny that it can even run 100% locally in your browser with Transformers.js! 🤗
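The same model is also easy to try from Python. A minimal sketch with the transformers `pipeline`, assuming the instruction-tuned checkpoint is published as `google/gemma-3-270m-it` (the repo id is a guess, not from the post):

```python
# Sketch: prompting a small instruction-tuned model via transformers.
# The checkpoint id "google/gemma-3-270m-it" is an assumption, not from the post.

def build_chat(user_message: str) -> list[dict]:
    # Chat-format input expected by instruction-tuned checkpoints.
    return [{"role": "user", "content": user_message}]

if __name__ == "__main__":
    # Heavy import kept here so the helper above stays dependency-free.
    from transformers import pipeline

    generator = pipeline("text-generation", model="google/gemma-3-270m-it")
    result = generator(build_chat("Summarize: small models can run in-browser."),
                       max_new_tokens=64)
    print(result[0]["generated_text"])
```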
Hugging Face (Twitter)

RT @QGallouedec: 🚨 Big news! We decided that @huggingface's post-training library, TRL, will natively support training Vision Language Models 🖼️

This builds on our recent VLM support in SFTTrainer - and we're not stopping until TRL is the #1 VLM training library 🥇

More here 👉 hf.co/blog/trl-vlm-alignment
Huge thanks to @mervenoyann, @SergioPaniego, and @ariG23498 🔥
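As a rough illustration of what VLM fine-tuning with SFTTrainer looks like, here is a hedged sketch. The dataset and model ids are placeholders chosen for illustration, and the exact column format may differ from what the TRL release expects - see the blog post above for the real recipe:

```python
# Sketch: vision-language SFT with TRL. Model/dataset ids are placeholders.

def to_chat_example(question: str, answer: str) -> dict:
    # One image+text turn arranged in the messages format SFTTrainer consumes.
    return {
        "messages": [
            {"role": "user", "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": answer},
            ]},
        ]
    }

if __name__ == "__main__":
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset("HuggingFaceM4/ChartQA", split="train[:1%]")  # placeholder
    trainer = SFTTrainer(
        model="Qwen/Qwen2.5-VL-3B-Instruct",  # placeholder VLM
        args=SFTConfig(output_dir="vlm-sft"),
        train_dataset=dataset,
    )
    trainer.train()
```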
Hugging Face (Twitter)

RT @Tu7uruu: 🚀 Big update: Open ASR goes multilingual!

We're kicking off with 🇩🇪🇫🇷🇮🇹🇪🇸🇵🇹 - German, French, Italian, Spanish & Portuguese.
English ASR has reached a strong level of maturity, so we're exploring new languages 🌍

More languages coming soon... Which one should we add next?
Hugging Face (Twitter)

RT @Zai_org: Just saw GLM-4.5V is trending #2 on Hugging Face
https://huggingface.co/zai-org/GLM-4.5V
Hugging Face (Twitter)

RT @kadirnardev: We have released our LFM2-350M-based TTS model as open source 🚀 We have also released many different fine-tuned (FT) models.

GPU Platform: @hyperbolic_labs
Data: Emilia + Emilia Yodas (EN)
LLM Model: LFM2-350M @LiquidAI_
Storage: @huggingface

I'm very happy to have released this model as open source. Many thanks to @VyvoSmartChain

#opensource #speech #tts #huggingface #lfm #gpu
Hugging Face (Twitter)

RT @jetbrains: We didnโ€™t just build Mellum for us.

We open-sourced it for everyone.

Props to @huggingface for helping us get it out there ๐Ÿ‘Œ

Find out more about Mellum here: jb.gg/mbz8bq
Hugging Face (Twitter)

RT @mervenoyann: how does DINOv3 perceive objects? 👀

I dropped a mini visualizer: you can upload images, click on objects and check
> patch similarities
> object boundaries
> most similar other objects 🤗

live on @huggingface Spaces
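Under the hood, a patch-similarity view boils down to cosine similarity between ViT patch embeddings. A minimal sketch, assuming a DINOv3 checkpoint loads through transformers' `AutoModel` (the repo id is a guess, and real DINOv3 outputs also include register tokens that this sketch ignores):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Plain cosine similarity, the metric behind patch-similarity maps.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

if __name__ == "__main__":
    import torch
    from PIL import Image
    from transformers import AutoImageProcessor, AutoModel

    model_id = "facebook/dinov3-vitb16-pretrain-lvd1689m"  # assumed repo id
    processor = AutoImageProcessor.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)

    inputs = processor(images=Image.open("example.jpg"), return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # CLS token first (simplified)
    patches = hidden[1:]
    # Similarity of the clicked patch (here: patch 0) to every other patch.
    sims = torch.nn.functional.cosine_similarity(patches[0:1], patches)
    print(sims.topk(5).indices.tolist())
```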
Hugging Face (Twitter)

RT @reach_vb: NVIDIA ON A ROLL! Canary 1B and Parakeet TDT (0.6B) SoTA ASR models - Multilingual, Open Source 🔥

- 1B and 600M parameters
- 25 languages
- automatic language detection and translation
- word and sentence timestamps
- transcribe up to 3 hours of audio in one go
- trained on 1 Million hours of data
- SoTA on Open ASR Leaderboard

- CC-BY licensed 💥

Available on Hugging Face, go check them out today!
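Transcription with these models goes through NVIDIA's NeMo toolkit. A rough sketch, assuming the Canary checkpoint is published as `nvidia/canary-1b` and that `transcribe` accepts a `timestamps` flag (verify both against the model cards):

```python
def format_timestamp(seconds: float) -> str:
    # Render a word/segment offset as H:MM:SS for readable transcripts.
    hours, rem = divmod(int(seconds), 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours}:{minutes:02d}:{secs:02d}"

if __name__ == "__main__":
    # Requires `pip install "nemo_toolkit[asr]"`; ids/flags per the model cards.
    import nemo.collections.asr as nemo_asr

    model = nemo_asr.models.ASRModel.from_pretrained("nvidia/canary-1b")
    hypotheses = model.transcribe(["speech.wav"], timestamps=True)
    print(hypotheses[0].text)
```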
Hugging Face (Twitter)

RT @Xianbao_QIAN: ToonComposer: You can now efficiently make cartoons on @huggingface for free

- Input: sketch-based keyframes + a color reference frame
- This @Alibaba_Wan based model combines in-betweening & colorization
- The model can also imagine areas left blank, guided by a prompt
- Result: saves up to 70% of manual work.

Huge thanks to B&T Studio and Gudong Animation Studio for their permission to use their animation content (Big Fish & Begonia and Mr. Miao) for academic illustration.
Hugging Face (Twitter)

RT @reach_vb: BEST PART: they released the entire 1 MILLION hours of data publicly on Hugging Face 🤯 https://twitter.com/reach_vb/status/1957148807562723809#m
Hugging Face (Twitter)

RT @dylan_ebert_: I automated my research discovery.

Claude Code + Hugging Face MCP + Research MCP (my server)

It makes discovering and keeping track of all related research artifacts MUCH faster and easier

here's how it works 👇
Hugging Face (Twitter)

RT @arundhati1504: 🎉 Introducing Granary - a 1M-hour, open multilingual speech dataset - plus new #opensource ASR models. 🌍
🤗 Now on Hugging Face: nvda.ws/3Jg0BwV

🔗 Learn more: nvda.ws/41DVP2s bit.ly/4mGyMMA
Hugging Face (Twitter)

RT @AdinaYakup: Before my vacation: Qwen releasing.
When I came back: Qwen still releasing.
Respect!! 🫡

Meet Qwen Image Edit 🔥 the image-editing version of Qwen-Image by @Alibaba_Qwen
https://huggingface.co/Qwen/Qwen-Image-Edit

✨ Apache 2.0
✨ Semantic + Appearance Editing: rotate, restyle, add/remove 🎨
✨ Precise Text Editing → edit CN/EN text, keep style
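A hedged sketch of how such a checkpoint is typically driven from Python with diffusers. The pipeline call signature is an assumption (check the model card for the real one), and the style-hint helper is purely illustrative:

```python
def build_edit_prompt(instruction: str, keep_style: bool = True) -> str:
    # Appending an explicit style-preservation hint is a common editing trick.
    return instruction + (" Keep the original style." if keep_style else "")

if __name__ == "__main__":
    import torch
    from diffusers import DiffusionPipeline
    from PIL import Image

    pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image-Edit",
                                             torch_dtype=torch.bfloat16).to("cuda")
    source = Image.open("input.png")
    edited = pipe(image=source,
                  prompt=build_edit_prompt("Replace the sign text with 'OPEN'")).images[0]
    edited.save("edited.png")
```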
Hugging Face (Twitter)

RT @gm8xx8: NVIDIA Nemotron-Nano v2

Models: 12B Base, 9B Reasoning, 9B Base
- Arch: Hybrid Mamba2โ€“Transformer (128K ctx, 4 attn layers)
- Training: 10.6T tokens (3.5T synthetic from DeepSeek, Qwen, Nemotron-4, phi-4, etc.)
- 15 natural languages + 43 programming languages
- Datasets: Nemotron-CC v2 + Nemotron-CC-Math (133B tokens, 5.5× FineMath)

Benchmarks
- Math: 91.4 GSM8K CoT, 63.6 MATH L5, +30→56.7 AIME
- Code: 58.5 HumanEval+, 58.9 MBPP+
- Commonsense: 90.7 ARC, 79.9 HellaSwag
- Long-context: 82.2 RULER-128K

Highlights
- Nemotron-CC-Math: First scalable pipeline using Lynx + LLM cleanup to preserve LaTeX + code in web data. Delivers SOTA boosts (+12.6 MATH, +14.3 MBPP+) vs prior open math sets
- Efficiency: Distilled 12B→9B (480B tokens), ~1.5e24 FLOPs, ~724 MWh disclosed
- Deployment: Hugging Face, NGC, NeMo, TRT-LLM, vLLM | GPU-optimized
- Open: Models, datasets, and full extraction pipelines released
Hugging Face (Twitter)

RT @ctnzr: Today we're releasing NVIDIA Nemotron Nano v2 - a 9B hybrid SSM that is 6X faster than similarly sized models, while also being more accurate.

Along with this model, we are also releasing most of the data we used to create it, including the pretraining corpus.

Links to the models, datasets, and tech report are here:

https://research.nvidia.com/labs/adlr/NVIDIA-Nemotron-Nano-2/
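A minimal generation sketch with transformers. The repo id and the `/think` system-message toggle are assumptions based on NVIDIA's usual reasoning-model conventions, so verify both against the model card:

```python
def reasoning_prompt(question: str, think: bool = True) -> list[dict]:
    # Assumed convention: a system message toggles the reasoning trace on/off.
    return [
        {"role": "system", "content": "/think" if think else "/no_think"},
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "nvidia/NVIDIA-Nemotron-Nano-9B-v2"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, trust_remote_code=True, torch_dtype=torch.bfloat16)

    inputs = tokenizer.apply_chat_template(
        reasoning_prompt("Summarize the hybrid Mamba2-Transformer design."),
        add_generation_prompt=True, return_tensors="pt")
    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```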
Hugging Face (Twitter)

RT @NielsRogge: Ok ngl this is cool! The end of LoRAs??

Powered by @FAL as inference provider. Try it out below! https://twitter.com/Alibaba_Qwen/status/1957500569029079083#m
Hugging Face (Twitter)

RT @maximelabonne: LFM2-VL support with GGUF and llama.cpp 🥳

You can now run these tiny, hyper-efficient VLMs on your watch!

We released quantized checkpoints for LFM2-VL-450M and LFM2-VL-1.6B on @huggingface