Hugging Face
Hugging Face (Twitter)

💅 adding the final touches, see you in 30 minutes! https://twitter.com/huggingface/status/1991766734664311252#m
Hugging Face (Twitter)

RT @nikhilaravi: SAM 3 is the #1 trending repo on @huggingface! 🤩 Thanks @ClementDelangue for personally informing us!
Hugging Face (Twitter)

RT @NVIDIAAIDev: Introducing the Nemotron-Personas Collection 💬

A set of multilingual, region-specific synthetic persona datasets created with NVIDIA NeMo Data Designer. Each dataset mirrors real-world demographic and geographic distributions to help you fine-tune and evaluate AI systems without exposing personal data.

Now available:
🇺🇸 Nemotron-Personas-USA: 6M personas
🇯🇵 Nemotron-Personas-Japan: 6M personas
🇮🇳 Nemotron-Personas-India: 21M personas

All datasets are open source and licensed under CC BY 4.0.

Explore on @huggingface 👇 nvda.ws/4ocqnRk
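
For anyone who wants to peek at the personas before committing to a full download, here is a minimal sketch using 🤗 Datasets in streaming mode. The repo ID below is an assumption based on the collection naming above; check the collection page on the Hub for the exact dataset names and splits.

```python
# Minimal sketch: stream a few records from one of the persona datasets.
# The repo ID is assumed from the collection naming; verify it on the Hub.
from datasets import load_dataset

personas = load_dataset(
    "nvidia/Nemotron-Personas-Japan",  # assumed repo ID
    split="train",
    streaming=True,  # avoid downloading millions of rows up front
)

# Inspect the first few synthetic personas; field names follow the dataset card.
for persona in personas.take(3):
    print(persona)
```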
Hugging Face (Twitter)

RT @NVIDIARobotics: Join our Robotics Office Hours with the @huggingface LeRobot team on Wednesday, November 26 @ 11 AM PT. 🤖

Learn to post-train and evaluate Isaac GR00T N1.5 with the latest LeRobot release and see real-world deployment examples.

📆 Add to Calendar: nvda.ws/48cYeE0
๐Ÿ‘1
Hugging Face (Twitter)

what are you building with @huggingface this weekend? 👀

leave them below 🤗
Hugging Face (Twitter)

RT @Xianbao_QIAN: Welcome Nex-N1, a new series of agentic foundation models, to @huggingface

- available in sizes from 8B, 30B, and 32B up to 671B
- strong at tool use, web search, and real-world agentic workflows
- some of the SFT datasets have been open-sourced

Technical report coming soon!
Hugging Face (Twitter)

RT @mervenoyann: I'm keeping track of real-time vision models (mostly detectors) on @huggingface

we have RT-DETR, YOLO, RF-DETR and D-FINE for now

what other models should we add?
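
For context, a minimal sketch of what using one of these real-time detectors from the Hub looks like, via the transformers object-detection pipeline with an RT-DETR checkpoint. The checkpoint ID and image path are placeholders, not from the original post; any RT-DETR or D-FINE checkpoint should drop in the same way.

```python
# Minimal sketch: run RT-DETR through the transformers object-detection pipeline.
# Checkpoint ID and image path are placeholders.
from transformers import pipeline

detector = pipeline("object-detection", model="PekingU/rtdetr_r50vd_coco_o365")

# Accepts a local path, URL, or PIL image.
detections = detector("street_scene.jpg", threshold=0.5)

for det in detections:
    print(f"{det['label']}: {det['score']:.2f} at {det['box']}")
```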
Hugging Face (Twitter)

RT @marlene_zw: Open source/weight models are often used in regulated industries like Health Care or Financial Services, where they handle personally identifiable data, and can't send it to proprietary LLM providers.

We recently chatted with @reach_vb about the partnership between VS @code and @huggingface Inference Providers that lets you use open-weight models directly in your IDE! Something that surprised me was just how fast inference providers like @cerebras are at generating code! In this episode we built a journaling CLI tool using Qwen!

I personally love open source and hope more of these models will become small enough to run locally!
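
Outside the IDE, the same Inference Providers routing is reachable from a few lines of Python. A minimal sketch, assuming a Cerebras-served Qwen chat model; the exact checkpoint used in the episode isn't stated, so the model ID below is illustrative.

```python
# Minimal sketch: call an open-weight Qwen model via Hugging Face Inference
# Providers, routed through Cerebras. The model ID is illustrative; pick any
# chat model the provider currently serves.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="cerebras")  # uses your HF token (HF_TOKEN env var or cached login)

response = client.chat_completion(
    model="Qwen/Qwen3-32B",  # assumed checkpoint
    messages=[
        {"role": "system", "content": "You are a concise journaling assistant."},
        {"role": "user", "content": "Turn these notes into a short journal entry: shipped the CLI, fixed two bugs."},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```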
Hugging Face (Twitter)

RT @Xianbao_QIAN: Hunyuan OCR, Tencent's new document-understanding model, is now on @huggingface 🚀

- SOTA in document parsing, visual Q&A, and translation
- 1B-parameter, end-to-end
- Interactive demo available
- Tech report released

Model: https://huggingface.co/tencent/HunyuanOCR
Demo: https://huggingface.co/spaces/tencent/HunyuanOCR
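
A quick way to try it locally, sketched under the assumption that the checkpoint works with transformers' image-text-to-text pipeline; the model card and demo Space above are the authoritative usage, so treat this as a starting point rather than the official recipe.

```python
# Hedged sketch: try HunyuanOCR via the image-text-to-text pipeline.
# Assumes the checkpoint is transformers-compatible; see the model card for
# the official usage if this differs.
from transformers import pipeline

ocr = pipeline("image-text-to-text", model="tencent/HunyuanOCR", trust_remote_code=True)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "receipt.png"},  # local path or URL to a document image
        {"type": "text", "text": "Extract all the text in this image."},
    ],
}]

out = ocr(text=messages, max_new_tokens=512)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```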
Hugging Face (Twitter)

RT @Tu7uruu: Dia2 is OUT! A streaming text-to-speech model that can generate voice in real time 🤯

> Generates audio on the fly, no full text input needed
> Real-time conversational streaming (perfect for agents + AI voice apps)
> Prefix conditioning for smoother, contextual voice responses
> Comes in 1B & 2B model sizes
> Fully open-source with Apache 2.0 license
Hugging Face (Twitter)

RT @bfl_ml: FLUX.2 is here - our most capable image generation & editing model to date.

Multi-reference. 4MP. Production-ready. Open weights.

Into the new.
Hugging Face (Twitter)

RT @multimodalart: FLUX.2 is out! 🚨

FLUX.2-dev brings state of the art to open weights, with a chonky 32B-parameter model; however, it can run on low-end cards with quantization and the new remote text encoder

Read more in the "🧨 diffusers welcomes FLUX.2" blog post
huggingface.co/blog/flux-2
Hugging Face (Twitter)

RT @mervenoyann: FLUX.2 is here! 🎨

> single text encoder (Mistral Small 3.1) + DiT
> experiment with different quantization schemes for inference & training (QLoRA) -- more than 80GB VRAM otherwise
> comes with day-0 diffusers support 🧨
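
A minimal text-to-image sketch on a large GPU (the full bf16 model wants more than 80GB, as noted above). The Flux2Pipeline class name is an assumption by analogy with FLUX.1's FluxPipeline; the diffusers blog post linked here has the exact day-0 API.

```python
# Hedged sketch: FLUX.2-dev text-to-image with diffusers on a large GPU.
# Flux2Pipeline is an assumed class name (FLUX.1 analogy);
# check huggingface.co/blog/flux-2 for the exact API.
import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    "a watercolor map of an imaginary coastal city",
    num_inference_steps=28,  # example value; follow the model card's recommendations
).images[0]
image.save("flux2.png")
```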
Hugging Face (Twitter)

RT @ClementDelangue: Reachy mini is my new podcast assistant! Coming soon with @ti_morse...
Hugging Face (Twitter)

Flux.1-dev has been the second most-liked model on Hugging Face, just after DeepSeek R1, so super excited to see the release of Flux.2-dev by @bfl_ml today!

Download the weights or try the model (thanks to @fal) on @huggingface: https://huggingface.co/black-forest-labs/FLUX.2-dev
Read the blogpost: huggingface.co/blog/flux-2

Let's go!
Hugging Face (Twitter)

RT @ariG23498: Flux.2 is a BIG BOI 😍

Inference takes more than 80 GB of VRAM

A small thread on how one can run it on an NVIDIA L4 (22 GB of VRAM) 😉
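
A hedged sketch of the kind of recipe that thread describes: quantize the 32B transformer to 4-bit NF4 with bitsandbytes and offload the rest so the pipeline fits on a ~24 GB card. Class names (Flux2Pipeline, Flux2Transformer2DModel) are assumed by analogy with FLUX.1; the thread and the diffusers blog post are the reference for the exact steps.

```python
# Hedged sketch: fit FLUX.2-dev on a ~24 GB GPU by 4-bit quantizing the DiT.
# Flux2Pipeline / Flux2Transformer2DModel are assumed class names (FLUX.1 analogy).
import torch
from diffusers import BitsAndBytesConfig, Flux2Pipeline, Flux2Transformer2DModel

repo = "black-forest-labs/FLUX.2-dev"

nf4 = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize only the big transformer; the other components stay in bf16.
transformer = Flux2Transformer2DModel.from_pretrained(
    repo, subfolder="transformer", quantization_config=nf4, torch_dtype=torch.bfloat16
)

pipe = Flux2Pipeline.from_pretrained(repo, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # move idle components to CPU to stay within VRAM

image = pipe("a cozy reading nook at golden hour", num_inference_steps=28).images[0]
image.save("flux2_nf4.png")
```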
Hugging Face (Twitter)

RT @mkurman88: Cool, still pretty popular. @OpenMed_AI
Hugging Face (Twitter)

RT @AdinaYakup: Daily Papers has gained a lot of attention this past year with all the new updates 🔥

Here's a guide that will help you quickly understand what's new and make better use of the tool 🤗

https://huggingface.co/blog/AdinaY/a-guide-to-hugging-faces-papers-page