Hugging Face
Hugging Face (Twitter)

RT @LysandreJik: oLLM: a lightweight Python library for LLM inference built on top of transformers 🔥

Run qwen3-next-80B, GPT-OSS, and Llama3 on consumer hardware. Awesome work by Anuar!
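A minimal sketch of the underlying idea, not the oLLM API: since oLLM builds on transformers, the same offloading trick can be shown with plain transformers, where device_map="auto" spills layers that don't fit in VRAM to CPU/disk. The checkpoint name and offload folder below are illustrative.

```python
# Not the oLLM API: a plain-transformers sketch of the general offloading
# technique for running large checkpoints on consumer GPUs. Model id and
# offload folder are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fp16/bf16 only, no quantization
    device_map="auto",           # place layers on GPU, spill the rest to CPU/disk
    offload_folder="offload",    # scratch directory for offloaded weights
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```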
Hugging Face (Twitter)

RT @ailozovskaya: Reachy Mini was on stage for the first time! @TEDAIVienna

It proved it can be a real improv actor! Did you see it? What did you think of the show? Maybe it’s the first robot actor πŸ™ŒπŸ»
https://huggingface.co/blog/reachy-mini
Hugging Face (Twitter)

RT @ClementDelangue: As Jensen mentioned with @altcap @BG2Pod @bgurley, something that few people know is that @nvidia is becoming the American open-source leader in AI, with over 300 contributions of models, datasets and apps on @huggingface in the past year.

And I have a feeling they're just getting started!
Hugging Face (Twitter)

RT @charmcli: If you love open models, you’ll love this: Crush now runs with @huggingface Inference Providers πŸ€—βœ¨
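For context, a hedged sketch of what talking to Inference Providers looks like from Python via huggingface_hub (Crush's own integration will differ); the provider and model names here are assumptions.

```python
# Hedged sketch (not Crush's integration): calling a chat model through
# Hugging Face Inference Providers with huggingface_hub. Provider and model
# names are illustrative assumptions.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="together", token="hf_...")  # assumed provider
resp = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello from an open model."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```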
Hugging Face (Twitter)

RT @Xianbao_QIAN: The largest open-source image generation model yet, at 80B parameters, has been dropped on @huggingface!

Review video by AIWood below. https://twitter.com/Xianbao_QIAN/status/1971577053872099791#m
Hugging Face (Twitter)

RT @RisingSayak: Feeling so happy that we got accepted to #NeurIPS2025 😭

This was a genuinely fulfilling piece of work, and a lot of knobs needed tinkering with.

Check out the thread below for more details! https://twitter.com/RisingSayak/status/1933481434020565437#m
Hugging Face (Twitter)

RT @RisingSayak: Today, we're shipping native support for context-parallelism to help make diffusion inference go brrr on multiple GPUs πŸš€

Our CP API is made to work with two flavors of distributed attention: Ring & Ulysses.

Huge thanks to @aryanvs_ for shipping this!

Deets ⬇️
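For intuition (this is not the diffusers API): context parallelism shards the sequence dimension across GPUs, so each rank attends over its own query slice while K/V are exchanged between ranks; Ring passes K/V blocks around a ring, while Ulysses redistributes heads with all-to-alls. A naive toy version just all-gathers K and V:

```python
# Toy illustration of context-parallel attention (not the diffusers API):
# each rank owns a slice of the sequence; the naive version below simply
# all-gathers K and V, whereas Ring/Ulysses overlap that communication
# with compute.
import torch
import torch.distributed as dist

def naive_context_parallel_attention(q, k, v):
    """q, k, v: (seq_local, dim) shards held by this rank."""
    world = dist.get_world_size()
    k_parts = [torch.empty_like(k) for _ in range(world)]
    v_parts = [torch.empty_like(v) for _ in range(world)]
    dist.all_gather(k_parts, k)                 # assemble full K across ranks
    dist.all_gather(v_parts, v)                 # assemble full V across ranks
    k_full, v_full = torch.cat(k_parts), torch.cat(v_parts)
    scores = (q @ k_full.T) / q.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v_full      # this rank's output shard
```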
Hugging Face (Twitter)

RT @Xianbao_QIAN: MinerU 2.5 has arrived, with a demo on @huggingface
Hugging Face (Twitter)

RT @not_so_lain: I’m both honored and humbled to have crossed 3,000 followers on @huggingface 🔥
When I first started, I never imagined this community would become such a big part of my journey.

Thank you to everyone who has read my work or collaborated with me. Your support keeps me going✨
Hugging Face (Twitter)

RT @ClementDelangue: The gdpval dataset from @OpenAI is number one trending on @huggingface this week!
β€ŒHugging Face (Twitter)

RT @linoy_tsaban: still getting over the fact HunyuanImage 3.0 is here (less than a month since HunyuanImage 2.1) and then I see it's 80B params 🤯
+ Image editing is coming πŸ‘€
FUN TIMES
https://huggingface.co/tencent/HunyuanImage-3.0
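A small hedged sketch: the checkpoint lives on the Hub at the repo linked above, so pulling it with huggingface_hub looks roughly like this (at 80B parameters the weights run to hundreds of GB; see the model card for the actual inference code).

```python
# Download-only sketch for the repo linked above; inference setup is in the
# model card, and the 80B checkpoint needs hundreds of GB of disk.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("tencent/HunyuanImage-3.0")
print("weights at", local_dir)
```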
Hugging Face (Twitter)

RT @multimodalart: the 𝑳𝒐𝑹𝑨 𝔣𝔯𝔒𝔷𝔫𝔦 is LIVE

Train Qwen, Wan and FLUX LoRAs for free for 1 week (Sep 29 - Oct 6th)

We cobbled together @ostrisai AI Toolkit & the new @huggingface Jobs API
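Not the AI Toolkit config, just a minimal peft sketch of what "training a LoRA" means: low-rank adapter matrices are attached to the attention projections and only those are trained. The model id and target module names are illustrative.

```python
# Illustrative peft sketch of a LoRA adapter (not the AI Toolkit / Jobs setup).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # assumed small stand-in
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the low-rank adapter weights train
```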
Hugging Face (Twitter)

RT @Saboo_Shubham_: oLLM is a lightweight Python library for local large-context LLM inference.

Run gpt-oss-20B, Qwen3-next-80B, and Llama-3.1-8B on a ~$200 consumer GPU with just 8 GB of VRAM. And this is without any quantization, only fp16/bf16 precision.

100% open source.
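A quick back-of-the-envelope check of why that only works with offloading: the weights alone dwarf an 8 GB card, so layers have to stream in from CPU/SSD during inference.

```python
# Rough arithmetic: 80B params at 2 bytes each (bf16) vs. an 8 GB GPU.
params = 80e9
bytes_per_param = 2          # fp16/bf16, no quantization
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB of weights vs 8 GB of VRAM")  # 160 GB vs 8 GB
```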
Hugging Face (Twitter)

RT @_akhaliq: HunyuanImage 3.0 is out on Hugging Face

A Powerful Native Multimodal Model for Image Generation

80B parameters, Largest Image Generation MoE Model

Reasons with world knowledge

Generates text within images

vibe coded a text to image app with anycoder using @fal
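Not the anycoder app itself, but as a hedged sketch of one way to wire a text-to-image call through Inference Providers routed to fal from Python; the provider string and model id below are assumptions.

```python
# Hedged sketch (not the anycoder/@fal app): text-to-image via Hugging Face
# Inference Providers. Provider name and model id are assumptions.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai", token="hf_...")
image = client.text_to_image(
    "a tiny robot doing improv on a theater stage",
    model="black-forest-labs/FLUX.1-dev",  # illustrative model id
)
image.save("out.png")
```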