Hugging Face (Twitter)
RT @LysandreJik: oLLM: a lightweight Python library for LLM inference built on top of transformers
Run qwen3-next-80B, GPT-OSS, Llama3 on consumer hardware. Awesome work by Anuar!
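For context, here is a minimal sketch of the kind of transformers/accelerate machinery a wrapper like oLLM builds on: fp16/bf16 weights streamed across GPU, CPU RAM, and disk so a consumer card can serve a model that does not fit in VRAM. This is not oLLM's own API; the model id and memory budget are placeholders.

```python
# Sketch of offloaded fp16/bf16 inference with plain transformers + accelerate.
# Not oLLM's API; model id and memory caps below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,              # half precision only, no quantization
    device_map="auto",                       # split layers across GPU / CPU / disk
    max_memory={0: "8GiB", "cpu": "32GiB"},  # cap what lands on the GPU
    offload_folder="offload",                # spill remaining weights to SSD
)

inputs = tokenizer("Why does weight offloading help on small GPUs?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```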
Hugging Face (Twitter)
RT @ailozovskaya: Reachy Mini was on stage for the first time! @TEDAIVienna
It proved it can be a real improv actor! Did you see it? What did you think of the show? Maybe it's the first robot actor
https://huggingface.co/blog/reachy-mini
Hugging Face (Twitter)
RT @ClementDelangue: As Jensen mentioned with @altcap @BG2Pod @bgurley, something that few people know is that @nvidia is becoming the American open-source leader in AI, with over 300 contributions of models, datasets and apps on @huggingface in the past year.
And I have a feeling they're just getting started!
Hugging Face (Twitter)
RT @charmcli: If you love open models, you'll love this: Crush now runs with @huggingface Inference Providers
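For reference, calling Inference Providers directly from Python with huggingface_hub looks roughly like this; it is not Crush's configuration, and the model id is a placeholder.

```python
# Rough illustration of Hugging Face Inference Providers via huggingface_hub.
# Not Crush's setup; model id is a placeholder, auth comes from HF_TOKEN / hf auth login.
from huggingface_hub import InferenceClient

client = InferenceClient(model="meta-llama/Llama-3.1-8B-Instruct")
response = client.chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about open models."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```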
Hugging Face (Twitter)
RT @Xianbao_QIAN: The largest open-source image generation model yet, at 80B parameters, has been dropped on @huggingface!
Review video by AIWood below. https://twitter.com/Xianbao_QIAN/status/1971577053872099791#m
Hugging Face (Twitter)
RT @RisingSayak: Feeling so happy that we got accepted to #NeurIPS2025
This was a genuinely fulfilling piece of work, and a lot of knobs needed tinkering with.
Check out the thread below for more details! https://twitter.com/RisingSayak/status/1933481434020565437#m
Hugging Face (Twitter)
RT @RisingSayak: Today, we're shipping native support for context-parallelism to help make diffusion inference go brrr on multiple GPUs
Our CP API is made to work with two flavors of distributed attention: Ring & Ulysses.
Huge thanks to @aryanvs_ for shipping this!
Deets ⬇️
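The thread has the actual diffusers API; as a rough conceptual sketch (not diffusers code), the Ulysses flavor works like this: each rank holds a slice of the sequence, an all-to-all regroups the tensors so every rank sees the full sequence for a subset of heads, attention runs locally, and a second all-to-all restores the sequence-sharded layout. Assumes torch.distributed is already initialized (e.g. via torchrun) and that heads and sequence length divide evenly by the world size.

```python
# Conceptual sketch of Ulysses-style sequence-parallel attention (not diffusers' CP API).
# Requires torch.distributed to be initialized, e.g. launched with torchrun.
import torch
import torch.distributed as dist
import torch.nn.functional as F

def ulysses_attention(q, k, v):
    """q, k, v: [B, H, S_local, D], each rank holding a slice of the sequence."""
    world = dist.get_world_size()

    def seq_to_head_shard(x):
        # Trade a sequence shard of all heads for the full sequence of H/world heads.
        chunks = [c.contiguous() for c in x.chunk(world, dim=1)]
        out = [torch.empty_like(chunks[0]) for _ in range(world)]
        dist.all_to_all(out, chunks)
        return torch.cat(out, dim=2)          # [B, H/world, S, D]

    def head_to_seq_shard(x):
        # Inverse all-to-all: back to all heads, local slice of the sequence.
        chunks = [c.contiguous() for c in x.chunk(world, dim=2)]
        out = [torch.empty_like(chunks[0]) for _ in range(world)]
        dist.all_to_all(out, chunks)
        return torch.cat(out, dim=1)          # [B, H, S_local, D]

    q, k, v = (seq_to_head_shard(t) for t in (q, k, v))
    o = F.scaled_dot_product_attention(q, k, v)   # full-sequence attention on a head subset
    return head_to_seq_shard(o)
```

Ring attention takes the complementary route: keys/values circulate around the ranks while queries stay put, so no rank ever needs the full sequence in memory.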
Hugging Face (Twitter)
RT @Shekswess: Tiny Reasoning Language Model (trlm-135)
A 135M parameter experiment to see if small models can learn structured reasoning with the right data + training strategy.
Model Card: https://t.co/PiXhLyJbH8
Hugging Face (Twitter)
RT @not_so_lain: I'm both honored and humbled to have crossed 3,000 followers on @huggingface
When I first started, I never imagined this community would become such a big part of my journey.
Thank you to everyone who has read my work or collaborated with me. Your support keeps me going
Hugging Face (Twitter)
RT @ClementDelangue: The gdpval dataset from @OpenAI is number one trending on @huggingface this week!
Hugging Face (Twitter)
RT @linoy_tsaban: still getting over the fact HunyuanImage 3.0 is here (less than a month since HunyuanImage 2.1) and then I see it's 80B params
+ Image editing is coming
FUN TIMES
https://huggingface.co/tencent/HunyuanImage-3.0
Hugging Face (Twitter)
RT @multimodalart: the LoRA training event is LIVE
Train Qwen, Wan and FLUX LoRAs for free for 1 week (Sep 29 - Oct 6th)
We cobbled together @ostrisai AI Toolkit & the new @huggingface Jobs API
Hugging Face (Twitter)
RT @Saboo_Shubham_: oLLM is a lightweight Python library for local large-context LLM inference.
Run gpt-oss-20B, Qwen3-next-80B, Llama-3.1-8B on ~$200 consumer GPU with just 8GB VRAM. And this is without any quantization - only fp16/bf16 precision.
100% open source.
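Not oLLM's code, but the long-context half of the trick can be sketched with plain transformers: keep the KV cache in CPU RAM during generation so context length is no longer bounded by the 8 GB of VRAM. Model id, prompt, and token budget are placeholders.

```python
# Sketch of long-context generation with an offloaded KV cache in transformers.
# Not oLLM itself; model id, prompt, and token counts are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

long_prompt = "Summarize the following report:\n" + "lorem ipsum " * 4000  # stand-in for a long document
inputs = tokenizer(long_prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=256,
    cache_implementation="offloaded",  # past keys/values live in CPU RAM, not VRAM
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```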
Hugging Face (Twitter)
RT @_akhaliq: HunyuanImage 3.0 is out on Hugging Face
A Powerful Native Multimodal Model for Image Generation
80B parameters, Largest Image Generation MoE Model
Reasons with world knowledge
Generates text within images
Vibe coded a text-to-image app with anycoder using @fal