Hugging Face (Twitter)
RT @RisingSayak: Today, we're shipping native support for context parallelism to help make diffusion inference go brrr on multiple GPUs
Our CP API is made to work with two flavors of distributed attention: Ring & Ulysses.
Huge thanks to @aryanvs_ for shipping this!
Deets ⬇️
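The two attention flavors mentioned above can be sketched numerically: Ulysses shards attention heads across ranks, while Ring passes K/V chunks around a ring and merges partial results with an online (streaming) softmax. Below is a minimal single-process numpy sketch of that ring-style blockwise merge; the function names and shapes are illustrative, not the diffusers API.

```python
import numpy as np

def full_attention(q, k, v):
    # Reference: standard softmax attention over the whole sequence.
    s = q @ k.T / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ v

def ring_attention(q, k, v, n_chunks):
    # Ring-style: each step sees only one K/V chunk (as if received from a
    # neighboring GPU); partials are merged with an online softmax.
    d = q.shape[-1]
    ks = np.array_split(k, n_chunks)
    vs = np.array_split(v, n_chunks)
    m = np.full((q.shape[0], 1), -np.inf)  # running row-wise max
    l = np.zeros((q.shape[0], 1))          # running softmax denominator
    o = np.zeros_like(q)                   # running weighted sum
    for kc, vc in zip(ks, vs):
        s = q @ kc.T / np.sqrt(d)
        m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
        p = np.exp(s - m_new)
        scale = np.exp(m - m_new)          # rescale previous partials
        l = l * scale + p.sum(axis=-1, keepdims=True)
        o = o * scale + p @ vc
        m = m_new
    return o / l

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
assert np.allclose(ring_attention(q, k, v, 4), full_attention(q, k, v))
```

The point of the merge is that no rank ever needs the full K/V sequence in memory, yet the final result is bit-for-bit (up to float error) the same as full attention.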
Hugging Face (Twitter)
RT @Shekswess: Tiny Reasoning Language Model (trlm-135) ⚡
A 135M-parameter experiment to see if small models can learn structured reasoning with the right data + training strategy.
Model Card: https://t.co/PiXhLyJbH8
Hugging Face (Twitter)
RT @not_so_lain: I'm both honored and humbled to have crossed 3,000 followers on @huggingface 🔥
When I first started, I never imagined this community would become such a big part of my journey.
Thank you to everyone who has read my work or collaborated with me. Your support keeps me going ✨
Hugging Face (Twitter)
RT @ClementDelangue: The GDPval dataset from @OpenAI is the number one trending dataset on @huggingface this week!
Hugging Face (Twitter)
RT @linoy_tsaban: still getting over the fact that HunyuanImage 3.0 is here (less than a month since HunyuanImage 2.1), and then I see it's 80B params 🤯
+ Image editing is coming
FUN TIMES
https://huggingface.co/tencent/HunyuanImage-3.0
Hugging Face (Twitter)
RT @multimodalart: the LoRA training week is LIVE
Train Qwen, Wan and FLUX LoRAs for free for 1 week (Sep 29 - Oct 6th)
We cobbled together @ostrisai AI Toolkit & the new @huggingface Jobs API
Hugging Face (Twitter)
RT @Saboo_Shubham_: oLLM is a lightweight Python library for local large-context LLM inference.
Run gpt-oss-20B, Qwen3-next-80B, or Llama-3.1-8B on a ~$200 consumer GPU with just 8 GB of VRAM. And this is without any quantization: only fp16/bf16 precision.
100% open source.
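The headline numbers are easy to sanity-check: at fp16/bf16, weights alone cost 2 bytes per parameter, so even the smallest of those models approaches the 8 GB of VRAM on its own, which is why a library like this must stream weights (and KV-cache) from SSD/CPU RAM rather than hold them resident on the GPU. A back-of-the-envelope check (parameter counts taken from the model names; the helper is illustrative):

```python
def fp16_weight_gb(n_params: float) -> float:
    # fp16/bf16 = 2 bytes per parameter; using 1 GB = 1e9 bytes for simplicity.
    return n_params * 2 / 1e9

for name, n in [("gpt-oss-20B", 20e9), ("Qwen3-next-80B", 80e9), ("Llama-3.1-8B", 8e9)]:
    print(f"{name}: ~{fp16_weight_gb(n):.0f} GB of weights at fp16")
# gpt-oss-20B: ~40 GB, Qwen3-next-80B: ~160 GB, Llama-3.1-8B: ~16 GB,
# all beyond an 8 GB VRAM budget without offloading.
```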
Hugging Face (Twitter)
RT @_akhaliq: HunyuanImage 3.0 is out on Hugging Face
A Powerful Native Multimodal Model for Image Generation
80B parameters, the largest image-generation MoE model
Reasons with world knowledge
Generates text within images
vibe coded a text-to-image app with anycoder using @fal
Hugging Face (Twitter)
RT @reach_vb: Introducing the Hugging Face Inference Provider Starter App 🔥
Built w/ @nextjs and the @OpenAIDevs SDK, powered by SoTA open-weights LLMs 🤗
Create seamless experiences without worrying about boilerplate, via streaming and structured outputs
Check out the app in the comments below
Hugging Face (Twitter)
RT @dylan_ebert_: I made a vibe coding game engine
the problem: people are trying to vibe code games. it kind of works at first, but as the project grows, things begin to fall apart
why? and what can we do about it?
step 1.
Hugging Face (Twitter)
RT @NousResearch: Starting today, Psyche will train 6 new models in parallel in pursuit of creating world-class open-source AI.
These runs serve as the starting point for future experiments and a more thorough and empirical training process.
Hugging Face (Twitter)
RT @osanseviero: Welcome to TimesFM 2.5, a pre-trained model for time-series forecasting, which performs great zero-shot, out of the box
- 200M params (down from 500M)
- 16k context (up from 2k)
- Available on Hugging Face
- Apache 2.0
Happy Monday!
Hugging Face (Twitter)
RT @ting_: DeepSeek-V3.2-Exp is live on the @huggingface API, supported by @novita_labs 🤗
⚡ More efficient long-context reasoning
Matches V3.1 performance, and even surpasses it on tasks like AIME 2025 & Codeforces
Context window: 163K
Structured Output, Function Calling, Reasoning
Try it below
Hugging Face (Twitter)
RT @AntLingAGI: Ring-1T-preview: Deep Thinking, No Waiting
The first 1-trillion-parameter open-source thinking model
-> Early results in natural language: AIME25/92.6, HMMT25/84.5, ARC-AGI-1/50.8, LCB/78.3, CF/94.7
-> Solved IMO25 Q3 in one shot, with partial solutions for Q1/Q2/Q4/Q5
Still evolving!
Hugging Face (Twitter)
RT @BetterStackHQ: Transformers.js lets you run AI models offline in the browser with ONNX + WebGPU. Build a chatbot with Llama 3.2, optimize performance, and try local AI tasks like object detection & background removal.
Hugging Face (Twitter)
RT @ClementDelangue: Remi plugged @openai GPT-4o to Reachy Mini and it's pretty cool. Check the mirror challenge & chess playing in particular!
Fun new capabilities:
- Image analysis: Reachy Mini can now look at a photo it just took and describe or reason about it
- Face tracking: keeps eye contact and makes interactions feel much more natural
- Motion fusion: [head wobble while speaking] + [face tracking] + [emotions or dances] can now run simultaneously
- Face recognition: runs locally
- Autonomous behaviors when idle: when nothing happens for a while, the model can decide to trigger context-based behaviors
Questions for the community:
• Earlier versions used flute sounds when playing emotions. This one speaks instead (for example the "olala" at the start is an emotion + voice). It completely changes how I perceive the robot (pet? human? kind alien?). Should we keep a toggle to switch between voice and flute sounds?
• How do the response delays...
Hugging Face (Twitter)
RT @AdinaYakup: Ring-1T-preview 🔥 1T thinking model released by @AntLingAGI
https://huggingface.co/inclusionAI/Ring-1T-preview
✨ MoE architecture + 20T tokens + RLVR via ASystem
✨ Strong natural language reasoning (AIME'25: 92.6, close to GPT-5)
✨ IMO tests: advanced problem-solving & reasoning
Hugging Face (Twitter)
RT @reach_vb: LET'S FUCKING GOOO! Use state-of-the-art open LLMs with the simplicity of the AI SDK 🤯
Kudos to the team on shipping this! 🤗 https://twitter.com/nishimiya/status/1973032330479669462#m