Hugging Face (Twitter)
RT @dylan_ebert_: I made a vibe coding game engine
the problem: people are trying to vibe code games. it kind of works at first, but as the project grows, things begin to fall apart
why? and what can we do about it? ⤵️
step 1.
Hugging Face (Twitter)
RT @NousResearch: Starting today, Psyche will train 6 new models in parallel in pursuit of creating world class open source AI.
These runs serve as the starting point for future experiments and a more thorough and empirical training process.
Hugging Face (Twitter)
RT @osanseviero: Welcome to TimesFM 2.5, a pre-trained model for time-series forecasting that performs well zero-shot, out of the box
- 200M params (down from 500M)
- 16k context (up from 2k)
- Available on Hugging Face
- Apache 2.0
Happy Monday!
Hugging Face (Twitter)
RT @ting_: DeepSeek-V3.2-Exp is live on @huggingface API, supported by @novita_labs 🤗
⚡ More efficient long-context reasoning
📊 Matches V3.1 performance, even surpasses on tasks like AIME 2025 & Codeforces
📂 Context window 163K
✅ Structured Output, Function Calling, Reasoning
Try👇
Hugging Face (Twitter)
RT @AntLingAGI: 🚀 Ring-1T-preview: Deep Thinking, No Waiting
The first 1-trillion-parameter open-source thinking model
-> Early results in natural language: AIME25/92.6, HMMT25/84.5, ARC-AGI-1/50.8, LCB/78.3, CF/94.7
-> Solved IMO25 Q3 in one shot, with partial solutions for Q1/Q2/Q4/Q5
Still evolving!
Hugging Face (Twitter)
RT @BetterStackHQ: Transformers.js lets you run AI models offline in the browser with ONNX + WebGPU. Build a chatbot with Llama 3.2, optimize performance, and try local AI tasks like object detection & background removal.
Hugging Face (Twitter)
RT @ClementDelangue: Remi plugged @openai GPT-4o to Reachy Mini and it's pretty cool. Check the mirror challenge & chess playing in particular!
Fun new capabilities:
- Image analysis: Reachy Mini can now look at a photo it just took and describe or reason about it
- Face tracking: keeps eye contact and makes interactions feel much more natural
- Motion fusion: [head wobble while speaking] + [face tracking] + [emotions or dances] can now run simultaneously
- Face recognition: runs locally
- Autonomous behaviors when idle: when nothing happens for a while, the model can decide to trigger context-based behaviors
Questions for the community:
• Earlier versions used flute sounds when playing emotions. This one speaks instead (for example the "olala" at the start is an emotion + voice). It completely changes how I perceive the robot (pet? human? kind alien?). Should we keep a toggle to switch between voice and flute sounds?
• How do the response delays...
Hugging Face (Twitter)
RT @AdinaYakup: Ring-1T-preview 🔥 1T thinking model released by @AntLingAGI
https://huggingface.co/inclusionAI/Ring-1T-preview
✨ MoE architecture + 20T tokens + RLVR via ASystem
✨ Strong natural language reasoning (AIME’25: 92.6, close to GPT-5)
✨ IMO tests: advanced problem-solving & reasoning
Hugging Face (Twitter)
RT @reach_vb: LETS FUCKING GOOO! - Use state of the art open LLMs with the simplicity of AI SDK 🤯
Kudos to the team on shipping this! 🤗 https://twitter.com/nishimiya/status/1973032330479669462#m
Hugging Face (Twitter)
RT @ting_: 🤗 GLM-4.6 is on @huggingface API! it's supported by @novita_labs
👇 Tested it with my old prompt for building a pixel-art pilot game; the result is impressive!
FYI what's good about the model:
> 200K context window
> Top-tier reasoning & tool use (91.6 on BFCL v2!)
> Killer coding skills
> More agentic & great for agent products
> Human-like tone, perfect for role-play!
Hugging Face (Twitter)
RT @nic_o_martin: Looks like my first day at @huggingface will mainly consist of traveling. Soon in Stockholm and ready for @nordicjs 😍
Hugging Face (Twitter)
RT @LucSGeorges: How does picklescan work? 🤓
Well first we need to understand why pickle is dangerous: at its core a pickle is a sequence of opcodes interpreted by a form of virtual machine — already sounds fishy, doesn’t it?
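The opcode claim is easy to see first-hand. The sketch below (plain Python standard library, nothing picklescan-specific) disassembles a pickle into its opcode stream, then demonstrates why that little virtual machine is dangerous: an object's `__reduce__` can make the `REDUCE` opcode invoke an arbitrary callable at load time.

```python
import pickle
import pickletools

# A pickle is literally a program: dumps() emits opcodes that
# loads() replays on a small stack machine.
payload = pickle.dumps({"weights": [1.0, 2.0]})
pickletools.dis(payload)  # prints one opcode per line (PROTO, FRAME, ...)

# Why that is fishy: __reduce__ lets an object say "to rebuild me,
# call this function with these arguments". The REDUCE opcode then
# executes that call during unpickling, before you ever see the data.
class LooksInnocent:
    def __reduce__(self):
        # Benign stand-in; a malicious pickle would call os.system etc.
        return (print, ("code executed during pickle.loads!",))

evil = pickle.dumps(LooksInnocent())
pickle.loads(evil)  # runs print(): arbitrary code at load time
```

Scanners in this family walk that opcode stream statically, flagging dangerous imports and callables without ever executing the pickle.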
Hugging Face (Twitter)
RT @TencentHunyuan: We just hit the top of the Hugging Face trend list with two models! 🏆
🔹HunyuanImage 3.0: The largest and most powerful open-source text-to-image model to date with over 80 billion parameters. The performance is comparable to industry flagship closed-source models.
🔹Hunyuan3D-Part: This open-source part-level 3D shape generation model packs key features like P3-SAM, the industry's first native 3D part segmentation, and X-Part, which delivers SOTA controllability and shape quality.
Stop waiting and start building with these powerful models—both are FREE to deploy now!
Try them now:
HunyuanImage 3.0: hunyuan.tencent.com/image
Hunyuan3D-Part: https://3d.hunyuan.tencent.com/studio
Hugging Face (Twitter)
RT @abidlabs: If you are a software engineer who is currently using closed models, what's the biggest obstacle to using open-source models instead?
Hugging Face (Twitter)
RT @ClementDelangue: Time to fine-tune your own models instead of relying on black-box closed-source models!
Not doing this is like building a software company and not writing your own software.
In the age of reinforcement learning, it has become much easier and cheaper than it used to be, thanks to great open-source models, and it's more needed than ever to start your AI learning curve, differentiate yourself, and create better products for your users and customers.
Great to see @thinkymachines contributing to this trend! In my opinion, even if it's been slower to happen than we expected, long-term that's where most of the value will be. https://twitter.com/thinkymachines/status/1973447428977336578#m
Hugging Face (Twitter)
RT @LysandreJik: ServiceNow-AI/Apriel-1.5-15b-Thinker running on a single GPU using `transformers serve` 🔥
great to have some very nice reasoning models that can run locally! next step, trying it on mps 👀
Hugging Face (Twitter)
RT @maximelabonne: LFM2-Audio just dropped!
It's a 1.5B model that understands and generates both text and audio
Inference 10x faster + quality on par with models 10x larger
Available today on @huggingface and our playground 🥳
Hugging Face (Twitter)
RT @reach_vb: 32B-3B, Multilingual, Tool Calling, Long Context - all with Apache 2.0 license 🔥 https://twitter.com/reach_vb/status/1973736685755388314#m