Hugging Face (Twitter)
RT @BetterStackHQ: Transformers.js lets you run AI models offline in the browser with ONNX + WebGPU. Build a chatbot with Llama 3.2, optimize performance, and try local AI tasks like object detection & background removal.
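For context, a minimal sketch of what that looks like in the browser: Transformers.js exposes a `pipeline` API, and v3 can target the WebGPU backend via the `device` option. The checkpoint below (Xenova/detr-resnet-50) is a commonly used detection model chosen for illustration, not one named in the tweet.

```ts
// Client-side object detection with Transformers.js (sketch).
import { pipeline } from "@huggingface/transformers";

// Ask for the WebGPU backend; model id is an assumed example checkpoint.
const detector = await pipeline("object-detection", "Xenova/detr-resnet-50", {
  device: "webgpu",
});

// Runs offline once the model files are cached by the browser.
const detections = await detector("photo.jpg", { threshold: 0.9 });
console.log(detections); // [{ label, score, box: { xmin, ymin, xmax, ymax } }, ...]
```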
Hugging Face (Twitter)
RT @ClementDelangue: Remi plugged @openai GPT-4o into Reachy Mini and it's pretty cool. Check out the mirror challenge & chess playing in particular!
Fun new capabilities:
- Image analysis: Reachy Mini can now look at a photo it just took and describe or reason about it
- Face tracking: keeps eye contact and makes interactions feel much more natural
- Motion fusion: [head wobble while speaking] + [face tracking] + [emotions or dances] can now run simultaneously
- Face recognition: runs locally
- Autonomous behaviors when idle: when nothing happens for a while, the model can decide to trigger context-based behaviors
Questions for the community:
• Earlier versions used flute sounds when playing emotions. This one speaks instead (for example the "olala" at the start is an emotion + voice). It completely changes how I perceive the robot (pet? human? kind alien?). Should we keep a toggle to switch between voice and flute sounds?
• How do the response delays...
Hugging Face (Twitter)
RT @AdinaYakup: Ring-1T-preview 🔥 1T thinking model released by @AntLingAGI
https://huggingface.co/inclusionAI/Ring-1T-preview
✨ MoE architecture + 20T tokens + RLVR via ASystem
✨ Strong natural language reasoning (AIME’25: 92.6, close to GPT-5)
✨ IMO tests: advanced problem-solving & reasoning
Hugging Face (Twitter)
RT @reach_vb: LETS FUCKING GOOO! - Use state-of-the-art open LLMs with the simplicity of AI SDK 🤯
Kudos to the team on shipping this! 🤗 https://twitter.com/nishimiya/status/1973032330479669462#m
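One way this wiring can look (a sketch, not necessarily the exact integration the tweet links to): Hugging Face's inference router speaks the OpenAI wire format, so the AI SDK's openai-compatible provider can target it. The router URL and model id below are assumptions for illustration.

```ts
import { generateText } from "ai";
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

// Assumed endpoint: HF's OpenAI-compatible router, authenticated with an HF token.
const hf = createOpenAICompatible({
  name: "huggingface",
  baseURL: "https://router.huggingface.co/v1",
  headers: { Authorization: `Bearer ${process.env.HF_TOKEN}` },
});

const { text } = await generateText({
  model: hf("meta-llama/Llama-3.3-70B-Instruct"), // example open model id
  prompt: "In one sentence, what makes open-weight LLMs attractive?",
});
console.log(text);
```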
Hugging Face (Twitter)
RT @ting_: 🤗 GLM-4.6 is on @huggingface API! it's supported by @novita_labs
👇 Tested it with my old prompt (building a pixel-art pilot game) and the result is impressive!
FYI what's good about the model:
> 200K context window
> Top-tier reasoning & tool use (91.6 on BFCL v2!)
> Killer coding skills
> More agentic & great for agent products
> Human-like tone, perfect for role-play!
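Calling it through the Inference Providers API might look like this sketch with `@huggingface/inference`; the Hub repo id `zai-org/GLM-4.6` and the novita provider routing are assumptions based on the tweet.

```ts
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const res = await client.chatCompletion({
  model: "zai-org/GLM-4.6",   // assumed Hub repo id
  provider: "novita",          // provider named in the tweet
  messages: [
    { role: "user", content: "Plan a pixel-art pilot game: core loop in 5 bullets." },
  ],
  max_tokens: 512,
});
console.log(res.choices[0].message.content);
```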
Hugging Face (Twitter)
RT @nic_o_martin: Looks like my first day at @huggingface will mainly consist of traveling. Soon in Stockholm and ready for @nordicjs 😍
Hugging Face (Twitter)
RT @LucSGeorges: How does picklescan work? 🤓
Well first we need to understand why pickle is dangerous: at its core a pickle is a sequence of opcodes interpreted by a form of virtual machine — already sounds fishy, doesn’t it?
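To make that concrete, here is a heavily simplified opcode walker in the spirit of picklescan: step through the opcode stream and flag the GLOBAL + REDUCE combination that lets a pickle import and call something like `os.system` at load time. The real picklescan is Python and decodes every opcode via `pickletools`; this TypeScript toy only understands the protocol-0 opcodes in the classic demo payload.

```ts
// Toy picklescan-style scanner (sketch; uses Node's Buffer).
const DANGEROUS = new Set(["os.system", "posix.system", "builtins.eval", "builtins.exec"]);

// Read a newline-terminated opcode argument, returning [text, next position].
function readLine(buf: Buffer, pos: number): [string, number] {
  const nl = buf.indexOf(0x0a, pos);
  return [buf.toString("latin1", pos, nl), nl + 1];
}

function scan(buf: Buffer): string[] {
  const findings: string[] = [];
  let pos = 0;
  while (pos < buf.length) {
    const op = String.fromCharCode(buf[pos++]);
    switch (op) {
      case "c": { // GLOBAL: pushes module.name onto the stack -- an import!
        let mod: string, name: string;
        [mod, pos] = readLine(buf, pos);
        [name, pos] = readLine(buf, pos);
        if (DANGEROUS.has(`${mod}.${name}`)) findings.push(`suspicious global: ${mod}.${name}`);
        break;
      }
      case "S": // STRING: newline-terminated argument, skip it
        pos = readLine(buf, pos)[1];
        break;
      case "R": // REDUCE: calls whatever is on the stack at *load* time
        findings.push("REDUCE: a callable gets invoked during unpickling");
        break;
      case "(": case "t": break; // MARK / TUPLE carry no inline argument
      case ".": return findings; // STOP
      default: throw new Error(`opcode '${op}' not handled by this sketch`);
    }
  }
  return findings;
}

// Classic malicious pickle: loading it would execute os.system("echo pwned").
const payload = Buffer.from("cos\nsystem\n(S'echo pwned'\ntR.", "latin1");
console.log(scan(payload));
```

The takeaway is the same as the tweet's: "loading data" runs a tiny program, where `c` imports `os.system` and `R` calls it.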
Hugging Face (Twitter)
RT @TencentHunyuan: We just hit the top of the Hugging Face trend list with two models! 🏆
🔹 HunyuanImage 3.0: the largest and most powerful open-source text-to-image model to date, with over 80 billion parameters. Its performance is comparable to industry flagship closed-source models.
🔹 Hunyuan3D-Part: an open-source part-level 3D shape generation model packing key features like P3-SAM, the industry's first native 3D part segmentation, and X-Part, which delivers SOTA controllability and shape quality.
Stop waiting and start building with these powerful models; both are FREE to deploy now!
Try them now:
HunyuanImage 3.0: hunyuan.tencent.com/image
Hunyuan3D-Part: https://3d.hunyuan.tencent.com/studio
Hugging Face (Twitter)
RT @abidlabs: If you are a software engineer who is currently using closed models, what's the biggest obstacle to using open-source models instead?
Hugging Face (Twitter)
RT @ClementDelangue: Time to fine-tune your own models instead of relying on black-box closed-source models!
Not doing this is like building a software company and not writing your own software.
In the era of reinforcement learning, it's become much easier and cheaper than it used to be, thanks to great open-source models, and more needed than ever to start your AI learning curve, differentiate yourself, and create better products for your users and customers.
Great to see @thinkymachines contributing to this trend! In my opinion, even if it's been slower to happen than we expected, long-term that's where most of the value will be. https://twitter.com/thinkymachines/status/1973447428977336578#m
Hugging Face (Twitter)
RT @LysandreJik: ServiceNow-AI/Apriel-1.5-15b-Thinker running on a single GPU using `transformers serve` 🔥
great to have some very nice reasoning models that can run locally! next step, trying it on mps 👀
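Since `transformers serve` exposes an OpenAI-compatible HTTP API, any plain HTTP client can talk to it. A sketch of a client call; the port and path are assumed defaults, so check `transformers serve --help` for the actual values.

```ts
// Query a locally served model over the OpenAI chat-completions wire format.
const res = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "ServiceNow-AI/Apriel-1.5-15b-Thinker",
    messages: [{ role: "user", content: "Briefly: why is the sky blue?" }],
    stream: false,
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```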
Hugging Face (Twitter)
RT @maximelabonne: LFM2-Audio just dropped!
It's a 1.5B model that understands and generates both text and audio
Inference 10x faster + quality on par with models 10x larger
Available today on @huggingface and our playground 🥳