Hugging Face (Twitter)
RT @ClementDelangue: Remi plugged @openai GPT-4o to Reachy Mini and it's pretty cool. Check the mirror challenge & chess playing in particular!
Fun new capabilities:
- Image analysis: Reachy Mini can now look at a photo it just took and describe or reason about it
- Face tracking: keeps eye contact and makes interactions feel much more natural
- Motion fusion: [head wobble while speaking] + [face tracking] + [emotions or dances] can now run simultaneously
- Face recognition: runs locally
- Autonomous behaviors when idle: when nothing happens for a while, the model can decide to trigger context-based behaviors
Questions for the community:
• Earlier versions used flute sounds when playing emotions. This one speaks instead (for example the "olala" at the start is an emotion + voice). It completely changes how I perceive the robot (pet? human? kind alien?). Should we keep a toggle to switch between voice and flute sounds?
• How do the response delays...
Hugging Face (Twitter)
RT @AdinaYakup: Ring-1T-preview 🔥 1T thinking model released by @AntLingAGI
https://huggingface.co/inclusionAI/Ring-1T-preview
✨ MoE architecture + 20T tokens + RLVR via ASystem
✨ Strong natural language reasoning (AIME’25: 92.6, close to GPT-5)
✨ IMO tests: advanced problem-solving & reasoning
Hugging Face (Twitter)
RT @reach_vb: LETS FUCKING GOOO! - Use state of the art open LLMs with the simplicity of AI SDK 🤯
Kudos to the team on shipping this! 🤗 https://twitter.com/nishimiya/status/1973032330479669462#m
Hugging Face (Twitter)
RT @ting_: 🤗 GLM-4.6 is on @huggingface API! it's supported by @novita_labs
👇Tested it with my old prompt, building a pixel art pilot game, result is impressive!
FYI what's good about the model:
> 200K context window
> Top-tier reasoning & tool use (91.6 on BFCL v2!)
> Killer coding skills
> More agentic & great for agent products
> Human-like tone, perfect for role-play!
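Since the post says GLM-4.6 is reachable through the Hugging Face API, here is a minimal sketch of the OpenAI-style chat payload such a call would carry. The router endpoint URL and the `zai-org/GLM-4.6` model id are assumptions, not verified here, and the request is only constructed, never sent.

```python
# Sketch: an OpenAI-compatible chat-completions payload for a hosted GLM-4.6.
# ASSUMPTIONS: the router endpoint and the model id "zai-org/GLM-4.6" are
# illustrative. POST the payload with your HF token as a Bearer header.
import json

ROUTER_URL = "https://router.huggingface.co/v1/chat/completions"  # assumed endpoint

def build_chat_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat payload (built here, not sent)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

req = build_chat_request("zai-org/GLM-4.6", "Build a small pixel-art pilot game.")
print(json.dumps(req, indent=2))
```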
Hugging Face (Twitter)
RT @nic_o_martin: Looks like my first day at @huggingface will mainly consist of traveling. Soon in Stockholm and ready for @nordicjs 😍
Hugging Face (Twitter)
RT @LucSGeorges: How does picklescan work? 🤓
Well first we need to understand why pickle is dangerous: at its core a pickle is a sequence of opcodes interpreted by a form of virtual machine — already sounds fishy, doesn’t it?
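The opcode-VM point can be made concrete: an object's `__reduce__` may return any importable callable plus arguments, and `pickle.loads` will call it. The sketch below builds such a payload and then statically walks its opcode stream with `pickletools` — roughly the kind of check a scanner like picklescan performs — without ever unpickling it. The scanning logic here is an illustrative sketch, not picklescan's actual implementation.

```python
# Why pickle is dangerous: __reduce__ lets a pickle request a call to any
# importable callable on load. We craft such a payload, then scan its opcodes
# statically (never calling pickle.loads on it).
import os
import pickle
import pickletools

class Evil:
    def __reduce__(self):
        # On unpickling, the pickle VM would import os.system and call it.
        return (os.system, ("id",))

payload = pickle.dumps(Evil(), protocol=2)

# A scanner flags GLOBAL / STACK_GLOBAL opcodes: they are how a pickle
# reaches arbitrary importable callables.
suspicious = []
for opcode, arg, _pos in pickletools.genops(payload):
    if opcode.name in ("GLOBAL", "STACK_GLOBAL"):
        suspicious.append(arg)

print(suspicious)  # e.g. ['posix system'] -- module and name of the callable
```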
Hugging Face (Twitter)
RT @TencentHunyuan: We just hit the top of the Hugging Face trend list with two models! 🏆
🔹HunyuanImage 3.0: The largest and most powerful open-source text-to-image model to date with over 80 billion parameters. The performance is comparable to industry flagship closed-source models.
🔹Hunyuan3D-Part: This open-source part-level 3D shape generation model packs key features like P3-SAM, the industry's first native 3D part segmentation, and X-Part, which delivers SOTA controllability and shape quality.
Stop waiting and start building with these powerful models—both are FREE to deploy now!
Try them now:
HunyuanImage 3.0: hunyuan.tencent.com/image
Hunyuan3D-Part: https://3d.hunyuan.tencent.com/studio
Hugging Face (Twitter)
RT @abidlabs: If you are a software engineer who is currently using closed models, what's the biggest obstacle to using open-source models instead?
Hugging Face (Twitter)
RT @ClementDelangue: Time to fine-tune your own models instead of relying on blackbox closed-source models!
Not doing this is like building a software company and not writing your own software.
In the era of reinforcement learning, it has become much easier and cheaper than it used to be thanks to great open-source models, and it's more needed than ever to start your AI learning curve, differentiate yourself, and create better products for your users and customers.
Great to see @thinkymachines contributing to this trend! In my opinion, even if it's been slower to happen than we expected, long-term that's where most of the value will be. https://twitter.com/thinkymachines/status/1973447428977336578#m
Hugging Face (Twitter)
RT @LysandreJik: ServiceNow-AI/Apriel-1.5-15b-Thinker running on a single GPU using `transformers serve` 🔥
great to have some very nice reasoning models that can run locally! next step, trying it on mps 👀
Hugging Face (Twitter)
RT @maximelabonne: LFM2-Audio just dropped!
It's a 1.5B model that understands and generates both text and audio
Inference 10x faster + quality on par with models 10x larger
Available today on @huggingface and our playground 🥳
Hugging Face (Twitter)
RT @reach_vb: 32B-3B, Multilingual, Tool Calling, Long Context - all with Apache 2.0 license 🔥 https://twitter.com/reach_vb/status/1973736685755388314#m
Hugging Face (Twitter)
RT @ArtificialAnlys: IBM has launched Granite 4.0 - a new family of open weights language models ranging in size from 3B to 32B. Artificial Analysis was provided pre-release access, and our benchmarking shows Granite 4.0 H Small (32B/9B total/active parameters) scoring an Intelligence Index of 23, with a particular strength in token efficiency
Today IBM released four new models: Granite 4.0 H Small (32B/9B total/active parameters), Granite 4.0 H Tiny (7B/1B), Granite 4.0 H Micro (3B/3B) and Granite 4.0 Micro (3B/3B). We evaluated Granite 4.0 Small (in non-reasoning mode) and Granite 4.0 Micro using the Artificial Analysis Intelligence Index. Granite 4.0 models combine a small number of standard transformer-style attention layers with a majority of Mamba layers, an approach that IBM claims reduces memory requirements without impacting performance.
Key benchmarking takeaways:
➤🧠 Granite 4.0 H Small Intelligence: In non-reasoning, Granite 4.0 H Small scores 23 on the...
Hugging Face (Twitter)
RT @ClementDelangue: IBM is back! They just joined Hugging Face Enterprise & released Granite 4.0 in open-source with a new hybrid Mamba/transformer architecture that reduces memory requirements without reducing accuracy much.
This set of models is great for agentic workflows like tool calling, document analysis, RAG, especially in an enterprise setup 🚀
The "Micro" (3.4B) model can even run 100% locally in your browser on WebGPU, powered by 🤗 TransformersJS!
3B dense hybrid: https://huggingface.co/ibm-granite/granite-4.0-micro
3B MoE with 1B active: https://huggingface.co/ibm-granite/granite-4.0-h-small-base
32B MoE with 9B active: https://huggingface.co/ibm-granite/granite-4.0-h-small
🗂️ Full Model collection: https://huggingface.co/collections/ibm-granite/granite-40-language-models-6811a18b820ef362d9e5a82c
🔗 In-browser demo: https://huggingface.co/spaces/ibm-granite/Granite-4.0-WebGPU
Hugging Face (Twitter)
RT @victormustar: another open source win:
opencode + GLM 4.6 is basically Claude Code (used it all day) but insanely cheap + better TUI. And you can use it with your Hugging Face token now 🔥 https://twitter.com/victormustar/status/1935285458394583356#m
Hugging Face (Twitter)
RT @VoyageAI: To evaluate embeddings and retrieval, we need more benchmarks beyond MTEB that are less vulnerable to overfitting. That’s why RTEB was just beta-launched!
⚖️ Both open and held-out datasets to prevent overfitting to evaluation sets.
🌍 Realistic datasets from critical enterprise domains like law, healthcare, code, and finance.
🔎 Focuses only on retrieval applications, with relevant large-scale datasets.
Check out the blog and leaderboard on @huggingface and join the community in building a stronger, more reliable benchmark.
Blog: mongodb.social/6013Ai5sz