Hugging Face (Twitter)
RT @lukas_m_ziegler: Imagine having a ping pong robot! 🏓
Researchers and developers building physical AI: meet Reachy 2 from @pollenrobotics, an open-source, humanoid robot for real-world experimentation.
It’s a bimanual mobile manipulator: each 7-DOF arm mimics human proportions and can lift up to 3 kg, giving dexterity for object handling.
It can be controlled with Python and ROS 2 Humble, or driven through VR teleoperation: put on a headset to move Reachy's arms, hands, and head, and see through its cameras as if you're in the robot's own body.
Want it to move around? A mobile base with three omnidirectional wheels, rich sensors, and LiDAR lets Reachy 2 navigate and explore its surroundings smoothly. 🗺️
Under the hood, it’s powered by a CPU system that’s ready for machine learning, perfect for loading AI frameworks and testing new models from @huggingface directly on the robot.
Keep making robots more and more accessible, Pollen team!
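The three-omni-wheel base mentioned above follows standard holonomic-drive kinematics. Here is a minimal illustrative sketch of the usual inverse kinematics for such a base; the wheel angles and base radius are assumed values for illustration, not Reachy 2's actual geometry or firmware:

```python
import math

# Wheel mounting angles for a typical three-wheel omni base (120° apart).
# These values are illustrative; they are not Reachy 2's actual geometry.
WHEEL_ANGLES = [math.radians(a) for a in (0.0, 120.0, 240.0)]
BASE_RADIUS = 0.2  # metres from base centre to each wheel (assumed)

def body_twist_to_wheel_speeds(vx, vy, wz):
    """Map a desired body twist (vx, vy in m/s, wz in rad/s)
    to the linear rim speed of each omni wheel (m/s)."""
    return [
        -math.sin(a) * vx + math.cos(a) * vy + BASE_RADIUS * wz
        for a in WHEEL_ANGLES
    ]

# Pure rotation: every wheel turns at the same rim speed (R * wz).
print(body_twist_to_wheel_speeds(0.0, 0.0, 1.0))
```

With three wheels at 120° spacing, any combination of translation and rotation maps to a unique wheel-speed triple, which is what makes the base omnidirectional.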
Hugging Face (Twitter)
RT @pollenrobotics: 🏓 After mastering chess, xylophone and Jenga towers, Reachy 2 is now taking on ping-pong!
Low-latency teleoperation allows the operator to react quickly enough to return the ball.
Hugging Face (Twitter)
RT @RisingSayak: Wrote an FA3 attention processor for @Alibaba_Qwen Image using the 🤗 Kernels library. The process is so enjoyable!
Stuff cooking stuff coming 🥠
https://gist.github.com/sayakpaul/ff715f979793d4d44beb68e5e08ee067
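The gist plugs a FlashAttention-3 kernel into Qwen-Image's attention path. For reference, every attention processor ultimately computes the same thing: scaled dot-product attention. The NumPy sketch below shows that reference computation only; it is not the gist's code, and an FA3 kernel produces the same result fused and much faster on GPU:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Reference attention: softmax(q @ k^T / sqrt(d)) @ v.
    An FA3 kernel computes the same result, just fused and faster."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((2, 4, 8)) for _ in range(3))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (2, 4, 8)
```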
Hugging Face (Twitter)
RT @HuggingPapers: xAI just released Grok 2 on Hugging Face.
This massive 500GB model, a core part of xAI's 2024 work,
is now openly available to push the boundaries of AI research.
https://huggingface.co/xai-org/grok-2
Hugging Face (Twitter)
RT @elonmusk: The @xai Grok 2.5 model, which was our best model last year, is now open source.
Grok 3 will be made open source in about 6 months.
https://huggingface.co/xai-org/grok-2
Hugging Face (Twitter)
RT @eliebakouch: Grok2 is open source now and available on Hugging Face. I have 2 questions:
- wtf is `model_type: doge`
- wtf is this rope theta value
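The "rope theta" being joked about is the base of the rotary position embedding (RoPE), which sets how fast each dimension pair rotates with position. A minimal sketch of the standard RoPE frequency formula, to show why an unusual theta stands out (the specific values here are illustrative, not Grok 2's config):

```python
import numpy as np

def rope_inv_freq(head_dim, theta):
    """Inverse frequencies for rotary position embeddings:
    inv_freq[i] = theta ** (-i / head_dim), one per dimension pair."""
    i = np.arange(0, head_dim, 2)
    return theta ** (-i / head_dim)

# A larger base theta slows the slowest-rotating dimensions, which is
# one common way models extend usable context length.
small = rope_inv_freq(128, 10_000.0)
large = rope_inv_freq(128, 1_000_000.0)
print(small[-1] > large[-1])  # True: the slowest frequency gets slower
```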
Hugging Face (Twitter)
RT @ClementDelangue: Grok 2 from @xai has just been released on @huggingface: https://huggingface.co/xai-org/grok-2
Hugging Face (Twitter)
RT @Teknium1: .@xai’s Grok 2 weights have been released on @huggingface
https://huggingface.co/xai-org/grok-2
Hugging Face (Twitter)
RT @rohanpaul_ai: Hunyuan 3D-2.1 turns any flat image into studio-quality 3D models.
And you can do it on this @huggingface space for free.
Hugging Face (Twitter)
RT @bdsqlsz: New: Some new image models will be released (open source) at the end of the month. 😎
Hugging Face (Twitter)
RT @abidlabs: Follow huggingface.co/xai-org to stay up to date. https://twitter.com/elonmusk/status/1959379349322313920#m
Hugging Face (Twitter)
RT @HaihaoShen: 🤔A more aggressive INT4 model for DeepSeek-V3.1:
https://huggingface.co/Intel/DeepSeek-V3.1-int4-AutoRound
#intel #autoround #huggingface @deepseek_ai
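AutoRound produces INT4 weight checkpoints by learning the rounding decisions with gradient signals. As a point of reference, here is the naive symmetric INT4 round-trip that such methods improve upon; this is an illustrative baseline, not AutoRound's actual algorithm:

```python
import numpy as np

def int4_quantize(w):
    """Naive symmetric per-tensor INT4: map floats to integers in [-8, 7].
    AutoRound learns better rounding than this round-to-nearest baseline."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def int4_dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(42)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = int4_quantize(w)
w_hat = int4_dequantize(q, scale)
# Only 16 levels, so the error is coarse but bounded by half a step:
print(np.abs(w - w_hat).max() <= scale / 2 + 1e-6)
```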
Hugging Face (Twitter)
RT @QuanquanGu: So many multipliers! Great to see that Grok2 was trained using μP.
https://huggingface.co/xai-org/grok-2
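The "multipliers" refer to μP (Maximal Update Parametrization), which keeps good hyperparameters stable as model width grows by scaling certain initializations, output multipliers, and learning rates with width. A heavily simplified sketch of one common μP rule (hidden-matrix learning rates shrink as 1/width under Adam); this is illustrative only, not Grok 2's actual recipe:

```python
def mup_lr(base_lr, base_width, width, layer_type):
    """Simplified μP learning-rate rule (Adam-style):
    hidden/output matrix LRs shrink as base_width/width; other
    parameter groups (e.g. embeddings, biases) keep the base LR.
    An illustrative reduction of μP, not Grok 2's exact multipliers."""
    if layer_type in ("hidden", "output"):
        return base_lr * base_width / width
    return base_lr

# Tune the LR on a narrow proxy model, transfer it to a wide one:
print(mup_lr(1e-3, 256, 8192, "hidden"))  # 1e-3 / 32
```

The practical payoff is hyperparameter transfer: you sweep learning rates on a cheap narrow model and reuse the optimum at full width.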
Hugging Face (Twitter)
RT @eliebakouch: Wow, pretty cool that they also open sourced a FSDP2 compatible Muon and PolyNorm working with @huggingface kernels! https://twitter.com/eliebakouch/status/1959598428192669870#m
Hugging Face (Twitter)
RT @scaling01: Grok-2 got open-sourced
same arch as grok-1
https://huggingface.co/xai-org/grok-2/
Hugging Face (Twitter)
RT @heyshrutimishra: Hugging Face quietly dropped FREE courses with certification
It covers everything from LLMs to diffusion models.
Here are the best ones you should bookmark today 🧵👇
Hugging Face (Twitter)
RT @reach_vb: Microsoft just released VibeVoice - 1.5B SoTA Text to Speech model - MIT Licensed 🔥
> It can generate up to 90 minutes of audio
> Supports simultaneous generation of 4 speakers
> Streaming and a larger 7B model incoming
> Capable of cross-lingual and singing synthesis
Love the expressiveness and the emotion control on the model! Kudos to Microsoft 🤗