Hugging Face (Twitter)
RT @nic_o_martin: Looks like my first day at @huggingface will mainly consist of traveling. Soon in Stockholm and ready for @nordicjs 😍
Hugging Face (Twitter)
RT @LucSGeorges: How does picklescan work? 🤓
Well first we need to understand why pickle is dangerous: at its core a pickle is a sequence of opcodes interpreted by a form of virtual machine — already sounds fishy, doesn’t it?
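For the curious, here is a minimal stdlib-only sketch of that danger (illustrative, not picklescan's own code): pickletools shows the opcode stream, and the REDUCE opcode is what lets a pickle call arbitrary Python at load time.

```python
import pickle
import pickletools

# A pickle is just a bytestream of opcodes; pickletools.dis prints them.
payload = pickle.dumps({"weights": [1, 2, 3]})
pickletools.dis(payload)  # opcodes like EMPTY_DICT, SHORT_BINUNICODE, ...

# The danger: a class can abuse __reduce__ so that *loading* the pickle
# calls an arbitrary callable, here os.system.
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))  # executed by pickle.loads

malicious = pickle.dumps(Evil())
pickletools.dis(malicious)  # note the opcodes pushing os.system, then REDUCE
# pickle.loads(malicious) would run the shell command: never load untrusted pickles.
```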
Hugging Face (Twitter)
RT @TencentHunyuan: We just hit the top of the Hugging Face trend list with two models! 🏆
🔹HunyuanImage 3.0: The largest and most powerful open-source text-to-image model to date with over 80 billion parameters. The performance is comparable to industry flagship closed-source models.
🔹Hunyuan3D-Part: An open-source part-level 3D shape generation model packing key features like P3-SAM, the industry's first native 3D part segmentation, and X-Part, which delivers SOTA controllability and shape quality.
Stop waiting and start building with these powerful models—both are FREE to deploy now!
Try them now:
HunyuanImage 3.0: hunyuan.tencent.com/image
Hunyuan3D-Part: https://3d.hunyuan.tencent.com/studio
Hugging Face (Twitter)
RT @abidlabs: If you are a software engineer who is currently using closed models, what's the biggest obstacle to using open-source models instead?
Hugging Face (Twitter)
RT @ClementDelangue: Time to fine-tune your own models instead of relying on blackbox closed-source models!
Not doing this is like building a software company and not writing your own software.
In the age of reinforcement learning, it's become much easier and cheaper than it used to be thanks to great open-source models, and it's more needed than ever to start your AI learning curve, differentiate yourself, and create better products for your users and customers.
Great to see @thinkymachines contributing to this trend! In my opinion, even if it's been slower to happen than we expected, long-term that's where most of the value will be. https://twitter.com/thinkymachines/status/1973447428977336578#m
Hugging Face (Twitter)
RT @LysandreJik: ServiceNow-AI/Apriel-1.5-15b-Thinker running on a single GPU using `transformers serve` 🔥
great to have some very nice reasoning models that can run locally! next step, trying it on mps 👀
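If you want to poke at such a setup, a minimal client sketch might look like this; it assumes `transformers serve` is running locally and exposing its OpenAI-compatible endpoint, and the port below is an assumption to check against your own setup.

```python
# Minimal client for a locally running `transformers serve` instance.
# base_url/port are assumptions; see `transformers serve --help` for yours.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
resp = client.chat.completions.create(
    model="ServiceNow-AI/Apriel-1.5-15b-Thinker",
    messages=[{"role": "user", "content": "Think step by step: what is 17 * 23?"}],
)
print(resp.choices[0].message.content)
```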
Hugging Face (Twitter)
RT @maximelabonne: LFM2-Audio just dropped!
It's a 1.5B model that understands and generates both text and audio
Inference 10x faster + quality on par with models 10x larger
Available today on @huggingface and our playground 🥳
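A minimal sketch for grabbing the weights from the Hub; the repo id below is an assumption, so check the Hub for the exact name.

```python
# Assumed repo id; verify on the Hugging Face Hub before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("LiquidAI/LFM2-Audio-1.5B")
print("Model files downloaded to:", local_dir)
```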
Hugging Face (Twitter)
RT @reach_vb: 32B-3B, Multilingual, Tool Calling, Long Context - all with Apache 2.0 license 🔥 https://twitter.com/reach_vb/status/1973736685755388314#m
Hugging Face (Twitter)
RT @ArtificialAnlys: IBM has launched Granite 4.0 - a new family of open weights language models ranging in size from 3B to 32B. Artificial Analysis was provided pre-release access, and our benchmarking shows Granite 4.0 H Small (32B/9B total/active parameters) scoring an Intelligence Index of 23, with a particular strength in token efficiency
Today IBM released four new models: Granite 4.0 H Small (32B/9B total/active parameters), Granite 4.0 H Tiny (7B/1B), Granite 4.0 H Micro (3B/3B) and Granite 4.0 Micro (3B/3B). We evaluated Granite 4.0 Small (in non-reasoning mode) and Granite 4.0 Micro using the Artificial Analysis Intelligence Index. Granite 4.0 models combine a small number of standard transformer-style attention layers with a majority of Mamba layers, which IBM claims reduces memory requirements without impacting performance
Key benchmarking takeaways:
➤🧠 Granite 4.0 H Small Intelligence: In non-reasoning, Granite 4.0 H Small scores 23 on the...
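To make the hybrid and MoE sizing concrete, here is a toy sketch; the 10:1 Mamba-to-attention ratio is an illustrative assumption, not IBM's published layer schedule, while the 32B total / 9B active figures come from the post above.

```python
# Illustrative only: a toy layer schedule showing the hybrid idea, with a
# majority of Mamba blocks and occasional attention blocks.
def hybrid_schedule(n_layers: int, attention_every: int = 10) -> list[str]:
    return [
        "attention" if (i + 1) % attention_every == 0 else "mamba"
        for i in range(n_layers)
    ]

print(hybrid_schedule(20))  # 18 mamba layers, 2 attention layers

# MoE sizing as reported above: 32B total parameters, 9B active per token.
total, active = 32e9, 9e9
print(f"Active fraction per token: {active / total:.0%}")  # ~28%
```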
Hugging Face (Twitter)
RT @ClementDelangue: IBM is back! They just joined Hugging Face Enterprise & released Granite 4.0 in open-source with a new hybrid Mamba/transformer architecture that reduces memory requirements without reducing accuracy much.
This set of models is great for agentic workflows like tool calling, document analysis, RAG, especially in an enterprise setup 🚀
The "Micro" (3.4B) model can even run 100% locally in your browser on WebGPU, powered by 🤗 TransformersJS!
3B dense hybrid: https://huggingface.co/ibm-granite/granite-4.0-micro
7B MoE with 1B active: https://huggingface.co/ibm-granite/granite-4.0-h-tiny
32B MoE with 9B active: https://huggingface.co/ibm-granite/granite-4.0-h-small
🗂️ Full Model collection: https://huggingface.co/collections/ibm-granite/granite-40-language-models-6811a18b820ef362d9e5a82c
🔗 In-browser demo: https://huggingface.co/spaces/ibm-granite/Granite-4.0-WebGPU
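A minimal local-inference sketch for the 3B Micro model, assuming a transformers version recent enough to ship the Granite 4.0 architecture:

```python
# Local inference with the 3B "Micro" model; requires a recent transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-micro"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "List three risks in this contract clause: ..."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=200)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```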
Hugging Face (Twitter)
RT @victormustar: another open source win:
opencode + GLM 4.6 is basically Claude Code (used it all day) but insanely cheap + better TUI. And you can use it with your Hugging Face token now 🔥 https://twitter.com/victormustar/status/1935285458394583356#m
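Not opencode itself, but the same idea in a few lines: calling GLM 4.6 through Hugging Face's OpenAI-compatible router with an HF token. The model id below is an assumption to verify on the Hub.

```python
# Chat with GLM 4.6 via Hugging Face's OpenAI-compatible router,
# authenticated with an HF token. Model id is an assumption.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)
resp = client.chat.completions.create(
    model="zai-org/GLM-4.6",
    messages=[{"role": "user", "content": "Refactor this recursive function to be iterative."}],
)
print(resp.choices[0].message.content)
```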
Hugging Face (Twitter)
RT @VoyageAI: To evaluate embeddings and retrieval, we need more benchmarks beyond MTEB that are less vulnerable to overfitting. That’s why RTEB was just beta-launched!
⚖️ Both open and held-out datasets to prevent overfitting to evaluation sets.
🌍 Realistic datasets from critical enterprise domains like law, healthcare, code, and finance.
🔎 A sole focus on retrieval applications, with relevant large-scale datasets.
Check out the blog and leaderboard on @huggingface and join the community in building a stronger, more reliable benchmark.
Blog: mongodb.social/6013Ai5sz
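For a feel of what retrieval benchmarks measure, here is a toy evaluation sketch (not RTEB's harness): embed a corpus and a query, rank by similarity, and check that the relevant document ranks first.

```python
# Toy retrieval-style evaluation: embed corpus + query, rank by cosine
# similarity, and check whether the relevant document comes out on top.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative small model
corpus = [
    "The indemnification clause survives termination of this agreement.",
    "Patients presenting with chest pain should receive an ECG within 10 minutes.",
]
queries = ["Which clause remains in force after the contract ends?"]

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(queries, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)
print(hits)  # expect the legal sentence (index 0) to rank first
```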
Hugging Face (Twitter)
What other features would you like to see in 𝚝𝚛𝚊𝚌𝚔𝚒𝚘, our experiment tracking library? https://twitter.com/TrackioApp/status/1973834043210018828#m
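For anyone who hasn't tried it, a minimal logging loop, assuming trackio keeps its wandb-style init/log/finish API:

```python
# Minimal experiment-tracking loop with trackio (pip install trackio),
# assuming the wandb-style API.
import random
import trackio

trackio.init(project="demo-run")
for step in range(100):
    trackio.log({"loss": 1.0 / (step + 1) + random.random() * 0.01})
trackio.finish()
```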
Hugging Face (Twitter)
RT @reach_vb: Pretty cool to see an MIT-licensed 15B model competing w/ DeepSeek R1 - how are the vibes? 👀
Hugging Face (Twitter)
RT @ClementDelangue: 🦾 Great milestone for open-source robotics: pi0 & pi0.5 by @physical_int are now on @huggingface, fully ported to PyTorch in @LeRobotHF and validated side-by-side with OpenPI for everyone to experiment with, fine-tune & deploy in their robots!
As described by Physical Intelligence, π₀.₅ is a Vision-Language-Action model which represents a significant evolution from π₀ to address a big challenge in robotics: open-world generalization.
While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
Generalization must occur at multiple levels:
- Physical Level: Understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments
- Semantic Level: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools...
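A minimal loading sketch; both the import path and the checkpoint id are assumptions to verify against the LeRobot docs, as the exact module layout may differ between releases.

```python
# Load the pi0 policy from LeRobot's PyTorch port. Import path and
# checkpoint id ("lerobot/pi0") are assumptions; check the LeRobot docs.
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy

policy = PI0Policy.from_pretrained("lerobot/pi0")
print(policy)  # a PyTorch nn.Module, ready to fine-tune or run inference
```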
Hugging Face (Twitter)
RT @calebfahlgren: .@pmarca: "My guess is we are going to live in a world in which most aggregate AI is going to be executed probably on smaller form factors and probably most of that is going to be open source" https://twitter.com/collision/status/1973473479061278737#m
Hugging Face (Twitter)
RT @MaziyarPanahi: just hit 4k followers on @huggingface! 🤗
couldn’t have done it without the incredible open-source AI community 💜
Grateful for your trust, support, and collaboration.