Hugging Face (Twitter)
RT @MaziyarPanahi: Big news in healthcare AI! I'm thrilled to announce the launch of OpenMed on @huggingface, releasing 380+ state-of-the-art medical NER models for free under Apache 2.0.
And this is just the beginning!
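These checkpoints are standard token-classification models, so a minimal usage sketch with the `transformers` pipeline looks like the following. The model id below is a hypothetical placeholder (the release spans 380+ checkpoints), and the env-var guard and `extract_entities` helper are illustrative additions, not part of the announcement:

```python
import os


def extract_entities(ner_output, min_score=0.5):
    """Keep (label, text) pairs for predictions above a confidence threshold."""
    return [
        (ent["entity_group"], ent["word"])
        for ent in ner_output
        if ent["score"] >= min_score
    ]


# Live demo only when explicitly enabled (needs `transformers` and network access).
if os.environ.get("RUN_OPENMED_DEMO"):
    from transformers import pipeline

    ner = pipeline(
        "token-classification",
        model="OpenMed/<any-released-NER-checkpoint>",  # hypothetical placeholder id
        aggregation_strategy="simple",  # merge sub-tokens into whole entities
    )
    print(extract_entities(ner("The patient was started on 500 mg of metformin.")))
```

With `aggregation_strategy="simple"`, the pipeline returns one dict per merged entity (`entity_group`, `word`, `score`), which is what the helper filters on.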
Hugging Face (Twitter)
RT @togethercompute: Most AI benchmarks test the past.
But real intelligence is about predicting the future.
Introducing FutureBench: a new benchmark for evaluating agents on real forecasting tasks that we developed with @huggingface
- Reasoning > memorization
- Real-world events
- Dynamic, verifiable outcomes
Read more (link below)
Hugging Face (Twitter)
RT @abidlabs: More than 2200 open-source MCP servers!
https://huggingface.co/spaces?filter=mcp-server
Hugging Face (Twitter)
RT @DataScienceHarp: Had an awesome time visiting the @huggingface office in Paris! Thank you @mervenoyann for the invite. Good to finally meet you and the legend @reach_vb in person. Looking forward to the next time. Cheers!
Hugging Face (Twitter)
RT @pollenrobotics:
- 4 fingers, 8 degrees of freedom
- Dual hobby servos per finger
- Rigid "bones" with a soft TPU shell
- Fully 3D printable
- Weighs 400 g and costs under €200
This is the "Amazing Hand". Check it out:
Try, tweak & share: https://huggingface.co/blog/pollen-robotics/amazing-hand
Hugging Face (Twitter)
RT @ErikKaum: We just released native support for @sgl_project and @vllm_project in Inference Endpoints!
Inference Endpoints is becoming the central place to deploy high-performance inference engines, and it provides the managed infrastructure so you can focus on your users.
Hugging Face (Twitter)
RT @pydantic: Pydantic AI now supports @huggingface as a provider!
You can use it to run open source models like DeepSeek R1 on scalable serverless infrastructure. They have a free tier allowance so you can test it out.
Thanks to the Hugging Face team (@hanouticelina) for this great contribution.
Hugging Face (Twitter)
RT @ClementDelangue: It's so beautiful to see the @Kimi_Moonshot team participating in every single community discussion and pull request on @huggingface (the little blue bubbles on the right).
In my opinion, every serious AI organization should dedicate meaningful time and resources to this, because that's how you build an engaged AI builder community!
Hugging Face (Twitter)
RT @reach_vb: You asked, we delivered! Hugging Face Inference Providers is now fully OpenAI client compatible!
Simply append the provider name to the model ID.
The OpenAI client is arguably the most used client when it comes to LLMs, so getting this right is a big milestone for the team!
Hugging Face (Twitter)
RT @calebfahlgren: The @huggingface Inference Providers is getting even easier to use! Now with a unified OpenAI client route.
Just use the model id and it works. You can also set your preferred provider with `:groq` for example.
Here's how easy it is to use @GroqInc and Kimi K2
Hugging Face (Twitter)
RT @cline: @huggingface & Cline = your LLM playground
You can access Kimi K2 & 6,140 (!) other open source models in Cline.
Hugging Face (Twitter)
RT @marimo_io: Announcing molab: a cloud-hosted marimo notebook workspace with link-based sharing.
Experiment on AI, ML and data using the world's best Python (and SQL!) notebook.
Launching with examples from @huggingface, @weights_biases, and using @PyTorch
https://marimo.io/blog/announcing-molab
Hugging Face (Twitter)
RT @cline: Here's how you can use the @huggingface provider in Cline
(thread)
Hugging Face (Twitter)
RT @Wauplin: Big update: Hugging Face Inference Providers now work out of the box with the OpenAI client!
Just add the provider name to the model ID and you're good to go: "moonshotai/Kimi-K2-Instruct:groq"
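The "model ID + provider suffix" pattern from the posts above can be sketched with the standard OpenAI Python client. This is a minimal sketch, assuming the Hugging Face router's OpenAI-compatible base URL and an `HF_TOKEN` environment variable; the `with_provider` helper is an illustrative addition:

```python
import os


def with_provider(model_id: str, provider: str) -> str:
    """Build the "org/model:provider" id that routes the request to one provider."""
    return f"{model_id}:{provider}"


# Live call only when a token is configured (needs the `openai` package).
if os.environ.get("HF_TOKEN"):
    from openai import OpenAI

    client = OpenAI(
        base_url="https://router.huggingface.co/v1",  # HF router, OpenAI-compatible
        api_key=os.environ["HF_TOKEN"],
    )
    resp = client.chat.completions.create(
        model=with_provider("moonshotai/Kimi-K2-Instruct", "groq"),
        messages=[{"role": "user", "content": "Say hi in one word."}],
    )
    print(resp.choices[0].message.content)
```

Omitting the `:provider` suffix leaves provider selection to the router's defaults, so the suffix is only needed when you want to pin a specific backend such as Groq.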
Hugging Face (Twitter)
RT @arcprize: ARC-AGI-3 Preview games need to be pressure tested. We're hosting a 30-day agent competition in partnership with @huggingface
We're calling on the community to build agents (and win money!)
https://arcprize.org/competitions/arc-agi-3-preview-agents/
Hugging Face (Twitter)
RT @NVIDIAAIDev: Announcing the release of OpenReasoning-Nemotron: a suite of reasoning-capable LLMs distilled from the DeepSeek R1 0528 671B model. Trained on a massive, high-quality dataset distilled from the new DeepSeek R1 0528, our new 7B, 14B, and 32B models achieve SOTA performance on a wide range of reasoning benchmarks for their respective sizes in mathematics, science, and code. The models are available on @huggingface: nvda.ws/456WifL