Hugging Face
Hugging Face (Twitter)

RT @pollenrobotics: 🖐️ 4 fingers, 8 degrees of freedom
🔩 Dual hobby servos per finger
🦴 Rigid "bones" with a soft TPU shell
🖨️ Fully 3D printable
⚖️ Weighs 400g and costs under €200

This is the "Amazing Hand". Check it out 👇

Try, tweak & share: https://huggingface.co/blog/pollen-robotics/amazing-hand
Hugging Face (Twitter)

RT @ErikKaum: We just released native support for @sgl_project and @vllm_project in Inference Endpoints 🔥

Inference Endpoints is becoming the central place where you deploy high-performance inference engines.

And it provides the managed infra for them, so you can focus on your users.
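
For context, a rough sketch of spinning up such an endpoint programmatically with `huggingface_hub`. The endpoint name, model repo, and instance settings below are placeholder assumptions, and the exact options exposed for the native vLLM/SGLang engines live in the Inference Endpoints docs:

```python
# Rough sketch: deploy a model to Inference Endpoints from Python.
# All names and instance settings below are illustrative assumptions.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-vllm-demo",                          # hypothetical endpoint name
    repository="Qwen/Qwen2.5-7B-Instruct",   # model repo to serve
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x1",
    instance_type="nvidia-a10g",
)
endpoint.wait()      # block until the endpoint is running
print(endpoint.url)  # base URL to send requests to
```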
Hugging Face (Twitter)

RT @pydantic: Pydantic AI now supports @huggingface as a provider!
You can use it to run open source models like DeepSeek R1 on scalable serverless infrastructure. They have a free tier allowance so you can test it out.

Thanks to the Hugging Face team (@hanouticelina) for this great contribution.
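
A minimal sketch of what that looks like, assuming Pydantic AI's usual "provider:model" string convention carries over to the Hugging Face provider and that an HF token is set in the environment:

```python
# Minimal sketch; the "huggingface:" prefix and the result attribute
# follow Pydantic AI's conventions and are assumptions here.
from pydantic_ai import Agent

agent = Agent("huggingface:deepseek-ai/DeepSeek-R1")

result = agent.run_sync("Summarize the transformer architecture in two sentences.")
print(result.output)
```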
Hugging Face (Twitter)

RT @ClementDelangue: It's so beautiful to see the @Kimi_Moonshot team participating in every single community discussion and pull request on @huggingface (the little blue bubbles on the right).

In my opinion, every serious AI organization should dedicate meaningful time and resources to this because that's how you build an engaged AI builder community!
Hugging Face (Twitter)

RT @reach_vb: You asked, we delivered! Hugging Face Inference Providers is now fully OpenAI client compatible! 🔥

Simply append the provider name to the model ID

The OpenAI client is arguably the most-used client for LLMs, so getting this right is a big milestone for the team! 🤗
Hugging Face (Twitter)

RT @calebfahlgren: The @huggingface Inference Providers is getting even easier to use! Now with a unified OpenAI client route.

Just use the model ID and it works. You can also set your preferred provider with a suffix like `:groq`.

Here's how easy it is to use @GroqInc and Kimi K2
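
Concretely, it looks something like this with the stock OpenAI Python client. The router base URL is the one documented for Inference Providers at the time of writing; the model ID and `:groq` suffix come straight from the examples above:

```python
import os
from openai import OpenAI

# Point the unmodified OpenAI client at Hugging Face's router.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],  # your Hugging Face token
)

completion = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct:groq",  # ":groq" pins the provider
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(completion.choices[0].message.content)
```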
Hugging Face (Twitter)

RT @cline: 🤗🤗🤗
🤗❤️🤗 @huggingface & Cline = your LLM playground
🤗🤗🤗

You can access Kimi K2 & 6,140 (!) other open source models in Cline.
Hugging Face (Twitter)

RT @marimo_io: Announcing molab: a cloud-hosted marimo notebook workspace with link-based sharing.

Experiment on AI, ML and data using the world’s best Python (and SQL!) notebook.

Launching with examples from @huggingface and @weights_biases, using @PyTorch

https://marimo.io/blog/announcing-molab
Hugging Face (Twitter)

RT @cline: Here's how you can use the @huggingface provider in Cline 🤗

(thread)
Hugging Face (Twitter)

RT @Wauplin: Big update: Hugging Face Inference Providers now work out of the box with the OpenAI client!

Just add the provider name to the model ID and you’re good to go: "moonshotai/Kimi-K2-Instruct:groq"
Hugging Face (Twitter)

RT @arcprize: ARC-AGI-3 Preview games need to be pressure-tested. We’re hosting a 30-day agent competition in partnership with @huggingface

We’re calling on the community to build agents (and win money!)

https://arcprize.org/competitions/arc-agi-3-preview-agents/
Hugging Face (Twitter)

RT @NVIDIAAIDev: 📣 Announcing the release of OpenReasoning-Nemotron: a suite of reasoning-capable LLMs distilled from the DeepSeek R1 0528 671B model. Trained on a massive, high-quality dataset distilled from the new DeepSeek R1 0528, our new 7B, 14B, and 32B models achieve SOTA performance on a wide range of reasoning benchmarks for their respective sizes in the domains of mathematics, science, and code. The models are available on @huggingface 🤗: nvda.ws/456WifL
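
A quick way to try one of these locally with transformers. The repo ID below is an assumption based on the announcement's naming; check the Hub for the exact IDs:

```python
# Sketch: chat with the 7B variant via the transformers pipeline.
# Repo ID assumed from the announcement; verify it on the Hub.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="nvidia/OpenReasoning-Nemotron-7B",
    device_map="auto",
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
out = generate(messages, max_new_tokens=512)
print(out[0]["generated_text"][-1]["content"])  # assistant reply
```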
Hugging Face (Twitter)

RT @hugobowne: Training big models used to be reserved for OpenAI or DeepMind.

Now? Builders everywhere have access to clusters of 4090s, Modal credits, and open-weight models like LLaMA 3 and Qwen. 🛠️

In this episode of @VanishingData, @TheZachMueller (@huggingface) joins me to break down what scaling actually looks like in 2025 for individual devs and small teams:

• When to leave Colab and how not to drown in infra the moment you do
• How Accelerate simplifies training and inference across multiple GPUs (see the sketch after this thread)
• Why “data parallelism” is just the start and where things break
• Lessons from helping everyone from solo devs to research labs scale up
• What people still get wrong about distributed training and inference

Links in 🧵

1/
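
For the Accelerate point above, this is the canonical pattern: write a plain PyTorch loop, let `Accelerator` handle device placement and distribution, and launch the same script on one or many GPUs. The toy model and data here are purely illustrative:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Toy model and data so the sketch runs; swap in your real setup.
model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 16), torch.randn(64, 1))
dataloader = DataLoader(dataset, batch_size=8)

accelerator = Accelerator()  # reads device/process config from the launcher
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

loss_fn = torch.nn.MSELoss()
for x, y in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```

The same file runs single-GPU with `python train.py` or distributed with `accelerate launch --num_processes 8 train.py`, with no code changes.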
Hugging Face (Twitter)

RT @NVIDIAAIDev: 🎶 Meet Audio-Flamingo 3 – a fully open large audio-language model (LALM) trained on sound, speech, and music datasets. 🎶

Handles 10-min audio, long-form text, and voice conversations. Perfect for audio QA, dialog, and reasoning.

On @huggingface ➡️ https://huggingface.co/nvidia/audio-flamingo-3

From #NVIDIAResearch.
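
Since the repo ID is given above, a hedged two-liner to pull the checkpoint locally with `huggingface_hub`; how you run inference afterward depends on the instructions in the repo itself:

```python
# Download the Audio-Flamingo 3 checkpoint to the local cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("nvidia/audio-flamingo-3")
print(local_dir)  # path to the downloaded files
```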
Hugging Face (Twitter)

RT @reach_vb: Qwen COOKED - beats Kimi K2 and is competitive with Claude Opus 4 at 25% of the total parameters 🤯
Hugging Face (Twitter)

RT @reach_vb: missed this, @NVIDIAAIDev silently dropped Open Reasoning Nemotron models (1.5-32B), SoTA on LiveCodeBench, CC-BY 4.0 licensed 🔥

> 32B competing with Qwen3 235B and DeepSeek R1
> Available in 1.5B, 7B, 14B and 32B sizes
> Supports up to 64K output tokens
> Utilises GenSelect (combines multiple parallel generations)
> Built on top of Qwen 2.5 series
> Allows commercial usage

Works out of the box in transformers, vllm, mlx, llama.cpp and more!
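
For the vLLM path, a short offline-inference sketch. The repo ID is assumed from the release naming; any of the listed sizes should drop in the same way:

```python
# Sketch: offline generation with vLLM. Model ID is an assumption.
from vllm import LLM, SamplingParams

llm = LLM(model="nvidia/OpenReasoning-Nemotron-1.5B")
params = SamplingParams(temperature=0.6, max_tokens=1024)

outputs = llm.generate(
    ["What is the 10th Fibonacci number? Think step by step."],
    params,
)
print(outputs[0].outputs[0].text)
```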
Hugging Face (Twitter)

RT @lhoestq: A new Pandas feature landed 3 days ago and no one noticed.

Upload ONLY THE NEW DATA to dedupe-based storage like @huggingface (Xet). Data that already exists in other files doesn't need to be uploaded.

Possible thanks to the recent addition of Content Defined Chunking for Parquet.
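
In practice that should look like the snippet below, assuming a pandas/pyarrow pair recent enough to expose the new option (pandas forwards extra keyword arguments to the Parquet engine; the flag name is taken from the recent pyarrow CDC work and may still evolve):

```python
import pandas as pd

df = pd.DataFrame({"id": range(1_000_000), "value": ["x"] * 1_000_000})

# With content-defined chunking, Parquet pages are split at
# content-derived boundaries, so re-uploading a file that mostly
# overlaps an existing one only transfers the genuinely new chunks.
df.to_parquet(
    "data.parquet",
    engine="pyarrow",
    use_content_defined_chunking=True,  # assumed flag from recent pyarrow
)
```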
Hugging Face (Twitter)

RT @casper_hansen_: This is not a SMALL update. This is huge! Give us this for every model please Qwen team🙏