Hugging Face
110 subscribers
1.12K photos
345 videos
1.86K links
Hugging Face (Twitter)

RT @LeRobotHF: 🚀 Big update for LeRobot!

We've launched a new plugin system to support third-party hardware. Now you can integrate any robot, camera, or teleoperator with a simple 'pip install'; there's no need to modify the core library.

This makes open robotics development more extensible, scalable, and community-friendly.

Learn how to create your own plugin: https://huggingface.co/docs/lerobot/integrate_hardware#using-your-own-lerobot-devices-
Hugging Face (Twitter)

RT @Thom_Wolf: LeRobot becoming an easy-to-install alternative to ROS (Robot Operating System) https://twitter.com/LeRobotHF/status/1975930970575397332#m
Hugging Face (Twitter)

RT @mervenoyann: we're celebrating halloween with @togethercompute at @huggingface 🤗🎃

join us in this fine-tuning workshop at our Paris office 🇫🇷
we'll have speakers from Together and our own @SergioPaniego to talk about fine-tuning & alignment 🛠️

find detailed agenda on the next one ⤵️
Hugging Face (Twitter)

RT @reach_vb: The Hugging Face Hub team is on a tear recently:

> You can create custom apps with domains on spaces
> Edit GGUF metadata on the fly
> 100% of the Hub is powered by Xet - faster, more efficient
> Responses API support for ALL Inference Providers
> MCP-UI support for HF MCP Server
> Search papers based on the Org
> Showcase repository size on the UI

and a lot more - excited for the coming weeks/ months as we continue to improve the overall UX! 🤗
Hugging Face (Twitter)

RT @victormustar: Microsoft did something interesting here 👀

“Unlike typical LLMs that are trained to play the role of the ‘assistant’ in conversation, we trained UserLM-8b to simulate the ‘user’ role in conversation”

https://huggingface.co/microsoft/UserLM-8b
Hugging Face (Twitter)

RT @ClementDelangue: Refreshing to see @neuphonicspeech, a London-based seed startup that raised just a few million, top the most trending models on @huggingface today. They manage to stand out amongst 2M public models & giant corporations from the US and China.

Good example that everyone can contribute meaningfully to open-source (and get great visibility and credibility thanks to it) no matter their size, location or compute budgets. We need more of this!
Hugging Face (Twitter)

RT @ClementDelangue: So proud to see Reachy Mini named one of the Best Inventions of 2025 by @TIME!

Huge credit to the @pollenrobotics and @huggingface teams, turning a concept into thousands of units sold and shipped in under 6 months.

We might not be as slick as some other robotics companies (we sure don't do such good marketing videos and demos), but if we hit 100,000 Reachy Minis next year and 1 million by 2027, we’ll have a real shot at transforming robotics and AI through open-source and collaboration.

We’re just getting started 🦾🦾🦾
Hugging Face (Twitter)

RT @xeophon_: nvidia is the western qwen in terms of open releases but yall are not ready for this conversation
Hugging Face (Twitter)

RT @abacaj: Pretty bullish on LoRA fine tuning again. Idk if it’s because the models are so much better today that they adapt much more easily or what... someone should study this
Hugging Face (Twitter)

RT @mervenoyann: meet-up next month at @huggingface Paris office with our friends at @bfl_ml and @fal 🇫🇷🥖🤗

talks, networking, food, swag 🕺🏻are you in? 🤝
Hugging Face (Twitter)

RT @pollenrobotics: The first Reachy Mini units are on their way! 🚀

Our Community Beta Program is starting soon — selected testers will receive their robots to help us improve docs, software & explore new features.

Lite & Wireless versions ship around Dec 15!
Hugging Face (Twitter)

RT @ClementDelangue: It's easier than ever to train, optimize and run your own models thanks to open-source (versus delegating all learning, control, capabilities to black-box APIs).

Cool to see @karpathy proving it once more by leveraging @huggingface fineweb (https://huggingface.co/datasets/karpathy/fineweb-edu-100b-shuffle)! https://twitter.com/karpathy/status/1977755427569111362#m
Hugging Face (Twitter)

RT @BdsLoick: New blog post analyzing the top 50 entities with the most downloaded models on @huggingface 🤗!

The purpose here is to get an idea of the profile of the models with the greatest impact in open source (we are not interested in closed models here!).

Some key findings:
Hugging Face (Twitter)

RT @karpathy: Excited to release new repo: nanochat!
(it's among the most unhinged I've written).

Unlike my earlier similar repo nanoGPT, which only covered pretraining, nanochat is a minimal, from-scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, dependency-minimal codebase. You boot up a cloud GPU box, run a single script, and as little as 4 hours later you can talk to your own LLM in a ChatGPT-like web UI.

It weighs ~8,000 lines of imo quite clean code to:

- Train the tokenizer using a new Rust implementation
- Pretrain a Transformer LLM on FineWeb and evaluate CORE score across a number of metrics
- Midtrain on user-assistant conversations from SmolTalk, multiple-choice questions, and tool use
- SFT, then evaluate the chat model on world-knowledge multiple choice (ARC-E/C, MMLU), math (GSM8K), and code (HumanEval)
- Optionally RL the model on GSM8K with "GRPO"
- Run efficient inference on the model in an Engine with KV cache,...
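The "GRPO" step above refers to Group Relative Policy Optimization, which replaces a learned value baseline with group-relative reward normalization: sample several completions per prompt, then score each one against its group's mean and standard deviation. A rough illustration of that advantage computation (a generic sketch, not nanochat's actual code):

```python
def grpo_advantages(rewards):
    # rewards: scalar rewards for a group of completions sampled
    # from the same prompt. The group-relative advantage is each
    # reward normalized by the group's mean and standard deviation.
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5 or 1.0  # avoid division by zero when all rewards match
    return [(r - mean) / std for r in rewards]

print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]
```

These advantages then weight the policy-gradient update, so correct GSM8K answers in a group are reinforced relative to incorrect ones without training a separate critic.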

Hugging Face (Twitter)

RT @maximelabonne: New LFM2 release 🥳

It's a Japanese PII extractor with only 350M parameters.

It's extremely fast and on par with GPT-5 (!) in terms of quality.

Check it out, it's available today on @huggingface!
Hugging Face (Twitter)

RT @karpathy: @ClementDelangue @huggingface: Ty! huggingface work/infra/datasets are critical to projects like nanochat - to be accurate the source code of nanochat (e.g. at the $100 tier) is ~8KB of Python and ~30GB of fineweb/smoltalk.
Hugging Face (Twitter)

RT @vanstriendaniel: @nanonets just shipped Nanonets-OCR2: new 3B VLM for OCR!

LaTeX equations, tables, handwriting, charts, multilingual - it does it all!

You can try it against your data with one command via @huggingface Jobs - no local GPU needed!

The HF Jobs command/output from the model 👇