Hugging Face
Hugging Face (Twitter)

RT @ClementDelangue: The GSPO paper by @Alibaba_Qwen is already the third most popular one on @huggingface for the month of July.

I suspect this will have a massive impact on the field! https://huggingface.co/papers/month/2025-07

Also, let's get back to celebrating research papers as massive contributions to the field?
Hugging Face (Twitter)

RT @ClementDelangue: We thought we would get xAI open-source but got zAI, so even better 😅😅😅
Hugging Face (Twitter)

RT @roo_code: Roo Code now supports @huggingface 🤗

Fast config. No extra hosting. And the ability to bring a whopping 91 models directly into your editor. Try it now!
Hugging Face (Twitter)

RT @ivanfioravanti: GLM-4.5-Air-3bit for anybody out there with a Mac with 64GB that wants to try it, while DWQ is cooking 🔥

https://huggingface.co/mlx-community/GLM-4.5-Air-3bit
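
For anyone trying it, here's a minimal sketch using the mlx-lm Python API (assuming mlx-lm is installed and recent enough to support GLM-4.5-Air; the prompt and token budget are placeholders):

```python
# Minimal sketch, not an official snippet: pip install mlx-lm, then run on an
# Apple Silicon Mac with ~64GB of unified memory. Support for GLM-4.5-Air
# depends on your mlx-lm version.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/GLM-4.5-Air-3bit")
print(generate(model, tokenizer, prompt="Hello from 3-bit GLM-4.5-Air!", max_tokens=128))
```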
Hugging Face (Twitter)

RT @ClementDelangue: How much are you using @huggingface's CLI? Mostly to upload and download models and datasets?

We just revamped it (welcome to `hf`!) and added the capability to run jobs directly on our infra. Useful?
Hugging Face (Twitter)

RT @HuggingPapers: TencentARC unveils ARC-Hunyuan-Video-7B on Hugging Face.

A compact 7B multimodal model designed for deep, structured comprehension of real-world short videos, processing visual, audio, & text signals end-to-end.
Hugging Face (Twitter)

RT @Hesamation: a few months ago i shared this interactive blog post "LLM embeddings explained" on @huggingface and it gives me chills that people have actually found it helpful.

yesterday someone posted about it on LinkedIn, made me think about it after a while!
Hugging Face (Twitter)

RT @reach_vb: BOOM! Latest Qwen 30B A3B 2507 running blazingly fast on Mac powered by MLX 💥

mlx_lm.chat --model "lmstudio-community/Qwen3-30B-A3B-Instruct-2507-MLX-4bit"

That's it, try it out today!
Hugging Face (Twitter)

RT @roo_code: Got a favorite @huggingface model? Now it lives in your editor. 🤗

Roo Code makes it easy to connect your API key, choose from 90+ models, and select your preferred inference provider in just a few clicks.

Watch the quick tutorial and explore more: https://docs.roocode.com/providers/huggingface
Hugging Face (Twitter)

RT @vanstriendaniel: I just processed 1000s of prompts using Qwen3-235B-A22B-Instruct-2507 across 4 GPUs!

How? Everyone plays their part:
@astral_sh UV handles dependencies
@huggingface Jobs handles GPUs
@Alibaba_Qwen handles the model
@vllm_project handles inference

One command. Zero complexity!
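
A rough sketch of the vLLM piece of that stack (the repo id, prompts, and sampling settings below are assumptions for illustration, not the author's actual job script):

```python
# Minimal sketch of offline batched inference with vLLM sharded across 4 GPUs.
# Prompts and sampling settings are placeholders; the real run fed thousands of prompts.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-235B-A22B-Instruct-2507", tensor_parallel_size=4)
prompts = ["Summarize this paper in one sentence."] * 8
outputs = llm.generate(prompts, SamplingParams(temperature=0.7, max_tokens=256))
for out in outputs:
    print(out.outputs[0].text)
```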
Hugging Face (Twitter)

RT @lhoestq: > hf jobs is just out and damnnnn I love the uv integration 💛

@huggingface made their scripts uv-ready to run them on HF infra without setting up docker or dependencies.

E.g.
run DPO locally > uv run dpo.py
run DPO on HF > hf jobs uv run dpo.py

Bonus: --flavor for GPUs 🔥
Hugging Face (Twitter)

RT @NielsRogge: Efficient LoFTR was just integrated into @huggingface Transformers!

It improves upon LoFTR, a detector-free image matcher, by being 2.5x faster. It can even surpass the SOTA efficient sparse matching pipeline SuperPoint + LightGlue.

Now available in a few lines of code!
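
A minimal sketch of what "a few lines of code" could look like (the checkpoint id and the keypoint-matching auto class below are assumptions; check the Transformers docs and model card for the exact names):

```python
# Sketch only: matches a pair of images with the newly integrated Efficient LoFTR.
# "zju-community/efficientloftr" and AutoModelForKeypointMatching are assumed names.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForKeypointMatching

ckpt = "zju-community/efficientloftr"  # assumed checkpoint id
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForKeypointMatching.from_pretrained(ckpt)

# Efficient LoFTR is detector-free: it takes the image pair directly.
pair = [[Image.open("view_a.jpg"), Image.open("view_b.jpg")]]
inputs = processor(pair, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # keypoints and matching scores for the pair
```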
Hugging Face (Twitter)

RT @jandotai: Jan v0.6.6 is out: Jan now runs fully on llama.cpp.

- Cortex is gone, local models now run on @ggerganov's llama.cpp
- Toggle between llama.cpp builds
- @huggingface added as a model provider
- Hub enhanced
- Images from MCPs render inline in chat

Update Jan or grab the latest.
Hugging Face (Twitter)

RT @NVIDIAAIDev: 👀 We just opened over 26M lines of synthetic data that was used to train the Llama Nemotron Super v1.5 model.

🔎 This transparency into our model training also helps you build your own models -- without expending the effort and time required to produce your own datasets.

🔒 Find them on @HuggingFace 🤗 https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v1
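
A quick way to peek at the data without pulling all of it down (the split name is an assumption; check the dataset card for the actual configs, splits, and fields):

```python
# Minimal sketch: stream a few examples from the released synthetic dataset.
# split="train" is an assumption; the dataset card lists the real configs/splits.
from datasets import load_dataset

ds = load_dataset(
    "nvidia/Nemotron-Post-Training-Dataset-v1",
    split="train",
    streaming=True,  # iterate lazily instead of downloading everything
)
for example in ds.take(3):
    print(example)
```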
Hugging Face (Twitter)

RT @ClementDelangue: If you're a researcher or engineer releasing open science papers & open models and datasets, I bow to you 🙇🙇🙇

From what I'm hearing, doing so, especially in US big tech, often means fighting your manager and colleagues, going through countless legal meetings, threatening to quit or taking a lower paycheck, and sometimes the result is only that you'll get scolded when what you shared is used by competitors.

But, please remember: research papers and open models and datasets are how progress happens! Your efforts are pushing AI toward a more open and collaborative future. Thanks to openness, your research or models get a chance to be noticed, seen & built upon by people you respect, to accelerate progress, grow your network & amplify your impact.

It might be tough right now, but open science will ultimately prevail, as it always has! The researchers & engineers we'll remember in ten years are the ones who share what they build, not the ones who keep it behind closed doors for company profit maximization.

Please keep fighting for openness. We see you and we thank you! 💚💛 💙💜
Hugging Face (Twitter)

RT @Xianbao_QIAN: Step 3 has just been released. It proposes a new infra-level optimization: Attention-FFN disaggregation.

Model & Infra co-design is the way forward!

Model: https://huggingface.co/stepfun-ai/step3
Technical paper: arxiv.org/abs/2507.19427
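
If you just want the released weights locally, a generic download sketch (serving Step 3, including its attention/FFN disaggregation, needs a compatible inference stack and is out of scope here):

```python
# Minimal sketch: snapshot the stepfun-ai/step3 repo into the local HF cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("stepfun-ai/step3")
print(local_dir)
```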
Hugging Face (Twitter)

RT @victormustar: Black Forest Labs did a great job here, really like the vibe of the outputs.

👇 Free demo is available on Hugging Face https://twitter.com/bfl_ml/status/1950920537741336801#m