Hugging Face (Twitter)
RT @vanstriendaniel: HF Jobs just launched!
One-command VLM-based OCR with uv Scripts:
hf jobs uv run [script] ufo-images ufo-text
Classified UFO docs → clean markdown. Zero setup!
Try it → huggingface.co/uv-scripts
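The "uv Scripts" mentioned here are standalone Python files that carry their own dependency metadata inline (PEP 723), which is what lets `uv run` — and therefore `hf jobs uv run` — execute them with zero setup. A minimal sketch of that script shape, with the real OCR work stubbed out (the argument names and behavior are illustrative, not the actual script from the tweet):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []  # a real OCR script would list its VLM/dataset packages here
# ///
"""Minimal uv-script skeleton: input dataset name in, output dataset name out."""
import argparse


def run(input_dataset: str, output_dataset: str) -> str:
    """Stand-in for the real work: a real script would OCR every image in
    input_dataset and push the resulting markdown to output_dataset."""
    return f"{input_dataset} -> {output_dataset}"


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("input_dataset")   # e.g. "ufo-images"
    parser.add_argument("output_dataset")  # e.g. "ufo-text"
    args = parser.parse_args()
    print(run(args.input_dataset, args.output_dataset))
```

Locally this runs as `uv run script.py ufo-images ufo-text`; prefixing it with `hf jobs` runs the same file on Hugging Face infrastructure instead.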
Hugging Face (Twitter)
RT @charliermarsh: The new Hugging Face jobs CLI is powered by uv 🤗
You can use `hf jobs uv run` to initiate a job from a standalone Python script.
Hugging Face (Twitter)
RT @business: Zhipu is releasing its biggest open-source model to date, joining a growing number of Chinese firms ramping up their free artificial intelligence offerings
Bloomberg.com
Chinese OpenAI Challenger Zhipu to Unveil New Open-Source Model
Zhipu is releasing its biggest open-source model to date, joining a growing number of Chinese firms ramping up their free artificial intelligence offerings.
Hugging Face (Twitter)
RT @ClementDelangue: The GSPO paper by @Alibaba_Qwen is already the third most popular one on @huggingface for the month of July.
I suspect this will have a massive impact on the field! https://huggingface.co/papers/month/2025-07
Also, let's get back to celebrating research papers as massive contributions to the field!
Hugging Face (Twitter)
RT @ClementDelangue: We thought we would get xAI open-source but got zAI, so even better
Hugging Face (Twitter)
RT @roo_code: Roo Code now supports @huggingface 🤗
Fast config. No extra hosting. And the ability to bring a whopping 91 models directly into your editor. Try it now!
Hugging Face (Twitter)
RT @ivanfioravanti: GLM-4.5-Air-3bit for anybody out there with a Mac with 64GB that wants to try it, while DWQ is cooking 🔥
https://huggingface.co/mlx-community/GLM-4.5-Air-3bit
Hugging Face (Twitter)
RT @ClementDelangue: How much are you using @huggingface's CLI? Mostly to upload and download models and datasets?
We just revamped it (welcome to `hf`!) and added the capability to run jobs directly on our infra. Useful?
Hugging Face (Twitter)
RT @HuggingPapers: TencentARC unveils ARC-Hunyuan-Video-7B on Hugging Face.
A compact 7B multimodal model designed for deep, structured comprehension of real-world short videos, processing visual, audio, & text signals end-to-end.
Hugging Face (Twitter)
RT @Hesamation: A few months ago I shared this interactive blog post "LLM embeddings explained" on @huggingface, and it gives me chills that people have actually found it helpful.
Yesterday someone posted about it on LinkedIn, which made me think about it again after a while!
Hugging Face (Twitter)
RT @reach_vb: BOOM! Latest Qwen 30B A3B 2507 running blazingly fast on Mac powered by MLX 🔥
mlx_lm.chat --model "lmstudio-community/Qwen3-30B-A3B-Instruct-2507-MLX-4bit"
That's it, try it out today!
Hugging Face (Twitter)
RT @roo_code: Got a favorite @huggingface model? Now it lives in your editor. 🤗
Roo Code makes it easy to connect your API key, choose from 90+ models, and select your preferred inference provider in just a few clicks.
Watch the quick tutorial and explore more: https://docs.roocode.com/providers/huggingface
Hugging Face (Twitter)
RT @vanstriendaniel: I just processed 1000s of prompts using Qwen3-235B-A22B-Instruct-2507 across 4 GPUs!
How? Everyone plays their part:
@astral_sh UV handles dependencies
@huggingface Jobs handles GPUs
@Alibaba_Qwen handles the model
@vllm_project handles inference
One command. Zero complexity!
Hugging Face (Twitter)
RT @lhoestq: > hf jobs is just out and damnnnn I love the uv integration
@huggingface made their scripts uv-ready to run them on HF infra without setting up docker or dependencies.
E.g.
run DPO locally > uv run dpo.py
run DPO on HF > hf jobs uv run dpo.py
Bonus: --flavor for GPUs 🔥
Hugging Face (Twitter)
RT @NielsRogge: Efficient LoFTR was just integrated into @huggingface Transformers!
It improves upon LoFTR, a detector-free image matcher, by being 2.5x faster. It can even surpass the SOTA efficient sparse matching pipeline SuperPoint + LightGlue.
Now available in a few lines of code!
Hugging Face (Twitter)
RT @jandotai: Jan v0.6.6 is out: Jan now runs fully on llama.cpp.
- Cortex is gone, local models now run on @ggerganov's llama.cpp
- Toggle between llama.cpp builds
- @huggingface added as a model provider
- Hub enhanced
- Images from MCPs render inline in chat
Update Jan or grab the latest.
Hugging Face (Twitter)
RT @sleenyre: Happy to have released my first model. A lot of work went into making Krea 1 open source. https://twitter.com/krea_ai/status/1950921488871408075#m
Hugging Face (Twitter)
RT @NVSWSourcer: We just opened over 26M lines of synthetic data that was used to train the Llama Nemotron Super v1.5 model.
Find them on Hugging Face 🤗 bit.ly/4l8DIc7
huggingface.co
nvidia/Nemotron-Post-Training-Dataset-v1 Β· Datasets at Hugging Face
We're on a journey to advance and democratize artificial intelligence through open source and open science.
Hugging Face (Twitter)
RT @NVIDIAAIDev: We just opened over 26M lines of synthetic data that was used to train the Llama Nemotron Super v1.5 model.
This transparency into our model training also helps you build your own models -- without expending the effort and time required to produce your own datasets.
Find them on @HuggingFace 🤗 https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v1