Hugging Face
Hugging Face (Twitter)

RT @HuggingPapers: StepFun just released Step-3 on Hugging Face!

It's a new 321B-parameter VLM that's "Large yet Affordable," co-designed for cost-effective decoding.

Achieves unprecedented efficiency, setting a new Pareto frontier for LLM inference.
Hugging Face (Twitter)

RT @NielsRogge: StepFun quietly dropped a 321B parameter VLM on @huggingface, trained on Hopper GPUs similar to DeepSeek, except more efficient

@StepFun_ai is yet another Chinese AI player besides @deepseek_ai, @Alibaba_Qwen, @Kimi_Moonshot, @MiniMax__AI, @TencentHunyuan and @Zai_org https://twitter.com/HuggingPapers/status/1952038716488208409#m
Hugging Face (Twitter)

RT @Alibaba_Qwen: 🚀 Meet Qwen-Image — a 20B MMDiT model for next-gen text-to-image generation. Especially strong at creating stunning graphic posters with native text. Now open-source.

🔍 Key Highlights:
🔹 SOTA text rendering — rivals GPT-4o in English, best-in-class for Chinese
🔹 In-pixel text generation — no overlays, fully integrated
🔹 Bilingual support, diverse fonts, complex layouts

🎨 Also excels at general image generation — from photorealistic to anime, impressionist to minimalist. A true creative powerhouse.

Blog: https://qwenlm.github.io/blog/qwen-image/
Hugging Face: https://huggingface.co/Qwen/Qwen-Image
ModelScope: https://modelscope.cn/models/Qwen/Qwen-Image
GitHub: github.com/QwenLM/Qwen-Image
Technical report: https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/Qwen_Image.pdf
Demo: https://modelscope.cn/aigc/imageGeneration?tab=advanced
Hugging Face (Twitter)

RT @_fracapuano: We shipped @LeRobotHF to its first major release, on Pypi and GitHub.

Alongside the team at @huggingface we’re making robotics more accessible, collaborative, and we hope this release makes contributing easier and better.

Links in 🧵
Hugging Face (Twitter)

RT @jandotai: Hugging Face 🤝 Jan

You can now use Hugging Face as a remote model provider in Jan.

Go to Settings -> Model Providers -> add your Hugging Face API key. Then open a new chat and pick a model from @huggingface.

Works with any model from Hugging Face, right inside Jan.
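For context, a remote-provider setup like this boils down to an OpenAI-compatible call against Hugging Face's Inference Providers router using your API key. A minimal sketch of the equivalent direct call (not Jan's actual code; the model id is only an example):

```python
# Minimal sketch: calling a Hugging Face-hosted chat model through the
# OpenAI-compatible router, roughly what a remote-provider setup does behind the scenes.
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",  # Hugging Face Inference Providers router
    api_key="hf_...",                             # your Hugging Face API key
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # example model id; any chat model on the Hub works
    messages=[{"role": "user", "content": "Hello from Jan!"}],
)
print(response.choices[0].message.content)
```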
Hugging Face (Twitter)

RT @abidlabs: New Gradio component: 🥳 gr.Dialogue:

• As output, it can display diarized speech transcriptions
• As input, it's perfect for multi-speaker TTS models, and it also supports auto-complete tags 🪄

Try it out in Gradio 5.40!
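A rough sketch of how the new component could be wired up as an output; the constructor argument (speakers) and the list-of-dicts value format are assumptions for illustration, so the Gradio 5.40 changelog is the reference for the real signature:

```python
# Hedged sketch: using gr.Dialogue as the output of a (fake) diarization function.
# NOTE: the `speakers` argument and the list-of-dicts value format are assumptions,
# not a confirmed API; check the Gradio 5.40 docs.
import gradio as gr

def fake_diarize(audio):
    # A real app would run speech-to-text plus speaker diarization here.
    return [
        {"speaker": "Speaker 1", "text": "Hi, thanks for joining the call."},
        {"speaker": "Speaker 2", "text": "Happy to be here!"},
    ]

demo = gr.Interface(
    fn=fake_diarize,
    inputs=gr.Audio(type="filepath"),
    outputs=gr.Dialogue(speakers=["Speaker 1", "Speaker 2"]),
)

if __name__ == "__main__":
    demo.launch()
```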
Hugging Face (Twitter)

RT @jackvial89: I've created a @LeRobotHF @huggingface dataset for the screwdriver robot. This dataset contains 391 human demonstrations of attaching a part with a screw in 3 positions: left, right, center. Currently training a few different models on this dataset!
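Loading a dataset like that for policy training takes only a few lines with the lerobot package; a hedged sketch (the repo id is a placeholder, not the actual dataset, and import paths may differ slightly across lerobot releases):

```python
# Hedged sketch: loading a LeRobot dataset from the Hub and iterating over batches.
# "username/screwdriver-demos" is a placeholder repo id, not the real dataset.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # path may vary by release
from torch.utils.data import DataLoader

dataset = LeRobotDataset("username/screwdriver-demos")  # placeholder repo id
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch in loader:
    # each batch is a dict of observation/action tensors ready for policy training
    break
```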
Hugging Face (Twitter)

RT @RisingSayak: Wait is over 🤯

An Apache 2.0 DiT-based image generation model from @Alibaba_Qwen -- Qwen-Image 🔥

Supported in Diffusers. Training script PR is up and should be merged soon.

Go, fire!
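With Diffusers support in place, generation is a short script; a minimal sketch using the model id from the announcement above (generation arguments are illustrative, the model card has the recommended settings):

```python
# Minimal sketch: running Qwen-Image via Diffusers.
# Generation arguments beyond the prompt are illustrative; see the model card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = 'A poster with the headline "Open Source AI" in bold lettering'
image = pipe(prompt=prompt, num_inference_steps=50).images[0]
image.save("qwen_image_poster.png")
```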
Hugging Face (Twitter)

RT @romainhuet: A great day to be a developer! Stay tuned! 🤗
Hugging Face (Twitter)

RT @_lewtun: One line of code is all it takes to fine-tune the gpt-oss models from @OpenAI 🔥

> Support to target the MoE expert layers with PEFT
> Kernels for FlashAttention3 & MegaBlocks
> Fast inference with MXFP4 quantization format

In our testing, these models are extremely efficient to tune and can be adapted to new domains with just a few hundred samples 🤯

Download the models: huggingface.co/openai
Training & inference recipes: https://github.com/huggingface/gpt-oss-recipes/tree/main
Hugging Face (Twitter)

RT @mervenoyann: gpt-oss @OpenAI is here! 🔥

> two MoEs with 21B/3.6B and 117B/5.1B total/active params, efficient reasoning models 🤯
> use & fine-tune with transformers & TRL 🛠️
> inference powered by @huggingface Inference Providers 🫡
> apache 2.0 license 💗
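Running one of the checkpoints locally with transformers is equally short; a minimal sketch (generation settings are illustrative):

```python
# Minimal sketch: chatting with gpt-oss-20b locally via transformers.
# Generation settings are illustrative; see the model card for recommended defaults.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in one paragraph."}]
output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])  # last message is the assistant reply
```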
Hugging Face (Twitter)

RT @multimodalart: the gpt-oss model is really easy to tune!

get started with customizing/fine-tuning to make gpt-oss your own with the @OpenAI + @huggingface cookbook 🤝

https://cookbook.openai.com/articles/gpt-oss/fine-tune-transfomers
Hugging Face (Twitter)

RT @reach_vb: OpenAI COOKED! That's an Apache 2.0 licensed 120B model competing with OpenAI o3 🤯

> 120B and 20B models
> 128K context
> First open model able to call tools within its chain of thought
> Released with optimised kernels

Apache 2.0 license! What a landmark release - Kudos @OpenAIDevs 🤗
Hugging Face (Twitter)

RT @reach_vb: The best open model currently available on Inference Providers, blazing fast! Powered by @CerebrasSystems 🔥

Try it out today! https://twitter.com/reach_vb/status/1952782804023988557#m
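Trying the provider-accelerated model from Python takes a few lines with huggingface_hub; a hedged sketch assuming the gpt-oss-120b + Cerebras pairing from the tweet (provider and model availability can change):

```python
# Hedged sketch: calling gpt-oss-120b through Hugging Face Inference Providers,
# pinned to the Cerebras provider as in the tweet. Requires a HF token.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="cerebras", api_key="hf_...")

completion = client.chat_completion(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "Give me one fun fact about open-source AI."}],
)
print(completion.choices[0].message.content)
```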
Hugging Face (Twitter)

RT @ClementDelangue: When @sama told me at the AI summit in Paris that they were serious about releasing open-source models & asked what would be useful, I couldn’t believe it.

But six months of collaboration later, here it is: welcome gpt-oss on @huggingface! It comes in two sizes: one for maximum reasoning capability, and a cheaper, faster option for on-device use, both Apache 2.0. It’s integrated with our inference partners, which power the official demo.

This open-source release is critically important & timely, because as @WhiteHouse emphasized in the US Action Plan, we need stronger American open-source AI foundations. And who could do that better than the very startup that has been pioneering and leading the field in so many ways?

Feels like a plot twist.
Feels like a comeback.
Feels like the beginning of something big, let’s go open-source AI 🔥🔥🔥