Hugging Face (Twitter)
RT @Xianbao_QIAN: ToonComposer: You can now efficiently make cartoons on @huggingface for free
- Input: sketch-based keyframes + a color reference frame
- This @Alibaba_Wan-based model combines in-betweening & colorization
- The model can also fill in areas left blank, guided by a prompt
- Result: saves up to 70% of manual work.
Huge thanks to B&T Studio and Gudong Animation Studio for their permission to use their animation content (Big Fish & Begonia and Mr. Miao) for academic illustration.
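For anyone who prefers to script the demo instead of clicking through it, here is a minimal sketch using gradio_client; the Space id, input list, and endpoint name are all assumptions, so check the Space's "Use via API" panel for the real signature.

```python
# Hypothetical sketch of driving the ToonComposer Space from Python.
# Space id, inputs, and api_name are assumptions; verify on the Space's API page.
from gradio_client import Client, handle_file

client = Client("TencentARC/ToonComposer")        # assumed Space id
result = client.predict(
    handle_file("keyframe_sketch.png"),           # sketch-based keyframe
    handle_file("color_reference.png"),           # color reference frame
    "a fish glides across the frame",             # prompt for blank regions
    api_name="/generate",                         # hypothetical endpoint
)
print(result)  # local path to the generated clip
```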
Hugging Face (Twitter)
RT @reach_vb: BEST PART: they released the entire 1 MILLION hours of data publicly on Hugging Face 🤯 https://twitter.com/reach_vb/status/1957148807562723809#m
Hugging Face (Twitter)
RT @dylan_ebert_: I automated my research discovery.
Claude Code + Hugging Face MCP + Research MCP (my server)
It makes discovering and keeping track of all related research artifacts MUCH faster and easier
here's how it works 👇
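As a rough idea of the wiring, here is a sketch of a project-level .mcp.json for Claude Code (written out via Python); the Hugging Face endpoint is the public MCP server, while the "research" URL is a hypothetical stand-in for @dylan_ebert_'s own server, and the field names should be double-checked against current Claude Code docs.

```python
# Sketch: register two MCP servers with Claude Code via .mcp.json.
# The "research" URL is hypothetical; https://huggingface.co/mcp is
# Hugging Face's public MCP endpoint.
import json

config = {
    "mcpServers": {
        "huggingface": {"type": "http", "url": "https://huggingface.co/mcp"},
        "research": {"type": "http", "url": "http://localhost:8000/mcp"},  # stand-in
    }
}
with open(".mcp.json", "w") as f:
    json.dump(config, f, indent=2)
```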
Hugging Face (Twitter)
RT @arundhati1504: 🎉 Introducing Granary — a 1M-hour, open multilingual speech dataset — plus new #opensource ASR models. 🌍
🤗 Now on HuggingFace: nvda.ws/3Jg0BwV
🔗 Learn more: nvda.ws/41DVP2s bit.ly/4mGyMMA
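If the corpus is hosted as a standard Hub dataset, a streaming load keeps the 1M-hour scale manageable; the repo id and config name below are assumptions, so check the dataset card first.

```python
# Sketch: stream one language split of Granary instead of downloading
# the full corpus. Repo id and config name are assumptions.
from datasets import load_dataset

ds = load_dataset("nvidia/Granary", "en", split="train", streaming=True)
print(next(iter(ds)))  # one audio/transcript example
```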
Hugging Face (Twitter)
RT @AdinaYakup: Before my vacation: Qwen releasing.
When I came back: Qwen still releasing
Respect!!🫡
Meet Qwen Image Edit 🔥 the image editing version of Qwen-Image by @Alibaba_Qwen
https://huggingface.co/Qwen/Qwen-Image-Edit
✨ Apache 2.0
✨ Semantic + Appearance Editing: rotate, restyle, add/remove 🎨
✨ Precise Text Editing → edit CN/EN text, keep style
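A minimal usage sketch, assuming your diffusers build already ships the QwenImageEditPipeline class from the model card (it appeared around the v0.35 era; upgrade if the import fails):

```python
# Sketch: one edit pass with Qwen-Image-Edit via diffusers.
import torch
from diffusers import QwenImageEditPipeline
from PIL import Image

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")

image = Image.open("input.png").convert("RGB")
out = pipe(image=image, prompt="Change the sign's text to 'OPEN'",
           num_inference_steps=50).images[0]
out.save("edited.png")
```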
Hugging Face (Twitter)
RT @VentureBeat: Nvidia releases a new small, open model Nemotron-Nano-9B-v2 with toggle on/off reasoning
VentureBeat
Developers are free to create and distribute derivative models. Importantly, Nvidia does not claim ownership of any outputs generated...
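The on/off reasoning is controlled through the system prompt; a minimal sketch, assuming the "/think" / "/no_think" convention described on the model card and that the repo id below is correct:

```python
# Sketch: toggle Nemotron-Nano-9B-v2 reasoning via the system prompt.
# The repo id and the /think / /no_think strings are assumptions taken
# from the model card; verify before relying on them.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="nvidia/NVIDIA-Nemotron-Nano-9B-v2",  # assumed repo id
    torch_dtype="auto", device_map="auto", trust_remote_code=True,
)

for mode in ("/think", "/no_think"):  # reasoning on vs. off
    messages = [
        {"role": "system", "content": mode},
        {"role": "user", "content": "What is 17 * 24?"},
    ]
    out = pipe(messages, max_new_tokens=256)
    print(mode, "->", out[0]["generated_text"][-1]["content"])
```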
Hugging Face (Twitter)
RT @ClementDelangue: Do we have an org or posts with all the cool Japanese releases on HF? Which ones are the most interesting ones?
あるふ (@alfredplpl)
I really think Hugging Face should catch on more in Japan. I guess I'll just have to build Hugging Face Japan myself after all.
Hugging Face (Twitter)
RT @gm8xx8: NVIDIA Nemotron-Nano v2
Models: 12B Base, 9B Reasoning, 9B Base
- Arch: Hybrid Mamba2–Transformer (128K ctx, 4 attn layers)
- Training: 10.6T tokens (3.5T synthetic from DeepSeek, Qwen, Nemotron-4, phi-4, etc.)
- 15 natural languages + 43 programming languages
- Datasets: Nemotron-CC v2 + Nemotron-CC-Math (133B tokens, 5.5× FineMath)
Benchmarks
- Math: 91.4 GSM8K CoT, 63.6 MATH L5, +30→56.7 AIME
- Code: 58.5 HumanEval+, 58.9 MBPP+
- Commonsense: 90.7 ARC, 79.9 HellaSwag
- Long-context: 82.2 RULER-128K
Highlights
- Nemotron-CC-Math: First scalable pipeline using Lynx + LLM cleanup to preserve LaTeX + code in web data. Delivers SOTA boosts (+12.6 MATH, +14.3 MBPP+) vs prior open math sets
- Efficiency: Distilled 12B→9B (480B tokens), ~1.5e24 FLOPs, ~724 MWh disclosed
- Deployment: Hugging Face, NGC, NeMo, TRT-LLM, vLLM | GPU-optimized
- Open: Models, datasets, and full extraction pipelines released
Hugging Face (Twitter)
RT @ctnzr: Today we're releasing NVIDIA Nemotron Nano v2 - a 9B hybrid SSM that is 6X faster than similarly sized models, while also being more accurate.
Along with this model, we are also releasing most of the data we used to create it, including the pretraining corpus.
Links to the models, datasets, and tech report are here:
https://research.nvidia.com/labs/adlr/NVIDIA-Nemotron-Nano-2/
Hugging Face (Twitter)
RT @NielsRogge: Ok ngl this is cool! The end of LoRAs??
Powered by @FAL as inference provider. Try it out below! https://twitter.com/Alibaba_Qwen/status/1957500569029079083#m
Hugging Face (Twitter)
RT @maximelabonne: LFM2-VL support with GGUF and llama.cpp 🥳
You can now run these tiny, hyper-efficient VLMs on your watch!
We released quantized checkpoints for LFM2-VL-450M and LFM2-VL-1.6B on @huggingface
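To grab one of the quantized checkpoints for llama.cpp, something like the following should work; the repo id and filenames are assumptions, so browse the actual repo on the Hub first.

```python
# Sketch: fetch a quantized LFM2-VL checkpoint for llama.cpp.
# Repo id and filenames are assumptions; vision models in llama.cpp
# also need the matching mmproj file from the same repo.
from huggingface_hub import hf_hub_download

gguf = hf_hub_download("LiquidAI/LFM2-VL-450M-GGUF",      # assumed repo id
                       "LFM2-VL-450M-Q4_0.gguf")          # assumed filename
mmproj = hf_hub_download("LiquidAI/LFM2-VL-450M-GGUF",
                         "mmproj-LFM2-VL-450M-F16.gguf")  # assumed filename
print(gguf, mmproj)  # pass both to llama.cpp's multimodal CLI/server
```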
Hugging Face (Twitter)
RT @multimodalart: IT'S OUT! 🚀 MoDA: Multi-modal Diffusion Architecture for Talking Head Generation
finally a talking head:
open source 🏋️
fast ⚡
portrait + audio-driven 🧑🎨🎧
with emotion control
(and yes, i built an inference system + Gradio, generate in < 15s on @huggingface spaces 🤗)
Hugging Face (Twitter)
RT @HaihaoShen: GLMs:
https://huggingface.co/Intel/GLM-4.5-gguf-q2ks-mixed-AutoRound
https://huggingface.co/Intel/GLM-4.5V-int4-AutoRound
#intel #autoround @thukeg
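AutoRound checkpoints generally load through plain transformers once the auto-round package is installed; here is a sketch against the int4-mixed text checkpoint (GLM-4.5 is a very large MoE, so treat this as illustrative rather than laptop-friendly):

```python
# Sketch: load an AutoRound int4 checkpoint with transformers.
# Requires `pip install auto-round`; illustrative only, since GLM-4.5
# is far too large for most single machines.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Intel/GLM-4.5-int4-mixed-AutoRound"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
```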
Hugging Face (Twitter)
RT @Xianbao_QIAN: nano-banana, qwen-image-edit, what else?
Try @StepFun_ai NextStep-1-Large-Edit
- 14B AR model
- Apache 2 license
- Demo available on @huggingface
- Pretrained model also available
Link below
Hugging Face (Twitter)
RT @allen_ai: We’re releasing early pre-training checkpoints for OLMo-2-1B to help study how LLM capabilities emerge. They’re fine-grained snapshots intended for analysis, reproduction, and comparison. 🧵
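Such snapshots are typically exposed as Hub revisions, so a specific checkpoint can be pulled with the revision argument; the branch name below is hypothetical, so list the real ones first.

```python
# Sketch: load one intermediate OLMo-2-1B snapshot by Hub revision.
# The revision string below is hypothetical; list real branches first.
from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM

refs = list_repo_refs("allenai/OLMo-2-0425-1B")
print([b.name for b in refs.branches][:5])  # inspect available snapshots

model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    revision="stage1-step10000-tokens21B",  # hypothetical branch name
)
```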
Hugging Face (Twitter)
RT @RisingSayak: It's out, friends!
Really great to see image editing and video fidelity being pushed further and further, thanks to the community!
This release also features new fine-tuning scripts for Qwen-Image and Flux Kontext (with support for image inputs). So, get busy making these models your own 🤗
We also improved the loading speed of Diffusers pipelines & models. This will become particularly evident when operating with large models like Wan, Qwen, etc.
Release notes: https://github.com/huggingface/diffusers/releases/tag/v0.35.0
Hugging Face (Twitter)
RT @ClementDelangue: Deepseek just released a new model! https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base
Hugging Face (Twitter)
RT @ClementDelangue: Just crossed 20M monthly requests with @huggingface inference providers, our router for open models.
@CerebrasSystems @novita_labs & @FireworksAI_HQ are growing the fastest!
It's now powering the official open playground from @OpenAI & integrates with apps like @cline & @roo_code.
Let's go!
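Routing a request through a specific provider is a one-liner with huggingface_hub's InferenceClient; the provider/model pairing below is just an example, not a statement of what each provider actually serves.

```python
# Sketch: route an open-model chat call through an inference provider.
# Needs an HF token in the environment; the provider/model pairing is
# an example, so check which models each provider serves.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="cerebras")  # or "novita", "fireworks-ai", ...
resp = client.chat.completions.create(
    model="Qwen/Qwen3-32B",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```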