Hugging Face
61 subscribers
640 photos
227 videos
1.1K links
Hugging Face (Twitter)

RT @AIatMeta: Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks.

Learn more about DINOv3 here: https://ai.meta.com/blog/dinov3-self-supervised-vision-model/?utm_source=twitter&utm_medium=organic_social&utm_content=video&utm_campaign=dinov3
Hugging Face (Twitter)

RT @cyrilzakka: DINO’s impact on the field is difficult to overstate and the new family of models is now available for download on the 🤗Hub.
If you’re working on medical imaging workflows, might be a good time to switch your vision backbone: https://huggingface.co/collections/facebook/dinov3-68924841bd6b561778e31009 https://twitter.com/AIatMeta/status/1956027795051831584#m
Hugging Face (Twitter)

RT @cgeorgiaw: Something cool coming...

HDF5 support will dramatically expand @huggingface support for scientific data
Hugging Face (Twitter)

RT @dylan_ebert_: This week, the race for world models began.

Matrix-Game 2.0: An Open-Source, Real-Time, and Streaming Interactive World Model

- 25 FPS live video synthesis
- Realtime inputs
- Trained on ~1200 hours of video data

🤗 available now on Hugging Face
Hugging Face (Twitter)

RT @steren: Cloud Run 🤝 @Gradio
"gcloud run deploy" on a Gradio app now just works:
Hugging Face (Twitter)

RT @RisingSayak: If you have been terribly annoyed by the long cold-starts when using `torch.compile` on Qwen-Image, please use regional compilation!

It cuts the cold-start time by ~2x while retaining full compilation benefits 🔥
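Why regional compilation helps: a transformer-style model is mostly N structurally identical blocks, so compiling one block and reusing the compiled graph for every layer avoids tracing N separate graphs at startup. The pure-Python toy below (hypothetical names; this is not the torch/diffusers API) only models that caching effect:

```python
# Toy model of why regional compilation cuts cold-start time.
# Hypothetical names throughout; not the real torch.compile/diffusers API.
COMPILE_COST = 1.0  # pretend each unique graph costs 1 time unit to compile

def compile_model(block_signatures, cache=None):
    """Return total cold-start compile cost; identical signatures hit the cache."""
    cache = set() if cache is None else cache
    cost = 0.0
    for sig in block_signatures:
        if sig not in cache:
            cache.add(sig)          # first time we see this graph: compile it
            cost += COMPILE_COST
    return cost

n_layers = 60
# Full-model compilation: every block traced as its own unique graph.
full_cost = compile_model([("block", i) for i in range(n_layers)])
# Regional compilation: all blocks share one signature -> one compile, N reuses.
regional_cost = compile_model([("block",)] * n_layers)
print(full_cost, regional_cost)  # 60.0 1.0
```

In the real setting the savings are smaller than this toy's N-fold factor (there is still a whole-model cost outside the repeated blocks), which is consistent with the ~2x figure quoted in the tweet.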
Hugging Face (Twitter)

RT @osanseviero: Some fun things people may have missed from Gemma 3 270M:

1. Out of 270M params, 170M are embedding params and 100M are transformer blocks. BERT from 2018 was larger 🤯
2. The vocabulary is quite large (262144 tokens). This makes Gemma 3 270M a very good model to hyper-specialize for a task or a specific language, as it will work well even with less common tokens.
3. We released both a pre-trained and an instruct model, enabling you to fine-tune for your needs.
4. We collaborated closely with the developer ecosystem to get this out, allowing you to use Hugging Face transformers and transformers.js, Ollama, Kaggle, LM Studio, Docker, LiteRT, Vertex, llama.cpp, Keras, MLX, Gemma.cpp, UnSloth, JAX, Cloud Run, and more.

https://huggingface.co/google/gemma-3-270m
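The embedding-parameter figure quoted in point 1 can be sanity-checked with back-of-the-envelope arithmetic. The hidden size below is an assumption (not stated in the post); the vocabulary size is:

```python
# Back-of-the-envelope check of the parameter split quoted above.
vocab_size = 262_144   # stated in the post
hidden_size = 640      # ASSUMED embedding dimension, not from the post

# The token-embedding table is one vector of hidden_size per vocab entry.
embedding_params = vocab_size * hidden_size
print(f"{embedding_params / 1e6:.1f}M")  # ~167.8M, close to the quoted ~170M
```

With a 262k-token vocabulary, even a modest embedding dimension puts most of the parameter budget in the embedding table, which is why the transformer blocks account for only ~100M params.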
Hugging Face (Twitter)

RT @abhinavexists_: Finally done deploying my upscaling model on @huggingface.
> implemented Multi-Recurrent Branches from scratch.
> currently upscaling with a PSNR of 34.2 dB; will be improving it.

hf deployment:
https://huggingface.co/Abhinavexists/SeeSharp
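PSNR, the metric quoted above, measures reconstruction quality on a log scale relative to the mean squared error; higher is better. A minimal sketch of the standard formula:

```python
import math

def psnr(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    if mse == 0:
        return math.inf  # identical images
    return 20 * math.log10(max_val) - 10 * math.log10(mse)

print(psnr(0.01))          # 20.0 dB for images in [0, 1]
print(psnr(255**2, 255))   # 0.0 dB: error as large as the signal itself
```

For context, values in the low-to-mid 30s dB (like the 34.2 dB quoted) are a typical range reported for single-image super-resolution benchmarks.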
Hugging Face (Twitter)

RT @gm8xx8: Gemma 3 270M joins the family.
⮕ More smol models, please.
Hugging Face (Twitter)

RT @TencentHunyuan: We've heard the community! 📣📣📣

Following the open-source release of our Hunyuan 3D World Model 1.0, we're excited to introduce the new 1.0-Lite version, optimized for consumer-grade GPUs!
This is the first open-source, explorable world generation model compatible with CG pipelines, now more accessible than ever.

Key Technical Optimizations:
🔹Dynamic FP8 Quantization: We’ve cut VRAM requirements by 35%—from 26GB to under 17GB—making it easy to run on consumer GPUs without compromising performance.
🔹SageAttention Quantization: Our method quantizes the Q, K, and V matrices in the Transformer to INT8, combined with dynamic smoothing and hardware optimizations, to achieve an inference speedup of over 3x with less than 1% precision loss.
🔹Cache Algorithm Acceleration: By optimizing redundant time steps, we've significantly improved inference efficiency for a smoother user experience.

Now, developers can run a complex world model...
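The core operation behind the INT8 scheme described above is symmetric per-tensor quantization with a dynamically chosen scale. The numpy sketch below shows only that basic step (the real method adds dynamic smoothing and hardware-level kernel optimizations not reproduced here):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization with a dynamic scale."""
    scale = np.abs(x).max() / 127.0          # scale chosen from the tensor's range
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64)).astype(np.float32)  # stand-in for a Q/K/V matrix
q, scale = quantize_int8(x)

# INT8 storage is 4x smaller than FP32, and reconstruction error stays small:
rel_err = np.abs(dequantize(q, scale) - x).max() / np.abs(x).max()
print(q.dtype, rel_err < 0.01)
```

The worst-case rounding error is half a quantization step (scale/2), i.e. under 0.4% of the tensor's peak value, which is in line with the "<1% precision loss" claim in the post.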

Hugging Face (Twitter)

RT @xenovacom: Google just released their smallest Gemma model ever: Gemma 3 270M! 🤯
🤏 Highly compact & efficient
🤖 Strong instruction-following capabilities
🔧 Perfect candidate for fine-tuning

It's so tiny that it can even run 100% locally in your browser with Transformers.js! 🤗
Hugging Face (Twitter)

RT @QGallouedec: 🚨 Big news! We decided that @huggingface’s post-training library, TRL, will natively support training Vision Language Models 🖼️

This builds on our recent VLM support in SFTTrainer — and we’re not stopping until TRL is the #1 VLM training library 🥇

More here 👉 hf.co/blog/trl-vlm-alignment
Huge thanks to @mervenoyann, @SergioPaniego, and @ariG23498 🔥
Hugging Face (Twitter)

RT @Tu7uruu: 🚀 Big update: Open ASR goes multilingual!

We’re kicking off with 🇩🇪🇫🇷🇮🇹🇪🇸🇵🇹 — German, French, Italian, Spanish & Portuguese.
English ASR has reached a strong level of maturity, so we’re exploring new languages 🌍

More languages coming soon... Which one should we add next?
Hugging Face (Twitter)

RT @Zai_org: Just saw GLM-4.5V is trending #2 on Hugging Face
https://huggingface.co/zai-org/GLM-4.5V
Hugging Face (Twitter)

RT @kadirnardev: We have released our LFM2-350M-based TTS model as open source 🚀 We have also released many different fine-tuned (FT) variants.

GPU Platform: @hyperbolic_labs
Data: Emilia + Emilia Yodas(EN)
LLM Model: LFM2-350M @LiquidAI_
Disk and Space: @huggingface

I'm very happy to have released this model as open source. Many thanks to @VyvoSmartChain

#opensource #speech #tts #huggingface #lfm #gpu
Hugging Face (Twitter)

RT @jetbrains: We didn’t just build Mellum for us.

We open-sourced it for everyone.

Props to @huggingface for helping us get it out there 👌

Find out more about Mellum here: jb.gg/mbz8bq