Hugging Face (Twitter)
RT @BrigitteTousi: HAPPENING TODAY: Join @ClementDelangue for an AMA on the Hugging Face Discord!
⏰ 8am PST / 11am EST / 16h CET
🔗 https://discord.com/invite/6r5TEXyk?event=1404451892179763311 https://twitter.com/BrigitteTousi/status/1955300164815462460#m
Hugging Face (Twitter)
RT @reach_vb: OpenAI gpt-oss 120B orchestrates a full video using Hugging Face Spaces! 🤯
All of it, in one SINGLE prompt:
create an image of a Labrador and use it to generate a simple video of it
🛠️ Tools used:
1. Flux.1 Krea Dev by @bfl_ml
2. LTX Fast by @Lightricks
That's it, gpt-oss 120B is one of the BEST open source models I've used for tool calling so far! Kudos @OpenAI 🤗
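For readers curious what this looks like in code, here is a minimal sketch of single-prompt tool orchestration, assuming an inference endpoint serving openai/gpt-oss-120b; the two tool schemas below are illustrative stand-ins for the actual Space APIs:

```python
# Hedged sketch: single-prompt tool orchestration with gpt-oss 120B.
# The tool schemas are illustrative, not the real Space interfaces.
from huggingface_hub import InferenceClient

client = InferenceClient(model="openai/gpt-oss-120b")

tools = [
    {
        "type": "function",
        "function": {
            "name": "generate_image",
            "description": "Generate an image from a text prompt (e.g. Flux.1 Krea Dev).",
            "parameters": {
                "type": "object",
                "properties": {"prompt": {"type": "string"}},
                "required": ["prompt"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "image_to_video",
            "description": "Animate an input image into a short video (e.g. LTX Fast).",
            "parameters": {
                "type": "object",
                "properties": {"image_url": {"type": "string"}},
                "required": ["image_url"],
            },
        },
    },
]

response = client.chat_completion(
    messages=[{
        "role": "user",
        "content": "create an image of a Labrador and use it to generate a simple video of it",
    }],
    tools=tools,
    tool_choice="auto",
)
# The model is expected to emit tool calls in order: generate_image, then image_to_video.
print(response.choices[0].message.tool_calls)
```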
Hugging Face (Twitter)
RT @mervenoyann: new TRL comes packed for vision language models 🔥
we shipped support for
> native supervised fine-tuning for VLMs
> multimodal GRPO
> MPO 🫡
read all about it in our blog 🤗 next one!
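As a rough idea of what native VLM fine-tuning looks like, here is a minimal sketch with TRL's SFTTrainer; the model and dataset choices are illustrative, so check the blog post for the supported configurations:

```python
# Hedged sketch of supervised fine-tuning a VLM with TRL's SFTTrainer.
from datasets import load_dataset
from transformers import AutoModelForImageTextToText
from trl import SFTConfig, SFTTrainer

# A conversational image+text dataset used in TRL's VLM examples.
dataset = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft", split="train")

model = AutoModelForImageTextToText.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="bfloat16"
)

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="vlm-sft"),
    train_dataset=dataset,
)
trainer.train()
```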
Hugging Face (Twitter)
RT @Xianbao_QIAN: A very interesting food dish dataset, if you're building a health app/model: 100k carefully curated food samples spanning home-cooked meals, restaurant dishes, raw ingredients, and packaged products.
How it was built is just as valuable:
• 50k real users on Binance captured their own plates, which were then pre-annotated by professional human annotators.
• Machine-generated labels were then spot-checked and refined by Binance users to guarantee quality.
• A slice of the dataset was made available on Hugging Face under an OpenRAIL license.
Sounds like a new approach for crowdsourcing data collection.
Link below:
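Once you have the dataset id from the linked post (not reproduced here), loading the Hugging Face slice is one call; the id below is a placeholder:

```python
from datasets import load_dataset

# Placeholder id -- substitute the real one from the linked post before running.
ds = load_dataset("<org>/<food-dataset-id>", split="train")
print(ds[0])  # e.g. a plate photo plus human-refined labels
```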
Hugging Face (Twitter)
RT @allen_ai: With fresh support of $75M from @NSF and $77M from @nvidia, we’re set to scale our open model ecosystem, bolster the infrastructure behind it, and fast‑track reproducible AI research to unlock the next wave of scientific discovery. 💡
Hugging Face (Twitter)
RT @Xianbao_QIAN: A fully open-sourced, top tier Deep Research framework. Guess which one it is?
Hugging Face (Twitter)
RT @brendanh0gan: introducing qqWen: our fully open-sourced project (code + weights + data + detailed technical report) for full-stack finetuning (pretrain + SFT + RL) of a series of models (1.5B, 3B, 7B, 14B & 32B) for Q, a niche financial programming language
All details below!
Hugging Face (Twitter)
RT @TencentHunyuan: 🚀We are thrilled to open-source Hunyuan-GameCraft, a high-dynamic interactive game video generation framework built on HunyuanVideo.
It generates playable and physically realistic videos from a single scene image and user action signals, empowering creators and developers to "direct" games with first-person or third-person perspectives.
Key Advantages:
🔹High Dynamics: Unifies standard keyboard inputs into a shared continuous action space, enabling high-precision control over velocity and angle. This allows for the exploration of complex trajectories, overcoming the stiff, limited motion of traditional models. It can also generate dynamic environmental content like moving clouds, rain, snow, and water flow.
🔹Long-term Consistency: Uses hybrid history condition to preserve the original scene information after significant movement.
🔹Significant Cost Reduction: No need for expensive modeling/rendering. PCM distillation compresses...
Go to the original post
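The "shared continuous action space" idea can be illustrated outside the framework itself: discrete key presses become continuous velocity/direction vectors that a video model can condition on at arbitrary precision. A toy sketch (not the GameCraft API):

```python
# Illustrative only: map discrete key presses to one continuous action vector.
import math

KEY_DIRECTIONS = {"W": 0.0, "D": 90.0, "S": 180.0, "A": 270.0}  # degrees

def keys_to_action(pressed: set, speed: float = 1.0) -> tuple:
    """Average the unit vectors of pressed keys into a continuous (vx, vy)."""
    if not pressed:
        return (0.0, 0.0)
    xs = [math.sin(math.radians(KEY_DIRECTIONS[k])) for k in pressed]
    ys = [math.cos(math.radians(KEY_DIRECTIONS[k])) for k in pressed]
    return (speed * sum(xs) / len(xs), speed * sum(ys) / len(ys))

print(keys_to_action({"W", "D"}))  # diagonal movement at a 45-degree angle
```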
Hugging Face (Twitter)
RT @AIatMeta: Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks.
Learn more about DINOv3 here: https://ai.meta.com/blog/dinov3-self-supervised-vision-model/?utm_source=twitter&utm_medium=organic_social&utm_content=video&utm_campaign=dinov3
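Extracting frozen-backbone features from the Hub release takes a few lines with transformers; the checkpoint id below is assumed from the DINOv3 collection and may differ:

```python
# Hedged DINOv3 feature-extraction sketch; the checkpoint id is an assumption.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_id = "facebook/dinov3-vitb16-pretrain-lvd1689m"  # assumed from the collection
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).eval()

inputs = processor(images=Image.open("example.jpg"), return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # dense patch features
print(features.shape)  # usable as a frozen backbone for downstream heads
```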
Hugging Face (Twitter)
RT @googleaidevs: Introducing Gemma 3 270M! 🚀 It sets a new standard for instruction-following in compact models, while being extremely efficient for specialized tasks.
Explore Gemma 3 270M, a compact, energy-efficient AI model for task-specific fine-tuning, offering strong instruction-following and production-ready quantization.
Hugging Face (Twitter)
RT @cyrilzakka: DINO’s impact on the field is difficult to overstate and the new family of models is now available for download on the 🤗Hub.
If you’re working on medical imaging workflows, might be a good time to switch your vision backbone: https://huggingface.co/collections/facebook/dinov3-68924841bd6b561778e31009 https://twitter.com/AIatMeta/status/1956027795051831584#m
Hugging Face (Twitter)
RT @cgeorgiaw: Something cool coming...
HDF5 support will dramatically expand @huggingface support for scientific data
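For context, HDF5 is the de facto container format for scientific arrays: one file bundles large typed datasets with their metadata, which Hub-side support would make browsable. A small illustration with the standard h5py library (the file here is made up):

```python
# Illustrative only: what a typical scientific HDF5 file holds.
import h5py
import numpy as np

with h5py.File("experiment.h5", "w") as f:
    dset = f.create_dataset("sensor/readings", data=np.random.rand(1000, 64))
    dset.attrs["units"] = "mV"  # metadata travels with the array

with h5py.File("experiment.h5", "r") as f:
    print(f["sensor/readings"].shape, f["sensor/readings"].attrs["units"])
```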
Hugging Face (Twitter)
270 million (not billion) parameters! https://huggingface.co/google/gemma-3-270m ⚡️⚡️⚡️ https://twitter.com/osanseviero/status/1956024223773663291#m
Hugging Face (Twitter)
RT @dylan_ebert_: This week, the race for world models began.
Matrix-Game 2.0: An Open-Source, Real-Time, and Streaming Interactive World Model
- 25 FPS live video synthesis
- Realtime inputs
- Trained on ~1200 hours of video data
🤗 available now on Hugging Face
Hugging Face (Twitter)
RT @steren: Cloud Run 🤝 @Gradio
"gcloud run deploy" on a Gradio app now just works:
Hugging Face (Twitter)
RT @RisingSayak: If you have been terribly annoyed by the long cold-starts when using `torch.compile` on Qwen-Image, please use regional compilation!
It cuts cold-start time by ~2x while retaining full compilation benefits 🔥
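A minimal sketch, assuming a recent diffusers release that exposes the regional-compilation helper `compile_repeated_blocks`: instead of compiling the whole model graph, it compiles one repeated transformer block and reuses the compiled graph for the rest:

```python
# Regional compilation sketch for Qwen-Image; assumes a recent diffusers version.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

# Compile the repeated transformer blocks once and reuse the graph,
# rather than compiling the full model on first call.
pipe.transformer.compile_repeated_blocks(fullgraph=True)

image = pipe("a cup of coffee on a wooden desk").images[0]
```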
Hugging Face (Twitter)
RT @osanseviero: Some fun things people may have missed from Gemma 3 270M:
1. Out of 270M params, 170M are embedding params and 100M are transformer blocks. BERT from 2018 was larger 🤯
2. The vocabulary is quite large (262,144 tokens). This makes Gemma 3 270M a very good model to hyper-specialize for a task or a specific language, as the model will work very well even with less common tokens.
3. We released both a pre-trained and an instruct model, enabling you to fine-tune for your needs.
4. We collaborated closely with the developer ecosystem to get this out, allowing you to use Hugging Face transformers and transformers.js, Ollama, Kaggle, LM Studio, Docker, LiteRT, Vertex, llama.cpp, Keras, MLX, Gemma.cpp, Unsloth, JAX, Cloud Run, and more.
https://huggingface.co/google/gemma-3-270m
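A quick-start sketch for the instruct variant; the checkpoint id is assumed to follow the usual -it suffix alongside the base model linked above:

```python
# Minimal inference sketch; "google/gemma-3-270m-it" is the assumed instruct id.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-3-270m-it")
messages = [{"role": "user", "content": "Extract the date: 'We met on 12 May 2024.'"}]
out = generator(messages, max_new_tokens=32)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```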
Hugging Face (Twitter)
RT @abhinavexists_: Finally done deploying my upscaling model on @huggingface.
> implemented Multi-Recurrent Branches from scratch
> currently upscaling with a PSNR of 34.2 dB; will be improving it
hf deployment:
https://huggingface.co/Abhinavexists/SeeSharp
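For reference, PSNR is computed as 10 * log10(MAX^2 / MSE), so 34.2 dB means the mean squared error against the ground-truth image is small relative to the 8-bit peak value of 255:

```python
# PSNR = 10 * log10(peak^2 / MSE); higher is better.
import numpy as np

def psnr(reference: np.ndarray, output: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```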