Hugging Face (Twitter)
RT @scaling01: Grok-2 got open-sourced
same arch as grok-1
https://huggingface.co/xai-org/grok-2/
Hugging Face (Twitter)
RT @heyshrutimishra: Hugging Face quietly dropped FREE courses with certification.
They cover everything from LLMs to diffusion models.
Here are the best ones you should bookmark today.
Hugging Face (Twitter)
RT @reach_vb: Microsoft just released VibeVoice - a 1.5B SoTA text-to-speech model - MIT licensed
> It can generate up to 90 minutes of audio
> Supports simultaneous generation of up to 4 speakers
> Streaming and a larger 7B model incoming
> Capable of cross-lingual and singing synthesis
Love the expressiveness and the emotion control on the model! Kudos to Microsoft
Hugging Face (Twitter)
RT @multimodalart: Nano Banana is now available on @huggingface Spaces for free for PRO users!
Hugging Face (Twitter)
RT @mervenoyann: I'm back from vacation and so many VLMs were actually released today
so hyped to be back
Hugging Face (Twitter)
RT @MichaelDell: Great to see @elonmusk and @xai open-sourcing Grok 2.5!
This further democratizes AI, sparks global innovation, and pushes the industry forward.
Starting tomorrow morning, it will be available on the Dell Enterprise Hub @DellTech + @huggingface
dell.huggingface.co/ https://twitter.com/elonmusk/status/1959379349322313920#m
Hugging Face (Twitter)
RT @mervenoyann: first vision language model built off @OpenAI gpt-oss just dropped!
InternVL3.5 comes with 32 models: pre-trained, fine-tuned, and aligned, in various sizes
comes with gpt-oss or Qwen3 for the LLM part
Hugging Face (Twitter)
RT @victormustar: Cool: Nano Banana is now available on Hugging Face for all PRO users
Hugging Face (Twitter)
RT @NousResearch: Nous Research presents Hermes 4, our latest line of hybrid reasoning models.
hermes4.nousresearch.com
Hermes 4 builds on our legacy of user-aligned models with expanded test-time compute capabilities.
Special attention was given to making the models creative and interesting to interact with, unencumbered by censorship, and neutrally aligned, while maintaining state-of-the-art math, coding, and reasoning performance for open-weight models.
Hugging Face (Twitter)
RT @SOSOHAJALAB: $9 / month can do everything in @huggingface https://twitter.com/victormustar/status/1960422110527926766#m
Hugging Face (Twitter)
RT @Alibaba_Wan: Introducing Wan2.2-S2V, a 14B-parameter model designed for film-grade, audio-driven human animation. Going beyond basic talking heads to deliver professional-level quality for film, TV, and digital content. And it's open-source!
Key features:
- Long-video dynamic consistency
- Cinema-quality audio-to-video generation
- Advanced motion and environment control via instruction
Perfect for filmmakers, content creators, and developers crafting immersive AI-powered stories.
Try it now: wan.video/
GitHub: github.com/Wan-Video/Wan2.2
Project: https://humanaigc.github.io/wan-s2v-webpage
Hugging Face Demo: https://huggingface.co/spaces/Wan-AI/Wan2.2-S2V
ModelScope Demo: https://www.modelscope.cn/studios/Wan-AI/Wan2.2-S2V
Hugging Face Weights: https://huggingface.co/Wan-AI/Wan2.2-S2V-14B
ModelScope Weights: https://www.modelscope.cn/models/Wan-AI/Wan2.2-S2V-14B
Hugging Face (Twitter)
RT @reach_vb: LETS GOOO! 2,000,000+ PUBLIC REPOS ON THE HUB
from 100K to 2M in the last couple of years has been surreal - onwards and upwards!
Hugging Face (Twitter)
RT @HaixuanT: Worked 9 months on building an AV1 codec for AI and robotics, and this is what I learned about streaming, training, and storage!
Detailed report here:
AV1 for robotics AI streaming, training and storage.
A blog post by Haixuan Tao on Hugging Face
Hugging Face (Twitter)
RT @victormustar: Another OpenAI release on Hugging Face
https://huggingface.co/datasets/openai/healthbench
Hugging Face (Twitter)
RT @Thom_Wolf: Little-known fact I realized talking with a researcher: the explosion of action-controlled World Models is also powered by strongly improved open-source video models.
Again, open source is enabling teams to explore, tweak, and share mind-blowing new use cases far from the original idea
Hugging Face (Twitter)
RT @NVIDIAAIDev: Ranked #1 on @Meta's Physical Reasoning Leaderboard on @huggingface for a reason.
Cosmos Reason enables robots and AI agents to reason like humans by leveraging prior knowledge, physics, and common sense to intelligently interact with the real world.
This state-of-the-art reasoning VLM excels in physical AI applications like:
- Data curation and annotation
- Robot planning and reasoning
- Video analytics AI agents
See the leaderboard: nvda.ws/4mLUmjd
Check out Cosmos Reason: nvda.ws/425mMfF
Hugging Face (Twitter)
RT @mervenoyann: MiniCPM-V 4.5 is very good!
it comes with hybrid thinking: it decides when to think on its own
it also can handle high-res documents with odd aspect ratios, and super long videos, efficiently
see hybrid results below; model is in comments!