Hugging Face (Twitter)
RT @BrigitteTousi: This Wednesday Aug. 13 at 11 am EDT, join @huggingface on Discord for an AMA with our CEO @ClementDelangue.
No bullshit, just real talk.
Sign up link in thread.
Hugging Face (Twitter)
RT @Xianbao_QIAN: The new talking head model, EchoMimicV3, from Ant Group seems to be pretty cool.
Based on Wan 2.1 1.3B
Hugging Face (Twitter)
RT @gdb: initial gpt-oss download stats looking exciting! https://twitter.com/reach_vb/status/1954909541805801799#m
Hugging Face (Twitter)
RT @vdivyasharma: Excited to release IndicSynth, a large-scale synthetic speech dataset for 12 low-resource Indian languages, winner of the Outstanding Paper Award at #ACL2025!
Dataset: https://huggingface.co/datasets/vdivyasharma/IndicSynth
Paper: https://aclanthology.org/2025.acl-long.1070/
#SBILab #IIITD #NLProc #ACL2025NLP
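A minimal sketch of loading the dataset with the `datasets` library; the split name and streaming access are assumptions, so check the dataset card for the actual configs and fields.

```python
# Minimal sketch: loading IndicSynth via the `datasets` library.
# The split name and streaming mode are assumptions; the dataset card at
# https://huggingface.co/datasets/vdivyasharma/IndicSynth documents the real layout.
from datasets import load_dataset

ds = load_dataset("vdivyasharma/IndicSynth", split="train", streaming=True)
for example in ds.take(3):
    print(example)  # expect audio plus metadata such as language and speaker fields
```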
Hugging Face (Twitter)
RT @dylan_ebert_: InteriorGS: 3D Gaussian Splatting Dataset of Semantically Labeled Indoor Scenes
New dataset with:
- high-quality gaussian splatting scenes
- labeled bounding boxes
- navigation maps
available on hugging face
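A hedged sketch of fetching the dataset files with `huggingface_hub`; the repo id below is a placeholder (the tweet doesn't name one), so look up the actual dataset id on the Hub first.

```python
# Sketch only: downloading the InteriorGS files with huggingface_hub.
# "InteriorGS/InteriorGS" is a placeholder repo id; substitute the real
# dataset id from the Hub listing.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="InteriorGS/InteriorGS",  # placeholder, not verified
    repo_type="dataset",
)
print(local_dir)  # splat scenes, bounding-box labels, and navigation maps land here
```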
Hugging Face (Twitter)
RT @multimodalart: ok i can't take it anymore: announcing the chatgpt image yellow tint corrector
a @huggingface space that runs locally in your browser to fix the yellow tint of ChatGPT-generated images https://twitter.com/SherylHsu02/status/1954966109851119921#m
Hugging Face (Twitter)
RT @lunarflu1: We're excited to announce we're doing an AMA with @ClementDelangue, the CEO of @huggingface, tomorrow! Feel free to hop in and ask your open sourcey questions!
https://discord.com/events/879548962464493619/1404451892179763311
Hugging Face (Twitter)
RT @Xianbao_QIAN: Very impressive multimodal understanding model from @Zai_org
- 106B A12B model
- MIT license model weights
- Supports grounding
- Able to handle GUI tasks
- Image/video understanding & long doc parsing.
Hugging Face (Twitter)
RT @xunhuang1995: World model = Action Conditioned Self-Forcing
Very impressive work from @Skywork_ai. This is a glimpse into the future, and it's open-source to everyone! https://twitter.com/Skywork_ai/status/1955237399912648842#m
Hugging Face (Twitter)
RT @levelsio: I really really like @jandotai
It's a very friendly app to locally run LLMs, great for privacy
I've tried others like LM Studio and Ollama and they're nice but very engineer-built, a bit too difficult for me
Jan is simple and cute and pretty and a great alternative to talk to without sending your data (and secrets ;)) to big AI providers
You can even run remote provider models too via API, if you do want that!
Also they're very responsive to feedback and always improving the app
I think there is space for both locally run LLM apps and cloud LLM apps; locally run makes sense if you wanna talk about very private stuff, therapy, etc. It's really important people can have that without fearing their data might leak in the future
(I'm not affiliated or paid, just really like it!) https://twitter.com/jandotai/status/1955176280535732415#m
Hugging Face (Twitter)
RT @maximelabonne: Liquid just released two VLMs at 450M and 1.6B params!
They're super fast and leverage SigLIP2 NaFlex encoders to handle native resolutions without distortion.
Available today on @huggingface!
Hugging Face (Twitter)
RT @ramin_m_h: meet LFM2-VL: an efficient Liquid vision-language model for the device class. open weights, 450M & 1.6B, up to 2× faster on GPU with competitive accuracy, native 512×512, smart patching for big images.
efficiency is our product @LiquidAI_
download them on @huggingface:
https://huggingface.co/LiquidAI/LFM2-VL-1.6B
https://huggingface.co/LiquidAI/LFM2-VL-450M
read the blog post: https://www.liquid.ai/blog/lfm2-vl-efficient-vision-language-models
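A minimal sketch of trying the 450M checkpoint with transformers, assuming it exposes the standard image-text-to-text interface used by recent VLMs on the Hub; the model card has the authoritative snippet and recommended settings.

```python
# Sketch, not the official snippet: loading LFM2-VL-450M via transformers,
# assuming the standard image-text-to-text interface applies to this release.
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import requests

model_id = "LiquidAI/LFM2-VL-450M"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

# Any test image works; this COCO photo is just an example input.
image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

conversation = [
    {"role": "user", "content": [
        {"type": "image", "image": image},
        {"type": "text", "text": "Describe this image."},
    ]},
]
inputs = processor.apply_chat_template(
    conversation, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```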
Hugging Face (Twitter)
RT @jandotai: Introducing Jan-v1: 4B model for web search, an open-source alternative to Perplexity Pro.
In our evals, Jan v1 delivers 91% SimpleQA accuracy, slightly outperforming Perplexity Pro while running fully locally.
Use cases:
- Web search
- Deep Research
Built on the new version of Qwen's Qwen3-4B-Thinking (up to 256k context length), fine-tuned for reasoning and tool use in Jan.
You can run the model in Jan, llama.cpp, or vLLM. To enable search in Jan, go to Settings → Experimental Features → On, then Settings → MCP Servers → enable a search-related MCP such as Serper.
Use the model:
- Jan-v1-4B: https://huggingface.co/janhq/Jan-v1-4B
- Jan-v1-4B-GGUF: https://huggingface.co/janhq/Jan-v1-4B-GGUF
Credit to the @Alibaba_Qwen team for Qwen3 4B Thinking & @ggerganov for llama.cpp.
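Since vLLM is one of the listed ways to run it, here is a hedged sketch using vLLM's offline Python API; it assumes vLLM's existing Qwen3 support covers this fine-tune, and the sampling settings are illustrative rather than the model's documented defaults.

```python
# Sketch: running Jan-v1-4B locally with vLLM's offline chat API.
# Assumes vLLM's Qwen3 support covers this fine-tune; sampling values are
# illustrative, not recommended defaults.
from vllm import LLM, SamplingParams

llm = LLM(model="janhq/Jan-v1-4B")
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=512)

messages = [{"role": "user", "content": "What changed in the latest open-source LLM releases?"}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```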
Hugging Face (Twitter)
RT @Skywork_ai: Matrix-Game 2.0: the FIRST open-source, real-time, long-sequence interactive world model
Last week, DeepMind's Genie 3 shook the AI world with real-time interactive world models.
But... it wasn't open-sourced.
Today, Matrix-Game 2.0 changed the game.
25FPS. Minutes-long interaction. Fully open-source.
Hugging Face (Twitter)
RT @reach_vb: Matrix Game 2.0 - Open source, real-time, interactive world model on Hugging Face!
Hugging Face (Twitter)
RT @lhoestq: Let me explain why Hugging Face Datasets storage is faster than S3 + why today's release changes everything.
Hugging Face (Twitter)
RT @kadirnardev: We're releasing a 350M-parameter TTS model trained on a 140,000-hour voice dataset as open source on the Vyvo account tomorrow. Turn on notifications!