Hugging Face
Hugging Face (Twitter)

RT @dylan_ebert_: InteriorGS: 3D Gaussian Splatting Dataset of Semantically Labeled Indoor Scenes

โญ๏ธnew dataset with:
- high-quality gaussian splatting scenes
- labeled bounding boxes
- navigation maps

available on hugging face
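For readers who want the scenes locally, here is a minimal, hedged sketch that downloads a dataset repo with huggingface_hub; the repo id below is a placeholder, not the confirmed InteriorGS location on the Hub.

```python
# Hedged sketch: download a Hugging Face dataset repo to a local folder.
# The repo id is a PLACEHOLDER; substitute the actual InteriorGS repo id from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="your-org/InteriorGS",  # placeholder, not the confirmed repo id
    repo_type="dataset",
)
print("Dataset files downloaded to:", local_dir)
```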
Hugging Face (Twitter)

RT @multimodalart: ok i can't take it anymore: announcing the chatgpt image yellow tint corrector

a @huggingface space that runs locally in your browser to fix the yellow tint of ChatGPT-generated images https://twitter.com/SherylHsu02/status/1954966109851119921#m
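The Space itself runs in the browser; as a rough illustration of what a yellow-tint fix can look like, here is a simple gray-world white-balance sketch in Python. This is an assumption about one common approach, not the Space's actual implementation.

```python
# Hedged sketch: gray-world white balance to reduce a yellow color cast.
# NOT the Space's implementation; just one common, simple approach.
import numpy as np
from PIL import Image

def remove_color_cast(path_in: str, path_out: str) -> None:
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    # Scale each channel so its mean matches the overall gray mean.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray_mean = channel_means.mean()
    balanced = img * (gray_mean / channel_means)
    Image.fromarray(np.clip(balanced, 0, 255).astype(np.uint8)).save(path_out)

remove_color_cast("chatgpt_image.png", "corrected.png")  # example filenames
```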
Hugging Face (Twitter)

RT @lunarflu1: We're excited to announce we're doing an AMA with @ClementDelangue, the CEO of @huggingface, tomorrow! Feel free to hop in and ask your open sourcey questions! 🚀
https://discord.com/events/879548962464493619/1404451892179763311
Hugging Face (Twitter)

RT @Xianbao_QIAN: Very impressive multimodal understanding model from @Zai_org

- 106B A12B model
- MIT-licensed model weights
- Supports grounding
- Able to handle GUI tasks
- Image/video understanding & long doc parsing.
Hugging Face (Twitter)

RT @xunhuang1995: World model = Action Conditioned Self-Forcing

Very impressive work from @Skywork_ai. This is a glimpse into the future, and it's open-source to everyone! https://twitter.com/Skywork_ai/status/1955237399912648842#m
Hugging Face (Twitter)

RT @levelsio: I really really like @jandotai

It's a very friendly app to locally run LLMs, great for privacy

I've tried others like LM Studio and Ollama and they're nice but very engineer-built, a bit too difficult for me

Jan is simple and cute and pretty and a great alternative to talk to without sending your data (and secrets ;)) to big AI providers

You can even run remote provider models too via API, if you do want that!

Also they're very responsive to feedback and always improving the app

I think there is space for both locally run LLM apps and cloud LLM apps; locally run makes sense if you wanna talk about very private stuff, therapy, etc. It's really important people can have that without fearing their data might leak in the future

(I'm not affiliated or paid, just really like it!) https://twitter.com/jandotai/status/1955176280535732415#m
Hugging Face (Twitter)

RT @maximelabonne: Liquid just released two VLMs at 450M and 1.6B params!

They're super fast and leverage SigLIP2 NaFlex encoders to handle native resolutions without distortion.

Available today on @huggingface!
Hugging Face (Twitter)

RT @ramin_m_h: meet LFM2-VL: an efficient Liquid vision-language model for the device class. open weights, 450M & 1.6B, up to 2× faster on GPU with competitive accuracy, native 512×512, smart patching for big images.

efficiency is our product @LiquidAI_

download them on @huggingface:
https://huggingface.co/LiquidAI/LFM2-VL-1.6B

https://huggingface.co/LiquidAI/LFM2-VL-450M

read the blog post: https://www.liquid.ai/blog/lfm2-vl-efficient-vision-language-models
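A minimal, hedged sketch of loading one of the checkpoints linked above with transformers, assuming the models expose the standard image-text-to-text interface in a recent transformers release; check the model cards for the exact supported usage.

```python
# Hedged sketch: run LFM2-VL-450M via transformers, assuming the standard
# image-text-to-text interface applies; see the model card for exact usage.
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "LiquidAI/LFM2-VL-450M"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder image URL
        {"type": "text", "text": "Describe this image."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```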
Hugging Face (Twitter)

RT @jandotai: Introducing Jan-v1: a 4B model for web search, an open-source alternative to Perplexity Pro.

In our evals, Jan v1 delivers 91% SimpleQA accuracy, slightly outperforming Perplexity Pro while running fully locally.

Use cases:
- Web search
- Deep Research

Built on the new version of Qwen's Qwen3-4B-Thinking (up to 256k context length), fine-tuned for reasoning and tool use in Jan.

You can run the model in Jan, llama.cpp, or vLLM. To enable search in Jan, go to Settings → Experimental Features → On, then Settings → MCP Servers → enable a search-related MCP such as Serper.

Use the model:
- Jan-v1-4B: https://huggingface.co/janhq/Jan-v1-4B
- Jan-v1-4B-GGUF: https://huggingface.co/janhq/Jan-v1-4B-GGUF

Credit to the @Alibaba_Qwen team for Qwen3 4B Thinking & @ggerganov for llama.cpp.
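Since the post lists vLLM as a supported runtime, here is a minimal, hedged sketch of running the checkpoint with vLLM's offline Python API; the prompt and sampling settings are illustrative only, and a chat template would normally be applied for best results.

```python
# Hedged sketch: run Jan-v1-4B locally with vLLM's offline API.
# Sampling settings and the raw (non-chat-templated) prompt are illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(model="janhq/Jan-v1-4B")
params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Who founded Hugging Face?"], params)
print(outputs[0].outputs[0].text)
```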
Hugging Face (Twitter)

RT @Skywork_ai: Matrix-Game 2.0: the FIRST open-source, real-time, long-sequence interactive world model

Last week, DeepMind's Genie 3 shook the AI world with real-time interactive world models.

But... it wasn't open-sourced.

Today, Matrix-Game 2.0 changed the game. 🚀

25FPS. Minutes-long interaction. Fully open-source.
Hugging Face (Twitter)

RT @reach_vb: Matrix Game 2.0 - Open source, real-time, interactive world model on Hugging Face! 🔥
Hugging Face (Twitter)

RT @lhoestq: Let me explain why Hugging Face Datasets storage is faster than S3 + why today's release changes everything 🧵
Hugging Face (Twitter)

RT @kadirnardev: We're releasing a 350M-parameter TTS model trained on a 140,000-hour voice dataset as open source on the Vyvo account tomorrow 🎉 Turn on notifications 🔔
Hugging Face (Twitter)

RT @ClementDelangue: Fun to think about open-source models and their variants as families from an evolutionary biology standpoint and analyze "genetic similarity and mutation of traits over model families".

These are the 2,500th, 250th, 50th and 25th largest families on @huggingface: https://twitter.com/didaoh/status/1955381767420121283#m
Hugging Face (Twitter)

RT @NVIDIAAIDev: We just released a 3-million-sample, high-quality vision-language model training dataset for use cases such as:

📄 optical character recognition (OCR)
📊 visual question answering (VQA)
📝 captioning

🤗 Learn more: nvda.ws/4oyfevu
📥 Download: nvda.ws/4fz2gtB
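For exploring the released samples, here is a minimal, hedged sketch of streaming a vision-language dataset from the Hugging Face Hub with the datasets library; the repo id and split are placeholders, since the post only gives a shortened download link.

```python
# Hedged sketch: stream a few samples of a Hub-hosted VLM dataset with `datasets`.
# The repo id and split are PLACEHOLDERS; use the one behind the download link above.
from datasets import load_dataset

ds = load_dataset("nvidia/placeholder-vlm-dataset", split="train", streaming=True)
for sample in ds.take(3):
    print(sample.keys())
```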
Hugging Face (Twitter)

RT @jxmnop: OpenAI hasn't open-sourced a base model since GPT-2 in 2019. they recently released GPT-OSS, which is reasoning-only...

or is it?

turns out that underneath the surface, there is still a strong base model. so we extracted it.

introducing gpt-oss-20b-base 🧵