Hugging Face (Twitter)
RT @iScienceLuvr: If you need to know how much time you have left to submit your paper, you can check "AI Conference Deadlines"
There used to be a separate website maintained by Papers with Code, but since Papers with Code was shut down, it's now hosted on Hugging Face
Hugging Face (Twitter)
RT @mervenoyann: upgrade your transformers
it comes with insanely capable models like SAM2, KOSMOS2.5, Florence-2 and more
I built a notebook you can run on the free Colab T4 to walk through the API for the new models; fine-tuning will follow up soon!
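As a hedged sketch of what trying one of these models looks like after upgrading transformers: the Florence-2 checkpoint id, "<CAPTION>" task prompt, and trust_remote_code flag below come from the public model card, not from this tweet or the linked notebook.

```python
# Hedged sketch: caption an image with Florence-2 after `pip install -U transformers`.
# The checkpoint id and "<CAPTION>" prompt are assumptions from the model card.
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image

model_id = "microsoft/Florence-2-base"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg")                     # any local test image
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt")
out = model.generate(input_ids=inputs["input_ids"],
                     pixel_values=inputs["pixel_values"],
                     max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```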
Hugging Face (Twitter)
RT @MaziyarPanahi: Introducing MultiCaRe, an open-source, multimodal clinical case dataset on @HuggingFace by the @OpenMed_AI Community. Public and ready for load_dataset.
Images: 160K+ figures/subimages
Cases: 85K de-identified narratives + demographics
Articles: 85K metadata + abstracts
(1/7)
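A rough sketch of the "ready for load_dataset" claim; the repo id below is a placeholder assumption, so check the dataset card for the exact identifier and available subsets.

```python
# Hedged sketch of loading MultiCaRe with the datasets library.
# "OpenMed-Community/MultiCaRe" is a placeholder repo id, not confirmed.
from datasets import load_dataset

ds = load_dataset("OpenMed-Community/MultiCaRe")  # hypothetical repo id
print(ds)               # available splits and features
print(ds["train"][0])   # one de-identified case record
```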
Hugging Face (Twitter)
RT @Tim_Dettmers: It feels like the coding agent frontier is now open-weights:
GLM 4.5 costs only $3/month and is on par with Sonnet
Kimi K2.1 Turbo is 3x the speed and 7x cheaper vs Opus 4.1, but just as good
Kimi K2.1 feels clean. The best model for me. GPT-5 is only good for complicated specs -- too slow.
Hugging Face (Twitter)
RT @HuggingPapers: Meta researchers just unveiled Set Block Decoding on Hugging Face.
It's a game-changer for language model inference, delivering a 3-5x speedup in token generation with existing models.
No architectural changes needed, and it matches previous performance.
Hugging Face (Twitter)
RT @Xianbao_QIAN: The new @TencentHunyuan Image 2.1 model is really cool.
It reminds me of @Zai_org GLM 4.1. I love how humble these researchers are, calling a great improvement just 0.1
Both the model and demo are released on @huggingface
Hugging Face (Twitter)
RT @tomaarsen: ModernBERT goes MULTILINGUAL!
One of the most requested models I've seen, @jhuclsp has trained state-of-the-art massively multilingual encoders using the ModernBERT architecture: mmBERT.
Stronger than existing models at their sizes, while also much faster!
Details in the thread
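A hedged sketch of using mmBERT as a plain multilingual encoder via transformers; the checkpoint id and the mean-pooling choice are assumptions, not details from the thread.

```python
# Hedged sketch: sentence embeddings from mmBERT with mean pooling.
# The checkpoint id "jhu-clsp/mmBERT-base" is an assumption; check the Hub.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "jhu-clsp/mmBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["A multilingual encoder.", "Un encodeur multilingue."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state          # [batch, seq, dim]

# Mean-pool over non-padding tokens to get one vector per sentence.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```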
Hugging Face (Twitter)
RT @adrgrondin: I gave SmolLM3 by @huggingface a voice
Here's a demo of me talking with the model hands-free on iPhone, thanks to built-in voice activity detection
Everything runs fully on-device, powered by Apple MLX
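For the text-generation side of this (not the voice pipeline), here is a hedged sketch of running SmolLM3 with Apple MLX via mlx-lm; the MLX-converted checkpoint id is an assumption.

```python
# Hedged sketch: run SmolLM3 on-device with mlx-lm (pip install mlx-lm).
# The repo id below is an assumed MLX-converted checkpoint; check the Hub.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SmolLM3-3B-4bit")  # assumed repo id
reply = generate(model, tokenizer,
                 prompt="Tell me a fun fact about space.",
                 max_tokens=100)
print(reply)
```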
Hugging Face (Twitter)
RT @daftengine: aaaaand we're live on @huggingface documentation! Thank you to @lhoestq, @vanstriendaniel and the Hugging Face team for all their help pushing this through and excited for our continued collaboration!
na2.hubs.ly/H010TDt0
#Daft #HuggingFace #Multimodal #OpenSource
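A rough sketch of what the Daft + Hugging Face integration enables, reading Hub-hosted Parquet files through Daft's hf:// paths; the dataset path below is a placeholder.

```python
# Hedged sketch: read a Hub dataset's Parquet files with Daft.
# "username/my-dataset" is a placeholder path, not a real dataset.
import daft

df = daft.read_parquet("hf://datasets/username/my-dataset")
df.show(5)  # preview the first rows
```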
Hugging Face (Twitter)
RT @vanstriendaniel: Visual-TableQA: Complex Table Reasoning Benchmark
- 2.5K tables with 6K QA pairs
- Multi-step reasoning over visual structures
- 92% human validation agreement
- Under $100 generation cost
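A hedged sketch of browsing the benchmark with the datasets library; the repo id and column names are assumptions, so consult the dataset card for the real schema.

```python
# Hedged sketch of inspecting one Visual-TableQA example.
# Repo id and column names ("image", "question", "answer") are assumptions.
from datasets import load_dataset

ds = load_dataset("AI-4-Everyone/Visual-TableQA", split="train")  # hypothetical repo id
example = ds[0]
print(example["question"])   # multi-step reasoning question over the table
print(example["answer"])
example["image"].show()      # rendered table image, if stored as a PIL image
```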
Hugging Face (Twitter)
Our new experiment tracking library, Trackio, now supports logging images, videos, tables, and of course metrics. https://twitter.com/abidlabs/status/1965828375681142903#m
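A minimal sketch of what metric logging with Trackio looks like, assuming its wandb-style init/log/finish API; the rich-media wrapper classes mentioned in the announcement are not shown here because their exact names are not confirmed.

```python
# Hedged sketch of experiment tracking with Trackio (pip install trackio).
# Assumes a wandb-style API: trackio.init / trackio.log / trackio.finish.
import trackio

trackio.init(project="demo-experiment")

for step in range(10):
    # Scalar metrics; per the announcement, images, videos, and tables can
    # also be logged, but the exact wrapper classes are not shown here.
    trackio.log({"loss": 1.0 / (step + 1), "step": step})

trackio.finish()
```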