Hugging Face (Twitter)
RT @calebfahlgren: The @huggingface Inference Providers is getting even easier to use! Now with a unified OpenAI client route.
Just use the model id and it works. You can also set your preferred provider with `:groq` for example.
Here's how easy it is to use @GroqInc and Kimi K2
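The route above can be sketched with the stock OpenAI Python client. The router base URL and the `HF_TOKEN` environment variable are the standard Hugging Face settings for this feature; double-check both against the Inference Providers docs before relying on them.

```python
import os

def split_model_id(model_id: str):
    """Split 'org/model:provider' into (model, provider or None)."""
    model, _, provider = model_id.partition(":")
    return model, provider or None

# The ":groq" suffix pins the request to Groq; without a suffix the
# router picks a provider for you.
assert split_model_id("moonshotai/Kimi-K2-Instruct:groq") == (
    "moonshotai/Kimi-K2-Instruct",
    "groq",
)

# Live call: needs `pip install openai` and a Hugging Face token in HF_TOKEN.
try:
    from openai import OpenAI
except ImportError:
    OpenAI = None

if OpenAI is not None and os.environ.get("HF_TOKEN"):
    client = OpenAI(
        base_url="https://router.huggingface.co/v1",  # HF Inference Providers router
        api_key=os.environ["HF_TOKEN"],
    )
    resp = client.chat.completions.create(
        model="moonshotai/Kimi-K2-Instruct:groq",
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(resp.choices[0].message.content)
```

The nice part of this design is that the same client code works for any provider: only the suffix on the model id changes.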
Hugging Face (Twitter)
RT @cline: 🤗🤗🤗
🤗❤️🤗 @huggingface & Cline = your LLM playground
🤗🤗🤗
You can access Kimi K2 & 6,140 (!) other open source models in Cline.
Hugging Face (Twitter)
RT @marimo_io: Announcing molab: a cloud-hosted marimo notebook workspace with link-based sharing.
Experiment on AI, ML and data using the world's best Python (and SQL!) notebook.
Launching with examples from @huggingface, @weights_biases, and using @PyTorch
https://marimo.io/blog/announcing-molab
Hugging Face (Twitter)
RT @cline: Here's how you can use the @huggingface provider in Cline 🤗
(thread)
Hugging Face (Twitter)
RT @Wauplin: Big update: Hugging Face Inference Providers now work out of the box with the OpenAI client!
Just add the provider name to the model ID and you're good to go: "moonshotai/Kimi-K2-Instruct:groq"
Hugging Face (Twitter)
RT @arcprize: ARC-AGI-3 Preview games need to be pressure tested. We're hosting a 30-day agent competition in partnership with @huggingface
We're calling on the community to build agents (and win money!)
https://arcprize.org/competitions/arc-agi-3-preview-agents/
Hugging Face (Twitter)
RT @NVIDIAAIDev: 📣 Announcing the release of OpenReasoning-Nemotron: a suite of reasoning-capable LLMs distilled from the DeepSeek R1 0528 671B model. Trained on a massive, high-quality dataset distilled from the new DeepSeek R1 0528, our new 7B, 14B, and 32B models achieve SOTA performance on a wide range of reasoning benchmarks for their respective sizes in the domains of mathematics, science, and code. The models are available on @huggingface 🤗: nvda.ws/456WifL
Hugging Face (Twitter)
RT @hugobowne: Training big models used to be reserved for OpenAI or DeepMind.
Now? Builders everywhere have access to clusters of 4090s, Modal credits, and open-weight models like LLaMA 3 and Qwen. 🛠️
In this episode of @VanishingData, @TheZachMueller (@huggingface) joins me to break down what scaling actually looks like in 2025 for individual devs and small teams:
β’ When to leave Colab and how not to drown in infra the moment you do
β’ How Accelerate simplifies training and inference across multiple GPUs
β’ Why βdata parallelismβ is just the start and where things break
β’ Lessons from helping everyone from solo devs to research labs scale up
β’ What people still get wrong about distributed training and inference
Links in 🧵
1/
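The Accelerate pattern mentioned in the episode can be sketched in a few lines. This is a toy loop under stated assumptions (`torch` and `accelerate` installed; the linear model and random data are made up for illustration), not a production recipe:

```python
# Sketch of the Accelerate loop: Accelerator.prepare() wraps the model,
# optimizer, and dataloader so the same script runs unchanged on CPU,
# one GPU, or many GPUs (launched via `accelerate launch train.py`).
try:
    import torch
    from accelerate import Accelerator
except ImportError:  # keep the sketch importable without the deps
    torch = None

steps = 0
if torch is not None:
    from torch.utils.data import DataLoader, TensorDataset

    accelerator = Accelerator()
    model = torch.nn.Linear(4, 1)  # toy model for illustration
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    data = TensorDataset(torch.randn(32, 4), torch.randn(32, 1))
    loader = DataLoader(data, batch_size=8)

    # prepare() moves everything to the right device(s) and shards the
    # dataloader across processes when running distributed.
    model, opt, loader = accelerator.prepare(model, opt, loader)

    for x, y in loader:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        accelerator.backward(loss)  # replaces loss.backward()
        opt.step()
        steps += 1
```

The point of the episode's pitch: the loop body is plain PyTorch, and the "data parallelism is just the start" caveats kick in only once you outgrow this pattern.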
Hugging Face (Twitter)
RT @NVIDIAAIDev: 🎶 Meet Audio-Flamingo 3, a fully open LALM (large audio-language model) trained on sound, speech, and music datasets. 🎶
Handles 10-min audio, long-form text, and voice conversations. Perfect for audio QA, dialog, and reasoning.
On @huggingface ➡️ https://huggingface.co/nvidia/audio-flamingo-3
From #NVIDIAResearch.
Hugging Face (Twitter)
RT @reach_vb: Qwen COOKED - beats Kimi K2 and is competitive with Claude Opus 4 at 25% of the total parameters 🤯
Hugging Face (Twitter)
RT @reach_vb: missed this, @NVIDIAAIDev silently dropped Open Reasoning Nemotron models (1.5-32B), SoTA on LiveCodeBench, CC-BY 4.0 licensed 🔥
> 32B competing with Qwen3 235B and DeepSeek R1
> Available in 1.5B, 7B, 14B and 32B sizes
> Supports up to 64K output tokens
> Utilises GenSelect (combines multiple parallel generations)
> Built on top of Qwen 2.5 series
> Allows commercial usage
Works out of the box in transformers, vllm, mlx, llama.cpp and more!
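A minimal transformers sketch of the "works out of the box" claim. The model id here is an assumption inferred from the announcement's naming pattern; confirm the exact ids on the NVIDIA org page on the Hub before use:

```python
# Hypothetical usage sketch for the Open Reasoning Nemotron models via the
# transformers text-generation pipeline (chat-style messages input).
import os

MODEL_ID = "nvidia/OpenReasoning-Nemotron-7B"  # assumed id, check the Hub

def reasoning_messages(question: str) -> list:
    """Chat-format input for a reasoning-tuned model."""
    return [{"role": "user", "content": question}]

msgs = reasoning_messages("Prove that the sum of two even numbers is even.")

# The weights are a multi-GB download, so gate the real run behind an env var.
if os.environ.get("RUN_NEMOTRON"):
    from transformers import pipeline  # pip install transformers accelerate

    pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    out = pipe(msgs, max_new_tokens=1024)
    print(out[0]["generated_text"][-1]["content"])
```

With up to 64K output tokens available, reasoning traces can be long; `max_new_tokens` is the knob to raise for harder problems.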
Hugging Face (Twitter)
RT @lhoestq: A new Pandas feature landed 3 days ago and no one noticed.
Upload ONLY THE NEW DATA to dedupe-based storage like @huggingface (Xet). Data that already exists in other files doesn't need to be uploaded.
Possible thanks to the recent addition of Content Defined Chunking for Parquet.
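Why content-defined chunking enables this can be shown with a toy chunker. This is a simplified stand-in for illustration, not the actual Xet or Parquet algorithm:

```python
# Toy content-defined chunking: boundaries depend on the bytes themselves,
# not on file offsets, so appending rows leaves earlier chunks byte-identical,
# and identical chunks (same hash) never need re-uploading.
import hashlib

def cdc_chunks(data: bytes, mask: int = 0x3F, min_len: int = 16) -> list:
    """Split bytes where a rolling value matches `mask` (simplified CDC)."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        if (h & mask) == mask and i - start >= min_len:  # content boundary
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing remainder
    return chunks

old = b"".join(b"row %05d\n" % i for i in range(500))
new = old + b"".join(b"row %05d\n" % i for i in range(500, 600))  # append only

stored = {hashlib.sha256(c).digest() for c in cdc_chunks(old)}
to_upload = [c for c in cdc_chunks(new)
             if hashlib.sha256(c).digest() not in stored]
# Chunks fully inside the shared prefix dedupe; only the tail gets uploaded.
```

Fixed-size chunking would not get this property: inserting a single row shifts every later boundary, invalidating all subsequent chunks.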
Hugging Face (Twitter)
RT @casper_hansen_: This is not a SMALL update. This is huge! Give us this for every model please, Qwen team!
Hugging Face (Twitter)
RT @nic_o_martin: Beyond happy to announce that I'm joining 🤗 @huggingface as a #MachineLearningEngineer focused on #WebML!
Hugging Face (Twitter)
RT @ClementDelangue: Now number one trending dataset on @huggingface, out of almost half a million! huggingface.co/datasets https://twitter.com/NousResearch/status/1945181587600982450#m
Hugging Face (Twitter)
RT @MaziyarPanahi: Perfect Sunday: I just used Kimi-K2 by @Kimi_Moonshot to vibe code a @Gradio app! 🔥
You can use the "Anycoder" Space by @_akhaliq hosted on @huggingface for free. It was super quick! 🤗
PS: I am aware of using Gradio to vibe code another Gradio! Pun very much intended here!
Hugging Face (Twitter)
RT @AdinaYakup: From paper to project page in one click:
AnyCoder 🔥 turns research PDFs into structured, shareable project pages in seconds!
https://huggingface.co/spaces/akhaliq/anycoder
Powered by 8 SoTA open models on @huggingface
Hugging Face (Twitter)
RT @vitrupo: Jack Dorsey says AI must be permissionless because constraint kills innovation.
Five CEOs shouldn't dictate what brings humanity forward.
Open source is the answer.
To protect ourselves, we have to race ahead, eliminating single points of failure before they become civilization's choke points.