Hugging Face (Twitter)
RT @AdinaYakup: Latest update from @Kimi_Moonshot
Kimi K2 >>> Kimi K2-Instruct-0905 🔥
https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905
✨ 32B activated / 1T total parameters
✨ Enhanced agentic coding intelligence
✨ Better frontend coding experience
✨ 256K context window for long-horizon tasks
Hugging Face (Twitter)
RT @steren: What I really like about @Gradio is that you focus on your app's inputs, outputs, and logic, and then the framework derives a UI for these.
Because the UI is a derivative, Gradio was also able to generate an API for the same inputs, and now is able to generate an MCP server. No change required.
Gradio (@Gradio)
Gradio offers key benefits for MCP server developers:
> One-line CLI deployment to Hugging Face and Google Cloud
> Free UI for human-in-the-loop tests
> Authenticated access
> Latency and performance data
Detailed report 👉 https://huggin…
Hugging Face (Twitter)
RT @ClementDelangue: We're doing the work that nobody else wants to do! Welcome to FineVision, the best free open dataset to train vision language models. Let's go open-source! https://twitter.com/andimarafioti/status/1963610118165000479#m
Hugging Face (Twitter)
RT @crystalsssup: landing on 🤗
> 256k context
> 60–100 TPS
> perfect for claude code/codex/roo etc. https://twitter.com/Kimi_Moonshot/status/1963802687230947698#m
Hugging Face (Twitter)
RT @ADarmouni: Honestly, FineVision is a pretty impressive work of aggregation.
200 training sets condensed into a dataset of 18B images, segmented into 9 different subcategories, multi-turn, with quality ratings and well-documented ablation studies?
As always, @huggingface delivers in open data.
Hugging Face (Twitter)
RT @antoine_chaffin: Today is a big day
Today is Silksong day
But most importantly, today is the day I finally got HF socks!!!
Hugging Face (Twitter)
RT @maximelabonne: Liquid AI Japan cooked with this 350M param model on par with GPT-4o for English → Japanese translation.
That's a really nice example of fine-tuning done right.
Hugging Face (Twitter)
RT @mirkokiefer: My 2.5-year-old son controlling a robotic arm for the first time, and he genuinely picked it up faster than I did. He absolutely loves robots. The next generation will take over faster than we can blink.
That's the @LeRobotHF so101, by the way.
Hugging Face (Twitter)
RT @LeRobotHF: Big news: we just added Reachy 2 to LeRobot!
Huge thanks to our friends at @pollenrobotics 🤗
Reachy 2 is also available in simulation, so you can try it out right away.
Check out the teleop & autonomous demo below!
Hugging Face (Twitter)
RT @fffiloni: Quietly landed on the hub
you can try ROSE on @huggingface 🤗
-> https://huggingface.co/spaces/Kunbyte/ROSE https://twitter.com/Almorgand/status/1962846321372471755#m
Hugging Face (Twitter)
RT @Thom_Wolf: This is huge.
Continuing our foundational work to enable anyone to train state-of-the-art AI models, we're thrilled to release « FinePDFs »:
3T tokens of textual data that until now were locked away in PDFs, arguably some of the highest-quality publicly available data out there.
We gathered FinePDFs to create the largest permissively licensed corpus sourced exclusively from PDFs.
Amazingly challenging infra and processing work, h/t to the FineWeb team https://twitter.com/HKydlicek/status/1964584936524124645#m
Hugging Face (Twitter)
RT @HKydlicek: We are releasing FinePDFs:
the largest PDF dataset, spanning over half a billion documents!
- Long context: documents are 2x longer than web text
- 3T tokens from high-demand domains like legal and science
- Heavily improves over SoTA when mixed with FW-EDU & DCLM web corpora
Hugging Face (Twitter)
RT @gpj: Released a new synthetic dataset: 1.5k [human] → 10k [synthetic] children's stories.
Pipeline generated by @Kilo_Code and model switching from @poe_platform API 🤗
https://huggingface.co/datasets/garethpaul/children-stories-dataset
Hugging Face (Twitter)
RT @maximelabonne: Pheww, another banger dataset from @huggingface!
> 3T tokens, 475M PDFs, 1733 languages
> Close to Nemotron-CC v2 and FineWeb-Edu+DCLM on its own (‼️)
> Greatly boosts perf when combined, likely because it provides high diversity that complements the other datasets well
Hugging Face (Twitter)
RT @TrackioApp: Trackio represents @huggingface's effort to democratize experiment tracking for the community:
> absolutely free
> open-source
> local-first
> drop-in alternative to commercial solutions
Hugging Face (Twitter)
RT @OfirPress: 3 out of the top 6 most downloaded datasets on @huggingface are SWE-bench related.
Thanks!!! ♥️
Hugging Face (Twitter)
RT @TencentHunyuan: We did it! We now have two models in the top two spots on the @huggingface trending charts.
🥇 Hunyuan-MT-7B
🥈 HunyuanWorld-Voyager
Download and deploy the models for free on Hugging Face and GitHub. Your stars and feedback are welcome! ❤️
This is just the beginning. Stay tuned for our next open-source release next week!