Hugging Face (Twitter)
RT @lhoestq: Let me explain why Hugging Face Datasets storage is faster than S3 + why today's release changes everything 🧵
Hugging Face (Twitter)
RT @kadirnardev: Tomorrow we're releasing a 350M-parameter TTS model, trained on a 140,000-hour voice dataset, as open source on the Vyvo account. Turn on notifications!
Hugging Face (Twitter)
RT @ClementDelangue: Fun to think about open-source models and their variants as families from an evolutionary biology standpoint and analyze "genetic similarity and mutation of traits over model families".
These are the 2,500th, 250th, 50th and 25th largest families on @huggingface: https://twitter.com/didaoh/status/1955381767420121283#m
Hugging Face (Twitter)
RT @NVIDIAAIDev: We just released a 3-million-sample, high-quality vision-language model training dataset for use cases such as:
• optical character recognition (OCR)
• visual question answering (VQA)
• captioning
Learn more: nvda.ws/4oyfevu
Download: nvda.ws/4fz2gtB
Hugging Face (Twitter)
RT @jxmnop: OpenAI hasn't open-sourced a base model since GPT-2 in 2019. they recently released GPT-OSS, which is reasoning-only...
or is it?
turns out that underneath the surface, there is still a strong base model. so we extracted it.
introducing gpt-oss-20b-base 🧵
Hugging Face (Twitter)
RT @BrigitteTousi: HAPPENING TODAY: Join @ClementDelangue for an AMA on the Hugging Face Discord!
⏰ 8am PST / 11am EST / 16h CET
https://discord.com/invite/6r5TEXyk?event=1404451892179763311 https://twitter.com/BrigitteTousi/status/1955300164815462460#m
Hugging Face (Twitter)
RT @reach_vb: OpenAI gpt-oss 120B orchestrates a full video using Hugging Face Spaces! 🤯
All of it, in one SINGLE prompt:
create an image of a Labrador and use it to generate a simple video of it
🛠️ Tools used:
1. Flux.1 Krea Dev by @bfl_ml
2. LTX Fast by @Lightricks
That's it, gpt-oss 120B is one of the BEST open-source models I've used for tool calling so far! Kudos @OpenAI 🤗
Hugging Face (Twitter)
RT @mervenoyann: new TRL comes packed for vision language models 🔥
we shipped support for
> native supervised fine-tuning for VLMs
> multimodal GRPO
> MPO 🫡
read all about it in our blog 🤗 next one!
Hugging Face (Twitter)
RT @Xianbao_QIAN: A very interesting food-dish dataset, if you're building a health app/model: 100k carefully curated food samples spanning home-cooked meals, restaurant dishes, raw ingredients, and packaged products.
How it was built is just as valuable:
• 50k real Binance users captured their own plates, which were then pre-annotated by professional human annotators.
• Machine-generated labels were then spot-checked and refined by Binance users to guarantee quality.
• A slice of the dataset is available on Hugging Face under an OpenRAIL license.
Sounds like a new approach to crowdsourced data collection.
Link below: