Hugging Face (Twitter)
RT @vectro: Quantized gpt-oss-20b on @huggingface
Made to run locally on your computer
Hugging Face (Twitter)
RT @vanstriendaniel: You can now generate synthetic data using @OpenAI's GPT OSS models on @huggingface Jobs!
One command, no setup:
hf jobs uv run --flavor l4x4 [script-url] \
--input-dataset your/dataset \
--output-dataset your/output
Works on L4 GPUs ⚡
https://huggingface.co/datasets/uv-scripts/openai-oss
huggingface.co
uv-scripts/openai-oss · Datasets at Hugging Face
Hugging Face (Twitter)
RT @ArtificialAnlys: Link to question dataset on @huggingface:
https://huggingface.co/datasets/ArtificialAnalysis/AA-LCR
huggingface.co
ArtificialAnalysis/AA-LCR · Datasets at Hugging Face
Hugging Face (Twitter)
RT @HaihaoShen: 🫡Probably the first INT4 GPT-OSS model: , it should be friendly to most existing hardware. The 120B INT4 model will follow soon.
#intel #huggingface #openai #gpt #autoround
huggingface.co
Intel/gpt-oss-20b-int4-rtn-AutoRound · Hugging Face
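The "rtn" in the model name refers to round-to-nearest quantization. As a rough illustration of the idea (a minimal sketch, not Intel's AutoRound implementation), per-group symmetric INT4 RTN can be written in a few lines of NumPy:

```python
import numpy as np

def rtn_quantize_int4(w, group_size=32):
    """Round-to-nearest INT4 quantization with per-group symmetric scales.

    w: 1-D float array whose length is a multiple of group_size.
    Returns (q, scales): int codes in [-8, 7] plus one float scale per group.
    """
    groups = w.reshape(-1, group_size)
    # Symmetric scale: map the largest magnitude in each group to code 7.
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid division by zero
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Recover approximate float weights from INT4 codes and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s = rtn_quantize_int4(w)
# Worst-case round-trip error is bounded by half a quantization step per group.
err = np.abs(dequantize(q, s) - w).max()
```

Real INT4 formats additionally pack two 4-bit codes per byte and store the per-group scales alongside the weights; this sketch only shows the rounding and scaling arithmetic that "RTN" names.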
Hugging Face (Twitter)
RT @MaziyarPanahi: 🚀 In under 10 hours, both gpt-oss models from @OpenAI are now trending at #1 and #2 on @huggingface!
Openness wins again.
@sama, can you feel the love tonight? 🎶
Hugging Face (Twitter)
RT @wjb_mattingly: Woot! First finetune of Dots.OCR on @huggingface ! Haven't done this since Qwen 2 VL. I'll be sharing the finetuning script tomorrow.
Hugging Face (Twitter)
RT @cline: Live in Cline via @huggingface: gpt-oss-120b & gpt-oss-20b
> gpt-oss-120b is built for production-grade applications with high reasoning capabilities
> gpt-oss-20b is for lower latency needs and specialized local use cases
Also accessible via the Cline & OpenRouter providers
Hugging Face (Twitter)
RT @reach_vb: Want to run the latest @OpenAI gpt-oss models with Continuous Batching, Tensor Parallelism, Flash Attention 3 and more?
Check out our detailed Inference and Fine-tuning recipes 🤗
Hugging Face (Twitter)
RT @ClementDelangue: Lots of conflicting takes about gpt-oss (yay open-source in the spotlight)!
We’re powering the official @openai demo gpt-oss.com with HF inference providers thanks to @FireworksAI_HQ, @CerebrasSystems, @GroqInc and @togethercompute so we have a front-row seat of what’s happening.
Something to remember: inference for new frontier open models isn’t easy, especially with a new format like harmony and the volume of interest that gpt-oss is getting out of the gate.
Early spikes can temporarily affect quality, accuracy, and overall "vibes," particularly just 24 hours post-release when providers are racing against the clock with barely any sleep!
Some advice to avoid forming the wrong opinions:
- If you care about getting vibes as fast as possible, use a hosted setup that gives you a diversity of providers like HF inference providers (https://huggingface.co/docs/inference-providers/guides/gpt-oss) and follow official standard...
Hugging Face (Twitter)
RT @dylan_ebert_: Hugging Face Explained in 45 seconds https://twitter.com/hamiltonsucks76/status/1952813334102983029#m
Hugging Face (Twitter)
RT @elonmusk: @BasedBeffJezos: It’s high time we open sourced Grok 2. Will make it happen next week.
We’ve just been fighting fires and burning the 4am oil nonstop for a while now.
Hugging Face (Twitter)
RT @romainhuet: Great perspective and tips from @ClementDelangue on the early days of gpt-oss.
Building strong foundations takes time, and we’re excited to be working with inference providers and @huggingface to help these models perform at their best. Please keep your feedback coming! https://twitter.com/ClementDelangue/status/1953119901649891367#m
Hugging Face (Twitter)
RT @romainhuet: Both gpt-oss models are trending #1 and #2 among 2M models on @huggingface! 🤗 Thanks to the open-source AI community for your support since launch.
We’re following discussions and will pop in when we can—feel free to ask questions, share ideas, and show what you’re building!
Hugging Face (Twitter)
RT @roo_code: ICYMI: Roo Code now integrates with Hugging Face 🤗
Plug in your API key, explore 90+ models, and run them directly from your editor—no wrappers, no token copy-paste.
Try it now!
Hugging Face (Twitter)
RT @reach_vb: BOOOOM! You can now run @OpenAI gpt-oss 20B natively in @GoogleColab T4 for FREE! 🔥
Powered by Transformers ⚡
The setup takes a bit since everything is bleeding edge, but once done it should work as expected
Link to our cookbook in comments 👇
Hugging Face (Twitter)
RT @calebfahlgren: The @huggingface trending is filled with absolute bangers 😮💨
Hugging Face (Twitter)
RT @mervenoyann: new TRL comes with GRPO & MPO support for vision language models 💥
we also dropped an explainer on them & how to train with one-liner CLI commands 🫡
Hugging Face (Twitter)
RT @abidlabs: Don't lock in your experiment tracking data into proprietary vendors!
With Trackio, all of your metrics are stored in a (public or private, you choose) Hugging Face Dataset, so that you can export them at any time!
https://github.com/gradio-app/trackio