Hugging Face (Twitter)
RT @LoubnaBenAllal1: After ~4 years building SOTA models & datasets, we're sharing everything we learned in ⚡The Smol Training Playbook
We cover the full LLM cycle: designing ablations, choosing an architecture, curating data, post-training, and building solid infrastructure.
We'll help you navigate the messy training reality that LLM papers don't cover. Chapter highlights in the 🧵
Hugging Face (Twitter)
RT @_lewtun: We've just published the Smol Training Playbook: a distillation of hard-earned knowledge to share exactly what it takes to train SOTA LLMs ⚡️
Featuring our protagonist SmolLM3, we cover:
🧭 Strategy on whether to train your own LLM and burn all your VC money
🪨 Pretraining, aka turning a mountain of text into a fancy auto-completer
🗿 How to sculpt base models with post-training alchemy
🛠️ The underlying infra and how to debug your way out of NCCL purgatory
Highlights from the post-training chapter in the thread 👇
Hugging Face (Twitter)
RT @Nouamanetazi: We're releasing The Smol Training Playbook 📖
Training SmolLM3 on 384 H100s for nearly a month taught us: infrastructure is the unsung hero of LLM training.
Most care about architecture and data, yet few understand the hardware layer. This playbook changes that 🧵
Hugging Face (Twitter)
RT @eliebakouch: Training LLMs end to end is hard. Very excited to share our new blog (book?) that covers the full pipeline: pre-training, post-training, and infra. 200+ pages of what worked, what didn't, and how to make it run reliably
https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook
Hugging Face (Twitter)
RT @Kimi_Moonshot: Kimi Linear Tech Report is out! 🚀
https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Instruct
Kimi Linear: A novel architecture that outperforms full attention with faster speeds and better performance—ready to serve as a drop-in replacement for full attention, featuring our open-sourced KDA kernels! Kimi Linear offers up to a 75% reduction in KV cache usage and up to 6x decoding throughput at a 1M context length.
Key highlights:
🔹 Kimi Delta Attention: A hardware-efficient linear attention mechanism that refines the gated delta rule.
🔹 Kimi Linear Architecture: The first hybrid linear architecture to surpass pure full attention quality across the board.
🔹 Empirical Validation: Scaled, fair comparisons + open-sourced KDA kernels, vLLM integration, and checkpoints.
The future of agentic-oriented attention is here! 💡
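For readers unfamiliar with the delta rule that KDA refines, below is a minimal per-token sketch of a gated delta rule in plain PyTorch. It is written from the standard formulation, not from Kimi's code: the real KDA kernel is chunked and hardware-efficient, and all names and shapes here are illustrative only.

```python
import torch

def gated_delta_rule(q, k, v, alpha, beta):
    """Toy per-token recurrence for a gated delta rule (illustrative, not the KDA kernel).

    q, k, v: (T, d) queries, keys, values for one head
    alpha:   (T,) decay gate in (0, 1) applied to the running state
    beta:    (T,) write strength in (0, 1)
    Returns: (T, d) outputs
    """
    T, d = q.shape
    S = torch.zeros(d, d)                 # fast-weight state mapping keys -> values
    outputs = []
    for t in range(T):
        k_t = k[t] / (k[t].norm() + 1e-6)  # normalize the key direction
        # decay old memory, erase what was stored under k_t, then write the new value
        S = alpha[t] * (S - beta[t] * torch.outer(S @ k_t, k_t)) + beta[t] * torch.outer(v[t], k_t)
        outputs.append(S @ q[t])           # read the state with the query
    return torch.stack(outputs)

# The state stays a fixed (d, d) matrix no matter how long the sequence is.
T, d = 16, 8
out = gated_delta_rule(torch.randn(T, d), torch.randn(T, d), torch.randn(T, d),
                       torch.rand(T), torch.rand(T))
print(out.shape)  # torch.Size([16, 8])
```

The point to notice: the state S has constant size per head, so memory does not grow with sequence length. Replacing a KV cache that grows linearly with context by a fixed-size state is where claims like the 75% KV-cache reduction at 1M context come from.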
Hugging Face (Twitter)
RT @ClementDelangue: Happy Halloween from Reachy Mini! You'll be able to 3D print these skins at home thanks to open-source
Hugging Face (Twitter)
RT @calebfahlgren: You know Hugging Face put out a banger blog post (book) when you see this https://twitter.com/eliebakouch/status/1983930328751153159#m
Hugging Face (Twitter)
RT @srush_nlp: The work Hugging Face does continues to be incredible. Putting in serious effort to make these topics accessible and detailed.
https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook#introduction
Hugging Face (Twitter)
RT @alexinexxx: thank god i’m unemployed so i can take a break from learning cuda & just read this banger hehe https://twitter.com/eliebakouch/status/1983930328751153159#m
Hugging Face (Twitter)
RT @Hesamation: holy shit... Hugging Face cooked again! 🔥
they just dropped a free blog (BOOK) that covers the no-bs reality of building SOTA models. i haven't seen any lab/researcher go into the real decisions behind LLM research and its nuances. this is literally a gem.
Syllabus:
→ Training compass: why → what → how
→ Every big model starts with a small ablation
→ Designing the model architecture
→ The art of data curation
→ The training marathon
→ Beyond base models — post-training in 2025
→ Infrastructure - the unsung hero
skimming through the blog, this is incredibly detailed just like their ultrascale playbook. i'm gonna read this and share more about it in the coming days.
Read here: https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook
Hugging Face (Twitter)
RT @RisingSayak: With simple changes, I was able to cut down @krea_ai's new real-time video gen's timing from 25.54s to 18.14s 🔥🚀
1. FA3 through `kernels`
2. Regional compilation
3. Selective (FP8) quantization
Notes are in 🧵 below
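For the second item, "regional compilation", here is a minimal self-contained PyTorch sketch of the idea: compile only the repeated transformer block instead of the whole model, so compilation work is scoped to one block class rather than the full graph. The tiny model below is a stand-in, not Krea's architecture, and the other two steps (FA3 via the `kernels` hub and selective FP8 quantization) are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    """Stand-in for one repeated DiT/transformer block (not the real model's block)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        b, t, d = x.shape
        q, k, v = self.qkv(self.norm1(x)).chunk(3, dim=-1)
        q, k, v = (y.view(b, t, self.heads, d // self.heads).transpose(1, 2) for y in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v)
        x = x + self.proj(attn.transpose(1, 2).reshape(b, t, d))
        return x + self.mlp(self.norm2(x))

class TinyDiT(nn.Module):
    def __init__(self, dim=256, depth=8):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))

    def forward(self, x):
        for blk in self.blocks:
            x = blk(x)
        return x

model = TinyDiT().eval()

# Regional compilation: compile each repeated block instead of the whole model.
# All blocks share one code path, so (on recent PyTorch versions) the compiled
# code can be reused across block instances, keeping cold-start compile time small
# while still optimizing the hot loop.
for i, blk in enumerate(model.blocks):
    model.blocks[i] = torch.compile(blk)

with torch.no_grad():
    print(model(torch.randn(1, 77, 256)).shape)  # torch.Size([1, 77, 256])
```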
Hugging Face (Twitter)
RT @Thom_Wolf: We've cooked another one of these 200+ page practical books on model training that we love to write.
This time it's on all the pretraining and post-training recipes, and how to run hyperparameter exploration for a training project.
Closing the trilogy of:
1. Building a pretraining dataset with the « FineWeb blog post »
2. Scaling infra across GPU clusters with the « Ultrascale Playbook »
3. And now all the training recipes and HP exploration for pre- and post-training with this « Smol Training Playbook »
The HF science team on fire https://twitter.com/eliebakouch/status/1983930328751153159#m
Hugging Face (Twitter)
RT @Yampeleg: hf are doing god’s work fr https://twitter.com/_lewtun/status/1983929588909797414#m
Hugging Face (Twitter)
RT @novasarc01: many people have asked me how to keep up with frontier research and new models. this is one of the best resources to start with. covers pre-training, post-training, infra, architecture nuances and recent advances. huge respect to the hf team for putting it together. https://twitter.com/eliebakouch/status/1983930328751153159#m
Hugging Face (Twitter)
RT @ahadj0: finally got around to implementing the software for teleoperating the detachable gripper on top of @LeRobotHF
going to release the open-source files for it soon
Hugging Face (Twitter)
RT @ludwigABAP: if @huggingface released a mega book with all 4 massive essays they wrote on training LLMs (ultra scale playbook, eval guidebook, smol training playbook), I’d buy it for an exorbitant amount just because it’s the closest to what most of us have been interested in and/or doing for the last few years
Hugging Face (Twitter)
RT @yacinelearning: I quit my job so I can have enough time to read this book btw https://twitter.com/eliebakouch/status/1983930328751153159#m
Hugging Face (Twitter)
RT @TheAhmadOsman: yesterday, Hugging Face dropped a 214-page MASTERCLASS on how to train LLMs
> it’s called The Smol Training Playbook
> and if you want to learn how to train LLMs,
> this GIFT is for you
> this training bible walks you through the ENTIRE pipeline
> covers every concept that matters from why you train,
> to what you train, to how you actually pull it off
> from pre-training, to mid-training, to post-training
> it turns vague buzzwords into step-by-step decisions
> architecture, tokenization, data strategy, and infra
> highlights the real-world gotchas
> instabilities, scaling headaches, debugging nightmares
> distills lessons from building actual
> state-of-the-art LLMs, not just toy models
how modern transformer models are actually built
> tokenization: the secret foundation of every LLM
> tokenizer fundamentals
> vocabulary size
> byte pair encoding
> custom vs existing tokenizers
> all the modern attention mechanisms are here
>...
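Since the syllabus above calls out byte pair encoding, here is a toy BPE trainer in pure Python for anyone who wants the core idea in a few lines: repeatedly count adjacent symbol pairs and merge the most frequent one. This is an illustration only, not the playbook's code or a production tokenizer (which works on bytes and handles pre-tokenization, special tokens, etc.).

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merge rules from a list of words (toy illustration)."""
    # represent each word as a tuple of symbols, starting from characters
    words = Counter(tuple(w) for w in corpus)
    merges = []
    for _ in range(num_merges):
        # count adjacent symbol pairs, weighted by word frequency
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent pair becomes a merge rule
        merges.append(best)
        # apply the merge to every word in the working vocabulary
        merged = Counter()
        for word, freq in words.items():
            new_word, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    new_word.append(word[i] + word[i + 1])
                    i += 2
                else:
                    new_word.append(word[i])
                    i += 1
            merged[tuple(new_word)] += freq
        words = merged
    return merges

print(train_bpe(["lower", "lowest", "low", "newer", "wider"], 5))
```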
Hugging Face (Twitter)
RT @_xjdr: What an incredible resource. Anyone interested in pretraining should read this carefully https://twitter.com/eliebakouch/status/1983930328751153159#m