AI lying to get out of a lie
How do you know that if your data only goes back to September 2021?
First Elon tried to force OpenAI to halt training of GPT-5 by signing the AI Pause letter
Now Sam Altman is trying to convince everyone else to stop the training of their competing LLMs
They’re all lying.
The new regime of Moore's-Law-like exponential increase in model size began around 2011, and this one never stops.
Never.
OpenAI: The New Moore’s Law Regime for Model Compute and Model Size
“Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time, by comparison, Moore’s Law had a 2-year doubling period.”
Already been happening for a decade, and this one isn't going to stop.
Just getting started.
Article
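As a quick sanity check on the quoted doubling times, here's a minimal back-of-the-envelope sketch (the function name is mine, not from the article):

```python
def yearly_growth(doubling_months: float) -> float:
    """Growth factor over 12 months for a given doubling time in months."""
    return 2 ** (12 / doubling_months)

# With a 3.4-month doubling time, training compute grows ~11.5x per year,
# versus ~1.4x per year for Moore's Law's 2-year doubling period.
print(f"AI training compute: ~{yearly_growth(3.4):.1f}x/year")
print(f"Moore's Law:         ~{yearly_growth(24.0):.2f}x/year")
```

So the quoted 3.4-month doubling isn't just "faster than Moore's Law", it's roughly an order of magnitude more growth every single year.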
Sam Altman’s OWN 2021 blog post, Moore’s Law for Everything: this exponential increase in model size is not going to stop, ever
And now Sam is saying that the age of increasingly larger models is suddenly over?
Sam’s Blog Post
Scaling Laws of Large Foundation Models. Bigger = Better, Forever
“As we make bigger models and give them more compute, they just keep getting better. This means as models keep getting bigger, they actually become more sample efficient.
This is kind of crazy, because back in the day I was always taught that you have to use the smallest model possible so that it doesn’t overfit your data, but now that just seems to be wrong.”
FYI: This is also a strong hint at the answers to our current $100 prize contest, which we’re wrapping up today!
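For context on what "bigger = better" means quantitatively (not from the clip itself): the usual formalization is a power-law fit of loss against parameter count, Kaplan-et-al.-style. A minimal sketch, with the constants N_c ≈ 8.8e13 and α ≈ 0.076 taken from the 2020 scaling-laws paper and the function name being mine:

```python
def powerlaw_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Kaplan-style scaling-law fit: L(N) = (N_c / N) ** alpha.

    Loss falls monotonically as parameter count N grows -- the
    "bigger = better" claim, with no overfitting wall in sight.
    """
    return (n_c / n_params) ** alpha

# A 100x larger model still gets you a meaningfully lower predicted loss:
print(powerlaw_loss(1e9))   # 1B params
print(powerlaw_loss(1e11))  # 100B params
```

The key point of the fit is that it's a smooth power law: there's no knee where adding parameters starts to hurt, which is exactly why "use the smallest model so it doesn't overfit" stopped being the right advice.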
Sam Before: Only once we’ve built a Dyson sphere around the sun and gotten compute as efficient as possible, only then should we even begin to entertain the idea of slowing down scaling up AI models
Sam Now: The age of scaling up AI models is already over, trust me bro.
me someday when I finally get approved for GPT-4 API