UK: Met Police Commissioner Mark Rowley threatens to EXTRADITE and imprison American citizens over online posts.
Future of free speech is the future of free AI.
UK: Man sentenced to 3 years and 2 months in jail just for repeating Lucy Connolly's tweet
Future of free speech is future of free AI.
When you post a meme in the UK
Why is AI looking relatively bleak at the moment?
Because it's becoming increasingly apparent that no one actually put the resources into even beginning to train a real GPT-4 successor over the entire past 2 years.
GPT-4 completed training in August 2022, 2 years ago, BEFORE ChatGPT was even launched.
And at this point, having a higher-IQ model is ALL that matters for AI to have the competence and self-sufficiency needed to do more real work, on the level of what humans get paid to do today.
What has Altman spent billions on instead, if not creating a GPT-5?
(1) Garbage high-efficiency models that no one really wants, except the cost-cutters at OpenAI. Who cares if it's cheaper, if it's STILL not quite smart enough to do the jobs we need done? Make it work at all before you make it cheap.
(2) Rumor has it that Altman has been trying to evolve OpenAI into a hardware company, much like Nvidia, instead of using its giant pile of cash to plow ahead with creating GPT-5.
2 years, in tech, is a massive amount of time to waste.
This is not hugely surprising though, since top AI pros have been screaming that "huge training is all you need" and to just make bigger investments, with the industry refusing to do this since at least 2011.
Unfortunately, over the past 2 years the industry relapsed right back into these old behaviors: being cheap and redirecting as much money as possible into salaries instead of training costs.
Prediction: When the industry finally DOES get off its a$$ and build a real GPT-4 successor, a real 10x smarter GPT-5,
→ New AI boom.
2 years after completion of GPT-4 training, OpenAI has finally begun training of GPT-5
Why the huge gap?
What's for sure is that one major effect of this huge delay has been to kill off many AI startups, who'd bet on AI capabilities quickly advancing, and essentially bet on OpenAI continuing with GPT-5 training immediately after GPT-4.
Those upcoming AI startups are exactly where any future OpenAI killer could have been expected to be lurking.
The huge delay has undoubtedly led to the death of countless AI startups.
I.e. did OpenAI delay the start of GPT-5 training by almost 2 years as a strategic move, to kill off most of its competition?
The purpose of a system is what it does.
OpenAI Announcement
OpenAI CTO Mira Murati just said that the company's models are "not that far ahead" of what the public currently has for free
OpenAI does not have some GPT-5 level model waiting in the wings.
Why did OpenAI totally blow the huge lead they had on the industry, giving everyone else time to catch up to GPT-4?
OpenAI CTO dampens expectations of radically improved AI models in the near future
But at the same time, no other AI giant seems to be picking up the torch to train a next-generation foundation model, which will likely cost $250M or more.
Who will be the first to launch a 10x better GPT-5 level model?
Article
Update: AI training compute used for top LLMs grows by ~5x per year
That's a doubling every ~5 months, far faster than Moore's Law's ~18-month doubling time.
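The doubling time follows directly from the 5x-per-year growth rate; a quick sketch to check the arithmetic:

```python
import math

# Compute grows ~5x per year; the implied doubling time in months is
# 12 * log(2) / log(5), since 5^(t/12) = 2 when t = 12*ln2/ln5.
growth_per_year = 5.0
doubling_time_months = 12 * math.log(2) / math.log(growth_per_year)
print(f"doubling every ~{doubling_time_months:.1f} months")  # ~5.2 months
```

So "~5 months" checks out, roughly 3.5x faster than Moore's Law's 18-month pace.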
Notice that this claims that Gemini Ultra used more compute than GPT-4,
but Gemini Ultra is reportedly far worse than GPT-4 at coding. Something doesn't add up.
Conclusion:
Massive increases in raw LLM ability are coming, far beyond GPT-4 ability, but not quite yet.
Website