Why is AI looking relatively bleak at the moment?
Because it’s becoming increasingly apparent that no one actually put the resources into even beginning to train a real GPT-4 successor, over the entire past 2 years.
GPT-4 completed training in August 2022, 2 years ago, BEFORE ChatGPT was even launched.
And, at this point, having a higher-IQ model is ALL that matters for AI to gain the competence and self-sufficiency needed to do more real work, on the level of what humans get paid to do today.
What has Altman spent billions on instead, if not creating a GPT-5?
(1) Garbage high-efficiency models that no one really wants, except the cost-cutters at OpenAI — Who cares if it’s cheaper, if it’s STILL not quite smart enough to do the jobs we need done. Make it work at all before you make it cheap.
(2) Rumor has it that Altman has been trying to evolve OpenAI into a hardware company, much like Nvidia, instead of using their giant pile of cash to plow ahead with creating GPT-5.
2 years, in tech, is a massive amount of time to waste.
This is not hugely surprising, though: since at least 2011, top AI researchers have been screaming that “huge training is all you need” and to just make bigger investments, with the industry simply refusing to do so.
Unfortunately, over the past 2 years the industry relapsed right back into these old behaviors: trying to be cheap and redirecting as much money as possible into salaries instead of training costs.
Prediction: When the industry finally DOES get off its a$$ and builds a real GPT-4 successor, a real 10x smarter GPT-5,
— New AI boom.
2 years after completion of GPT-4 training, OpenAI has finally begun training of GPT-5
Why the huge gap?
What’s for sure is that one major effect of this huge delay has been to kill off many AI startups, who’d bet on AI capabilities advancing quickly, and essentially bet on OpenAI continuing GPT-5 training immediately after GPT-4.
Those upcoming AI startups are exactly where any future OpenAI killer could have been expected to be lurking.
The huge delay has undoubtedly led to the death of countless AI startups.
I.e.
Did OpenAI delay the start of GPT-5 training by almost 2 years, as a strategic move, to kill off most of their competition?
The purpose of a system is what it does.
OpenAI Announcement
OpenAI CTO Mira Murati just said that the company's models are ‘not that far ahead’ of what the public currently has for free
OpenAI does not have some GPT-5 level model waiting in the wings.
Why did OpenAI totally blow the huge lead they had on the industry, giving everyone else time to catch up to GPT-4?
OpenAI CTO dampens expectations of radically improved AI models in the near future
But at the same time, no other AI giant seems to be picking up the torch and training a next-generation foundation model, which will likely cost $250M or more.
Who will be the first to launch a 10x better GPT-5 level model?
Article
Update: AI training compute used for top LLMs grows by ~5x per year
That’s a doubling every ~5 months, far faster than the ~18-month doubling of Moore’s Law.
Notice that this claims that Gemini Ultra used more compute than GPT-4, but Gemini Ultra is reportedly far worse than GPT-4 at coding. Something doesn’t add up.
Conclusion:
Massive increases in raw LLM ability are coming, far beyond GPT-4 ability, but not quite yet.
Website
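The “~5x per year” and “doubling every ~5 months” figures are mutually consistent; a quick sanity check on the arithmetic (no external data, just the numbers in the post):

```python
import math

# If compute grows 5x per year, the doubling time in months solves
# 5 ** (m / 12) == 2, i.e. m = 12 / log2(5).
doubling_months = 12 / math.log2(5)
print(round(doubling_months, 1))  # ~5.2 months

# Moore's Law's ~18-month doubling, expressed as a yearly growth factor:
moore_yearly = 2 ** (12 / 18)
print(round(moore_yearly, 2))  # ~1.59x per year, vs. 5x for LLM training compute
```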
ChatGPT can accurately estimate height from photos
“The girls are using ChatGPT to see if men are lying about their height on dating apps”
“Upload 4 pictures, it uses proportions and surroundings to estimate height.”
“I tested it on 10 friends & family members - all estimates were within 1 inch of their real height”
Article
Token Cost of GPT-4 level models over time
Cost of 1M tokens has dropped from $180 to $0.75 in ~18 months = 240x cheaper.
— FWIW, none of the cheap models is quite up to the quality of the real GPT-4 on coding, the only job where AI really matters right now, and who cares if they’re cheaper when they’re not yet quite good enough to actually do those jobs?
The industry wasted the last 2 years making models cheaper rather than pushing the state of the art forward.
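The 240x figure checks out, and it implies prices were halving roughly every 2.3 months (plain arithmetic on the post’s own numbers):

```python
import math

old_cost = 180.00  # $ per 1M tokens at GPT-4 launch, per the chart
new_cost = 0.75    # $ per 1M tokens ~18 months later
months = 18

factor = old_cost / new_cost            # how many times cheaper
halving_months = months / math.log2(factor)
print(factor)                           # 240.0
print(round(halving_months, 2))         # ~2.28 months per halving
```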
Tweet