When you post a meme in the UK
Why is AI looking relatively bleak at the moment?
Because it's becoming increasingly apparent that no one actually put the resources into even beginning to train a real GPT-4 successor over the entire past 2 years.
GPT-4 completed training in August 2022, 2 years ago, BEFORE ChatGPT was even launched.
And at this point, having a higher-IQ model is ALL that matters for AI to have the competence and self-sufficiency needed to do more real work, on the level of what humans get paid to do today.
What has Altman spent billions on instead, if not creating a GPT-5?
(1) Garbage high-efficiency models that no one really wants, except the cost-cutters at OpenAI. Who cares if it's cheaper, if it's STILL not quite smart enough to do the jobs we need done? Make it work at all before you make it cheap.
(2) Rumor has it that Altman has been trying to evolve OpenAI into a hardware company, much like Nvidia, instead of using their giant pile of cash to plow ahead with creating GPT-5.
2 years, in tech, is a massive amount of time to waste.
This is not hugely surprising, though, since top AI pros have been screaming that "huge training is all you need" and to just make bigger investments, with the industry refusing to do this since at least 2011.
Unfortunately, over the past 2 years the industry relapsed right back into these old behaviors of trying to be cheap and redirecting as much money as possible into salaries instead of training costs.
Prediction: When the industry finally DOES get off its a$$ and builds a real GPT-4 successor, a real 10x smarter GPT-5,
→ New AI boom.
2 years after the completion of GPT-4 training, OpenAI has finally begun training GPT-5
Why the huge gap?
What's for sure is that one major effect of this huge delay has been to kill off many AI startups who'd bet on AI capabilities quickly advancing, and essentially bet on OpenAI continuing straight on to GPT-5 training immediately after GPT-4.
Those upcoming AI startups are exactly where any future OpenAI killer could have been expected to be lurking.
The huge delay has undoubtedly led to the death of countless AI startups.
I.e.
Did OpenAI delay the start of GPT-5 training by almost 2 years, as a strategic move, to kill off most of their competition?
The purpose of a system is what it does.
OpenAI Announcement
OpenAI CTO Mira Murati just said that the company's models are "not that far ahead" of what the public currently has for free
OpenAI does not have some GPT-5 level model waiting in the wings.
Why did OpenAI totally blow the huge lead they had on the industry, giving everyone else time to catch up to GPT-4?
OpenAI CTO dampens expectations of radically improved AI models in the near future
But at the same time, no other AI giant seems to be picking up the torch to train a next-generation foundation model, which will likely cost $250M or more.
Who will be the first to launch a 10x better GPT-5 level model?
Article
Update: AI training compute used for top LLMs grows by ~5x per year
That's a doubling every ~5 months, far faster than the ~18-month doubling of Moore's Law.
Notice that this claims that Gemini Ultra used more compute than GPT-4,
but Gemini Ultra is reportedly far worse than GPT-4 at coding. Something doesn't add up.
Conclusion:
Massive increases in raw LLM ability are coming, far beyond GPT-4 ability, but not quite yet.
Website
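The ~5-month doubling figure follows directly from the claimed ~5x-per-year growth rate. A quick sketch of the arithmetic (the 5x/year rate is the claim above, not an independent measurement):

```python
import math

# Claimed growth rate: training compute for top LLMs grows ~5x per year.
growth_per_year = 5.0

# Doubling time in months: solve 2 = growth_per_year ** (t / 12) for t.
doubling_months = 12 / math.log2(growth_per_year)
print(f"doubling every ~{doubling_months:.1f} months")  # ~5.2 months

# Compare with Moore's Law (doubling every ~18 months):
speedup_vs_moore = 18 / doubling_months
print(f"~{speedup_vs_moore:.1f}x faster than Moore's Law")  # ~3.5x
```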
ChatGPT can accurately estimate height from photos
"The girls are using ChatGPT to see if men are lying about their height on dating apps"
"Upload 4 pictures, it uses proportions and surroundings to estimate height."
"I tested it on 10 friends & family members - all estimates were within 1 inch of their real height"
Article
Token Cost of GPT-4 level models over time
Cost of 1M tokens has dropped from $180 to $0.75 in ~18 months = 240x cheaper.
FWIW, none of the cheap ones quite match the quality of the real GPT-4 on coding, the only real job where AI matters right now. And who cares if they're cheaper when they're not yet quite good enough to really do the jobs they could?
The industry wasted the last 2 years making models cheaper rather than pushing forward the state of the art.
Tweet
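The 240x figure checks out: $180 / $0.75 = 240. A small sketch, which also derives the implied cost halving time (assuming a smooth exponential decline over the ~18 months, which is my assumption, not stated in the post):

```python
import math

# Figures from the post: $180 per 1M tokens dropping to $0.75 in ~18 months.
start_cost, end_cost, months = 180.0, 0.75, 18.0

factor = start_cost / end_cost
print(f"{factor:.0f}x cheaper")  # 240x

# Assuming a smooth exponential decline, cost halves every:
halving_months = months / math.log2(factor)
print(f"cost halves every ~{halving_months:.1f} months")  # ~2.3 months
```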