Nvidia Makes Nearly 1,000% Profit on H100 GPUs(!!!)
"Nvidia is raking in nearly 1,000% profit (about 823%) on each H100 GPU accelerator it sells, according to estimates in a recent social media post from Barron's senior writer Tae Kim. In dollar terms, Nvidia's street price of around $25,000 to $30,000 for each of these High Performance Computing (HPC) accelerators (for the least-expensive PCIe version) vastly exceeds the estimated $3,320 cost of the chip and peripheral (on-board) components. As surfers will tell you, there's nothing quite like riding a wave with no other boards in sight."
Article
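Quick back-of-envelope check on those numbers (the exact street price behind the ~823% figure isn't stated in the post, so it's inferred here):

```python
# Back-of-envelope markup check using the figures quoted above:
# Tae Kim's ~$3,320 estimated per-unit cost and the reported
# $25,000-$30,000 street-price range for the PCIe H100.
COST = 3_320

for price in (25_000, 30_000):
    markup_pct = (price - COST) / COST * 100
    print(f"${price:,} street price -> {markup_pct:.0f}% profit over cost")

# Prints roughly 653% and 804%; the ~823% headline figure would imply
# a street price near $30,650, i.e. the top of the reported range.
```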
Good news: OpenAI now allows fine-tuning on another model, beyond just the ancient GPT-3 base models.
Bad news: It's just the intentionally-crippled-for-cheapness GPT-3.5-turbo.
Not the original, impressive GPT-3.5-legacy.
Not the now greatly-preferred GPT-4.
Just the crippled, joke-cheap "3.5-turbo" that almost never works for any real work or homework, and that they made just so they could cut costs on their free services.
OpenAI allowing fine-tuning on their current state-of-the-art model?
Not looking like that's ever going to happen.
Wonder exactly why?
Announcement
Fine-tuning guide
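For anyone who wants to kick the tires anyway, here's a minimal sketch of launching a gpt-3.5-turbo fine-tuning job with the openai Python package (pre-v1, 0.x-era API, as in the announcement); train.jsonl and its contents are placeholder assumptions:

```python
# Minimal gpt-3.5-turbo fine-tuning sketch (openai Python package,
# pre-v1 0.x API, as shown in the announcement).
# "train.jsonl" is a hypothetical file: one JSON object per line, e.g.
# {"messages": [{"role": "system", "content": "..."},
#               {"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
import openai

# 1. Upload the chat-formatted training data.
upload = openai.File.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Launch the fine-tuning job against gpt-3.5-turbo.
job = openai.FineTuningJob.create(
    training_file=upload.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)  # poll until the job reports "succeeded"
```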
Widespread implicit belief: it's only a matter of time before anyone can create their own models competitive with OpenAI's, thanks to tech advances making it cheaper.
"BTW, I don't see cost as an issue. Just as your smartphone chess program can beat the human world champion today, today's DeepMind AI algorithms will run on your granddaughter's smart rings."
Cost to Create a Competitive AI Model Over Time
We don't have to worry, because soon anyone will be able to afford to create their own OpenAI-competitive model at home. This is because, IN THE LONG RUN, technology advances always bring down costs. (A toy model of the competing trends follows the poll below.)
Anonymous Poll
43% TRUE: We DO NOT have to worry about the cost of competing with OpenAI, because soon it will be CHEAP
28% FALSE: We do have to worry about the cost of competing with OpenAI, as costs will VASTLY INCREASE
29% Show results
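To make the disagreement concrete, here's a toy model of the two competing trends (falling cost-per-FLOP vs. rising frontier compute demand); every rate below is an illustrative assumption, not a measurement:

```python
# Toy model: hardware cost-per-FLOP falls over time, but the compute
# needed to stay competitive with the frontier keeps rising.
# Every number below is an illustrative assumption, not a measurement.
COST_HALVING_YEARS = 2.5        # assumed $/FLOP halving time
FRONTIER_DOUBLING_YEARS = 0.75  # assumed frontier-compute doubling time
BASELINE_COST = 100e6           # assumed cost of a frontier run today

for year in range(0, 11, 2):
    hw_factor = 0.5 ** (year / COST_HALVING_YEARS)           # cheaper FLOPs
    demand_factor = 2.0 ** (year / FRONTIER_DOUBLING_YEARS)  # more FLOPs needed
    cost = BASELINE_COST * hw_factor * demand_factor
    print(f"year {year:2d}: frontier-competitive run ~${cost:,.0f}")

# Under these assumptions yesterday's models get cheap fast, while a
# competitive model gets pricier every year; that's the crux of the
# TRUE/FALSE split above.
```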
Is higher energy consumption key to higher quality of life?
See the quality-of-life indexes vs. energy consumption per country.
Sure looks like the more energy expended per person, the happier we are.
(Remember that a large fraction of per-person energy expenditure happens in factories, e.g. to build your phone, and in data centers, e.g. to run your YouTube and ChatGPT.)
Is human appetite for energy consumption unbounded, limited only by what we can currently afford?
[Video]
Sam: Humans should not reduce their energy consumption until we've built a Dyson Sphere around the sun and gotten compute as efficient as possible.
(And only then should we even begin to entertain the idea of slowing down the scaling-up of AI models.)
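For scale, quick arithmetic on how far that remark reaches, using round, widely cited figures:

```python
# Scale check on the Dyson Sphere remark: how many doublings of
# today's energy use fit before hitting total solar output?
# Round, widely cited figures; order-of-magnitude only.
import math

SOLAR_OUTPUT_W = 3.8e26  # total solar luminosity, watts
HUMAN_USE_W = 1.9e13     # ~19 TW average human primary-energy use

headroom = SOLAR_OUTPUT_W / HUMAN_USE_W
print(f"headroom: {headroom:.0e}x, i.e. ~{math.log2(headroom):.0f} doublings")
# -> about 2e13x, roughly 44 doublings of current consumption.
```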
Is Sam right?
Is human desire for energy without bounds, continuing to improve satisfaction all the way up until we're extracting everything we can from the sun? Or is human desire for more energy nearing a saturation point? Assume a fixed human population.
Anonymous Poll
58% YES, Sam is right: human desire for consuming more energy is unbounded, if we can afford it.
19% NO, Sam's wrong: we're quickly running out of things we'd want to do with more energy, even if it were free.
23% Show results
Cost to Create a Competitive AI Model Over Time
We don't have to worry, because soon anyone will be able to afford to create their own OpenAI-competitive model at home. This is because, IN THE LONG RUN, technology advances always bring down costs.
So, will people of tomorrow be content with just being able to create their own GPT-4 at home?
I.e., will increases in efficiency cause the cost of making satisfactory AIs to quickly saturate? Is 640K enough for anyone?
Anonymous Poll
38% YES: costs to make competitive AIs will quickly saturate; people won't badly want more.
40% NO: costs to make competitive AIs will continue to explode, because people will always want more.
22% Show results