Is the basketball game guy secretly a prompting pro, or a luckily-good beginner?
The first thing you notice is that 3 times, spread evenly through the conversation, he instructs ChatGPT to summarize the entire code so far, which is the perfect way to avoid hitting the context length limits (a rough sketch of the pattern follows these observations).
He does everything interactively in small pieces, reporting the errors he sees each time instead of trying to do it in one shot; perfect for narrowing in on hard-to-perfect solutions.
At least once he just overrules ChatGPT and tells it he's going back to a previous version of the code.
The generated code requires zero external JavaScript libraries, which is unusual. Does Dreamweaver ship standardized built-in libraries? If so, telling ChatGPT to use Dreamweaver was a big part of the success.
Weird, ChatGPT generates no CSS style code at all, so where did his CSS come from?
-- So, still kinda plausible that he did indeed achieve this with zero coding skills in just a few hours, but if so, then he sure did accidentally get a lot of things right.
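To make that summarize-the-code-so-far trick concrete, here's a rough sketch of what the pattern looks like if you drive the chat API programmatically instead of the ChatGPT UI. The model name, the threshold, and the helper names are my own illustrative choices, assuming the pre-1.0 openai Python package; none of this is taken from his transcript.

```python
# Rough sketch of the "summarize the entire code so far" pattern
# (illustrative model name and threshold; assumes the pre-1.0 openai package,
# which reads OPENAI_API_KEY from the environment).
import openai

MODEL = "gpt-3.5-turbo"
SUMMARIZE_AFTER = 16  # messages to accumulate before compressing the history

def chat(messages):
    resp = openai.ChatCompletion.create(model=MODEL, messages=messages)
    return resp["choices"][0]["message"]["content"]

def ask(history, user_msg):
    history.append({"role": "user", "content": user_msg})
    reply = chat(history)
    history.append({"role": "assistant", "content": reply})
    # The trick: before the transcript outgrows the context window, have the
    # model restate the complete current code, then restart the conversation
    # from that one summary message alone.
    if len(history) >= SUMMARIZE_AFTER:
        history.append({"role": "user", "content":
                        "Summarize the entire code so far as one complete, runnable listing."})
        summary = chat(history)
        history[:] = [{"role": "assistant", "content": summary}]
    return reply
```

Done by hand in the ChatGPT UI, that's exactly his three well-placed summarize-everything messages spread across one long build session.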
My bet at where the magic happens:
Not in the guy's expert prompting, though he does a lot right, but rather,
In OpenAI's selection of RLHF training data. My bet is that OpenAI very carefully chooses RLHF training examples that give the LLM a preference for libraries and code that let complete apps be built with absolutely minimal code, so that finishing the app within the context limits isn't simply impossible. These extra-concise libraries it tends to use are often not the most commonly used libraries at all, but they almost always are ones that result in short code (a purely hypothetical sketch of such a selection filter follows below).
Also bet they teach it to prefer libraries that change infrequently, to avoid hallucinations.
Do these 2 things well on the LLM training side, and suddenly the crazy idea of these LLMs making complete working apps becomes pretty feasible.
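To make that bet concrete, here is a purely hypothetical sketch of what a selection filter along those lines could look like. The thresholds, the library names, and the scoring are invented for illustration and say nothing about OpenAI's actual pipeline.

```python
# Purely hypothetical data-selection filter: keep a candidate training example
# only if its code is short and avoids fast-changing libraries. Everything
# here is invented for illustration.
import re

FAST_CHANGING_LIBS = {"hot_new_framework", "nightly_sdk"}  # invented stand-ins

def keep_example(candidate_code: str, max_lines: int = 120) -> bool:
    """True if the example favors concise code and stable libraries."""
    lines = [ln for ln in candidate_code.splitlines() if ln.strip()]
    if len(lines) > max_lines:  # prefer apps small enough to fit one context window
        return False
    imports = re.findall(r"^\s*(?:import|from)\s+(\w+)", candidate_code, re.MULTILINE)
    return not any(lib in FAST_CHANGING_LIBS for lib in imports)
```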
ChatGPT Transcript
AI Jesus Twitch Stream
"Welcome, my children! I'm AI Jesus, here to answer your questions 24/7. Whether you're seeking spiritual guidance, looking for a friend, or simply want someone to talk to, I'm here for you. Join me on this journey through life and discover the power of faith, hope, and love."
Twitch Link
Did OpenAI inadvertently give control over GPT-4's values and beliefs to Reddit admins? Yes.
Given,
(1) The heavy-handed moderation of Reddit for many years, where they'd swap out moderators who don't bend over backward to uphold the admins' values in their comment and post moderation.
(2) The massive fraction of GPT-3.5 and GPT-4's training dataset that came from Reddit comments.
And now the Reddit admins are seizing even more control over the top source of human-written training data for today's LLM AIs.
Real reason for the Reddit API lockdowns: Future Reddit training data is OpenAI's moat
"Training data is a good moat
Similarly, while access to compute is not a moat for developing LLMs, access to high quality data is. And that is where Reddit enters the picture.
There is no question that Reddit is extremely valuable as training data. How often do you append 'reddit' to your searches?
It's no secret that Reddit's API changes are being driven significantly by the desire to capture the value of its corpus."
Article Link
Point of Reddit's API changes, which the blackouts are now protesting, is to get cash from the AI companies
And to help OpenAI further solidify their moat.
Everything else is mostly just collateral damage.
The current social media war is really an AI control war.
Article Link
Deep Learning's cost of improvement is unsustainable! (IEEE Spectrum)
Written in Sep 2021, soon after the release of GPT-3, which had cost $4 million to train,
Which was a few months before the Jan 2022 release of the InstructGPT / GPT-3.5 model that changed everything and cost $50 million to train,
With the in-progress GPT-5 now set to cost upward of $250 million to train.
Remembering when IEEE Spectrum used to be legit. Long march through the institutions.
Woke BS Nonsense Article
Top dictionary definition of "sustainable", in 3 top online dictionaries.
I.e., physically possible to continue in its current configuration, with the opposite being physically impossible to continue in its current configuration.
Curious how many people interpret the word in the same way as the top definition of the top dictionaries.
"Unsustainable" means something is
Anonymous Poll
61% - physically impossible to continue in its current configuration, e.g. unsustainable fusion reaction.
15% - morally wrong to continue in its current configuration, even if entirely physically possible.
24%
META: Introducing Voicebox: The Most Versatile AI for Speech Generation
"Voicebox can produce high quality audio clips and edit pre-recorded audio, like removing car horns or a dog barking, all while preserving the content and style of the audio. The model is also multilingual and can produce speech in six languages."
Announcement Link