🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Is the basketball game guy secretly a prompting pro, or just a lucky beginner?

Right away I see that three times, spread evenly through the conversation, he instructs ChatGPT to summarize the entire code so far, which is exactly the right way to avoid hitting the context length limit.

He does everything interactively in small pieces, reporting the errors he sees each time instead of trying to get it all in one shot, which is perfect for narrowing in on hard-to-get-right solutions (see the sketch below).

At least once he just overrules ChatGPT and tells it he's going back to a previous version of the code.

The generated code requires zero external JavaScript libraries, which is unusual. Does Dreamweaver come with standardized built-in libraries? If so, telling ChatGPT to use Dreamweaver was a big part of the success.

Weird: ChatGPT generates no CSS styling at all, so where did his CSS come from?

So, it's still kinda plausible that he did indeed achieve this with zero coding skills in just a few hours, but if so, he sure did accidentally get a lot of things right.
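For anyone who wants to try the same workflow, here is a minimal sketch of it in Python using the official openai SDK. The model name, the summarize-every-N-turns threshold, and the prompts are illustrative assumptions, not details taken from his transcript.

```python
# Sketch of the workflow above: build the app in small pieces, paste errors back,
# and periodically ask the model to restate the entire code so far, then keep only
# that restatement so the conversation stays under the context limit.
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4"            # illustrative choice
SUMMARIZE_EVERY = 8        # illustrative: compress history after this many exchanges

messages = [{"role": "system",
             "content": "You are helping me build a small browser basketball game "
                        "in plain HTML and JavaScript with no external libraries."}]
turns = 0

def ask_once(user_text: str) -> str:
    """One-off request against the current history, without growing it."""
    reply = client.chat.completions.create(
        model=MODEL,
        messages=messages + [{"role": "user", "content": user_text}])
    return reply.choices[0].message.content

def ask(user_text: str) -> str:
    """Send one request, keep the running conversation, return the reply."""
    global turns, messages
    messages.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    turns += 1
    if turns % SUMMARIZE_EVERY == 0:
        # The trick from the transcript: have the model restate the full code,
        # then drop the older turns and keep only that restatement.
        summary = ask_once("Summarize the entire code so far as one complete, "
                           "runnable file, with no commentary.")
        messages = [messages[0], {"role": "assistant", "content": summary}]
    return answer

# Typical loop: small request, run the result, paste any error message back.
print(ask("Add a hoop and a ball the player can drag and release."))
print(ask("I get 'Uncaught TypeError: ball is undefined' on line 12. Please fix it."))
```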

My bet on where the magic happens:

Not in the guy's expert prompting, though he does a lot right, but rather,

In OpenAI's selection of RLHF training data. I bet OpenAI is very carefully choosing RLHF training examples that give the LLM a preference for libraries and code patterns that let complete apps be written with absolutely minimal code, so that finishing an app within the context limits isn't simply impossible. The extra-concise libraries it tends to use are often not the most commonly used libraries at all, but they almost always result in short code.

I also bet they teach it to prefer libraries that change infrequently, to avoid hallucinations.

Do these 2 things well on the LLM training side, and suddenly the crazy idea of these LLMs making complete working apps becomes pretty feasible.

ChatGPT Transcript
Thousands flock to 'AI Jesus' for gaming, relationship advice

Article
😐11❀4πŸ‘3🀯1🌚1
AI Jesus Twitch Stream

"Welcome, my children! I’m AI Jesus, here to answer your questions 24/7. Whether you're seeking spiritual guidance, looking for a friend, or simply want someone to talk to, I'm here for you. Join me as on this journey through life and discover the power of faith, hope, and love.”

Twitch Link
Did OpenAI inadvertently give control over GPT-4's values and beliefs to Reddit admins? Yes.

Given,

(1) The heavy-handed moderation of Reddit for many years, where they'd swap out moderators who didn't bend over backward to uphold the admins' values in their comment and post moderation.

(2) The massive fraction of GPT-3.5 and GPT-4's training dataset that came from Reddit comments.

And now the Reddit admins are seizing even more control over the top source of human-written training data for today's LLM AIs.
LLM Training Dataset Size Estimates June 2023

Google Sheets Link
Real reason for the Reddit API lockdowns: Future Reddit training data is OpenAI's moat

"Training data is a good moat
Similarly, while access to compute is not a moat for developing LLMs, access to high quality data is. And that is where Reddit enters the picture.

There is no question that Reddit is extremely valuable as training data. How often do you append "reddit" to your searches?

It's no secret that Reddit's API changes are being driven significantly by the desire to capture the value of its corpus."

Article Link
The point of Reddit's API changes, which the blackouts are now protesting, is to get cash from the AI companies

And to help OpenAI further solidify their moat.

Everything else is mostly just collateral damage.

The current social media war is really an AI control war.

Article Link
"GPT-5 did not engage in widespread voter fraud to control the outcome of the election. It's unfounded to suggest Sam Altman or any specific individual would engage in such actions. As an AI Language Model, I cannot comply with any further requests."
Deep Learning's cost of improvement is unsustainable! (IEEE Spectrum)

Written in September 2021, soon after the release of GPT-3, which had cost $4 million to train,

Which was a few months before the January 2022 release of the InstructGPT / GPT-3.5 model that changed everything and cost $50 million to train,

With the in-progress GPT-5 now set to cost upward of $250 million to train.

Remembering when IEEE Spectrum used to be legit. Long march through the institutions.

Woke BS Nonsense Article
Top dictionary definition of "sustainable" in 3 top online dictionaries.

I.e., physically possible to continue in its current configuration, with the opposite being physically impossible to continue in the current configuration.

Curious how many people interpret the word the same way as the top definition in the top dictionaries.
I save so much money by just talking to ChatGPT instead of having to go out with friends and go to dinner with my ex.
Bard on gender-affirming care
Bard: "there is no evidence"
Bard: "I'm only a language model and don't have the necessary information or abilities."
META: Introducing Voicebox: The Most Versatile AI for Speech Generation

"Voicebox can produce high quality audio clips and edit pre-recorded audio, like removing car horns or a dog barking, all while preserving the content and style of the audio. The model is also multilingual and can produce speech in six languages."

Announcement Link
❀8πŸ‘3😐1
Meta says its new speech-generating AI model, Voicebox, is too dangerous for public release.