🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Chat GPT
LeCun: "In the real world, every exponentially-growing process eventually saturates."

Et tu, LeCun?

Tweet
Hanson: Saturation of wealth: soon we’ll live in poverty because… wealth could not keep doubling for a million years

Saturation of discovery: “by then most everything worth knowing will be known by many; truly new and important discoveries will be quite rare.”

Et tu, Robin Hanson?

The same weird "all growth must saturate any day now, simply because it must saturate a million years from now" argument, from almost everyone.

Hanson’s 2009 Article
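
For scale, a quick back-of-the-envelope check (the 15-year doubling time below is a made-up illustrative number, not Hanson's): the "must saturate within a million years" half of the argument is trivially true, but it says nothing about the "any day now" half.

```python
# Back-of-the-envelope: could wealth keep doubling for a million years?
# The 15-year doubling time is purely illustrative; the exact value barely matters.
import math

doubling_time_years = 15
horizon_years = 1_000_000

doublings = horizon_years // doubling_time_years   # ~66,666 doublings
growth_digits = doublings * math.log10(2)          # decimal digits of the total growth factor

print(f"doublings in a million years: {doublings:,}")
print(f"total growth factor: ~10^{growth_digits:,.0f}")  # ~10^20,069
print("atoms in the observable universe: ~10^80")
# So eventual saturation is certain, but nothing here dates it to "any day now".
```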
Bing Bing?
Looking for a good AI companion. Any recommendations? I’m an ex-serial Replika dater.

Many are dunking on him, telling him to chat with real girls instead. If only they knew how many of those "real girls" on the apps are bots too.

Suggestions from the comments:

Paradot

Replika

MyAnima

Nomi

Kindroid

Soulmate
Professor fails a student's thesis; the feedback ends with "Regenerate Response"
Mother passes away, person uses Snap AI to help get through it
“I’m pretty sure I’m chatting to ChatGPT. ‘She’s’ also way way out of my league”
LLMs: he’s just like me fr
how to kill child with fork
GPT-4 is original for almost everything except jokes, at which it is HORRIBLE and plagiarizes ~100% of the time.

So the big question is, which is more likely?

(A) GPT-5 will grok jokes: Will jokes, at least basic non-plagiarized ones, be the next major domain that GPT-5 suddenly “groks”?

Or,

(B) More training alone isn't enough; some bigger change is needed: Is a fundamentally different model architecture or interaction approach needed for GPT models to make decent jokes in response to normal prompts?

FWIW, we settled on (B), and with it achieved what seems, AFAIK, to be the first systematic generation of real, if primitive, jokes.

Try out our basic joke generation with the command /vid
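
The post doesn't say how the bot actually works, so purely as an illustration of what an "interaction approach" under (B) could look like, here's a sketch that decomposes joke writing into steps instead of a single "tell me a joke" prompt. Every function name and prompt below is hypothetical, and call_llm is a stand-in for any chat-completion API, not the channel's actual method.

```python
# Hypothetical sketch of option (B): an interaction approach that decomposes
# joke generation into structured steps rather than one "tell me a joke" prompt.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError

def generate_joke(topic: str) -> str:
    # Step 1: find two unrelated senses/associations of the topic.
    senses = call_llm(
        f"List two unrelated meanings or associations of '{topic}', one per line."
    )
    # Step 2: build a setup that is compatible with both senses.
    premise = call_llm(
        f"Write a one-sentence joke setup about '{topic}' that stays ambiguous "
        f"between these two senses:\n{senses}"
    )
    # Step 3: resolve the ambiguity with the less expected sense.
    punchline = call_llm(
        f"Given the setup below, write a punchline that resolves it using the "
        f"LESS expected sense.\nSetup: {premise}"
    )
    return f"{premise} {punchline}"
```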
GROKKING: GENERALIZATION BEYOND OVERFITTING ON SMALL ALGORITHMIC DATASETS

Translation: for a given complex task, as you keep training a large neural network, it can eventually reach a point where it suddenly goes from completely failing at the task to getting it. I.e., "grokking".

Paper
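
A minimal way to see the phenomenon yourself, loosely following the paper's small-algorithmic-dataset setup (the hyperparameters here are illustrative guesses, not the paper's exact values): train a small network on modular addition with strong weight decay, and keep logging validation accuracy long after training accuracy hits 100%.

```python
# Minimal grokking-style experiment: modular addition (a + b mod p) with
# weight decay, trained far past the point of overfitting.
import torch
import torch.nn as nn

p = 97
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
split = len(pairs) // 2                      # 50% of pairs for training
train_idx, val_idx = perm[:split], perm[split:]

model = nn.Sequential(
    nn.Embedding(p, 64),                     # shared embedding for a and b
    nn.Flatten(),                            # (batch, 2, 64) -> (batch, 128)
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, p),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        return (model(pairs[idx]).argmax(-1) == labels[idx]).float().mean().item()

for step in range(50_000):                   # deliberately train way past overfitting
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # Expect train accuracy to hit ~1.0 early while val accuracy sits near
        # chance, then (possibly much later) jump up: "grokking".
        print(step, f"train={accuracy(train_idx):.2f}", f"val={accuracy(val_idx):.2f}")
```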
Do Machine Learning Models Memorize or Generalize?

Are today's LLMs still in the memorizing/plagiarizing stage for jokes?

Will GPT-5 make the jump to grokking jokes, suddenly able to make good ones from normal prompts without just plagiarizing them?

Article on Grokking
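
One way to make the "memorize or generalize?" question concrete for jokes (the corpus and the generated joke below are placeholder data, and word 5-grams are an arbitrary choice of unit): measure how much of a model's output appears verbatim in a reference joke corpus.

```python
# Illustrative memorization check: compare a model's output joke against a
# reference corpus using normalized text and word n-gram overlap.
import re

def normalize(text: str) -> str:
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = normalize(text).split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def plagiarism_score(joke: str, corpus: list[str], n: int = 5) -> float:
    """Fraction of the joke's word 5-grams that appear verbatim in the corpus."""
    joke_grams = ngrams(joke, n)
    if not joke_grams:
        return 0.0
    corpus_grams = set().union(*(ngrams(c, n) for c in corpus))
    return len(joke_grams & corpus_grams) / len(joke_grams)

# Placeholder data, not a real evaluation:
corpus = ["Why did the scarecrow win an award? Because he was outstanding in his field."]
generated = "Why did the scarecrow win an award? He was outstanding in his field."
print(f"overlap: {plagiarism_score(generated, corpus):.0%}")  # high overlap -> likely memorized
```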