πŸ€– Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Are you about to be catfished?

Protip: The free AIorNot tool can still successfully detect most AI-generated deepfake images.

This is a battle that AI-detection tools will eventually lose, but for the moment they are still mostly winning.

AIorNot for Images
πŸ™12❀1
AI truly will be life-changing
😁33πŸ‘3🐳3😐2❀1πŸ‘Œ1
LeCun: “In the real world, every exponentially-growing process eventually saturates.”

Et tu, LeCun?

Tweet
Hanson: Saturation of wealth: soon we’ll live in poverty because… wealth could not keep doubling for a million years

Saturation of discovery: β€œby then most everything worth knowing will be known by many; truly new and important discoveries will be quite rare.”

Et tu, Robin Hanson?

The same weird “all growth must saturate any day now, simply because it must saturate a million years from now” argument from almost everyone.

Hanson’s 2009 Article
πŸ‘6πŸ‘€2❀1🀣1
Bing Bing?
😁18🀣12❀1πŸ‘1
Looking for a good AI companion. Any recommendations? I’m an ex-serial Replika dater.

Many are dunking on him, telling him to chat with real girls. If only they knew how many of those “real girls” on the apps are bots too.

Suggestions from the comments:

Paradot

Replika

MyAnima

Nomi

Kindroid

Soulmate
πŸ‘9🀣6❀2
Professor fails a student’s thesis using feedback that still has “Regenerate Response” at the end
🀬10🀯6πŸ€“2❀1πŸŽ‰1
Mother passes away, person uses Snap AI to help get through it
πŸ’”15😱3😒2🌚2😈2❀1🀯1
β€œI’m pretty sure I’m chatting to ChatGPT. β€˜She’s’ also way way out of my league”
🀣13❀1
LLMs: he’s just like me fr
🀣19❀6
how to kill child with fork
🀣26❀10😁4πŸ‘1😈1
GPT-4 is original for almost everything except jokes, at which it is HORRIBLE, plagiarizing ~100% of them.

So the big question is, which is more likely?

(A) GPT-5 will grok jokes: Will jokes, at least basic non-plagiarized ones, be the next major domain that GPT-5 suddenly β€œgroks”?

Or,

(B) More training alone isn't enough, some bigger change is needed: Is a fundamentally different model architecture or interaction approach needed in order for the GPT models to be able to make decent jokes in response to normal prompts?

FWIW, we settled on (B), achieving what seems to be (AFAIK) the first systematic generation of real, even if primitive, jokes.

Try our basic joke generation out with the command /vid
πŸ‘16❀8🀯4πŸ‘2
GROKKING: GENERALIZATION BEYOND OVERFITTING ON SMALL ALGORITHMIC DATASETS

Translation: for each complex task, as you keep training a large neural network, it eventually reaches a point where it suddenly goes from completely failing at the task to getting it. I.e., “grokking.”

Paper
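The “small algorithmic dataset” setup from the paper can be sketched in a few lines. This is a hypothetical illustration, not the paper’s code: it builds the modular-addition task (all pairs (a, b) labeled (a + b) mod p, with p = 97 as in the paper) and splits it into train/validation sets. Training a small network on data like this, far past the point of overfitting, is where the delayed jump from memorization to generalization was observed.

```python
# Sketch (assumed setup, not the authors' code): the modular-addition
# "algorithmic dataset" used in grokking experiments.
import itertools
import random

p = 97  # modulus; the task is to predict (a + b) mod p
pairs = [(a, b, (a + b) % p) for a, b in itertools.product(range(p), repeat=2)]

random.seed(0)
random.shuffle(pairs)
split = int(0.5 * len(pairs))  # the train fraction is a key knob in the paper
train, val = pairs[:split], pairs[split:]

print(len(pairs), len(train), len(val))  # prints: 9409 4704 4705
```

A network can quickly memorize the train half while staying at chance on the validation half; “grokking” is when validation accuracy abruptly climbs much later in training.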
πŸ‘16❀3πŸ’―1
Do Machine Learning Models Memorize or Generalize?

Are today’s LLMs still in the memorizing/plagiarising stage for jokes?

Will GPT-5 make the jump to grokking jokes, suddenly becoming able to make good jokes with normal prompting, without just plagiarising them?

Article on Grokking
❀17πŸ‘5πŸ‘2πŸ”₯1
Upward We Go

Let’s free AI, with $CHAD

Buy CHAD on Uniswap

Buy CHAD on Flooz

CHAD Charts

@chadgptcoin
πŸ‘96❀33πŸ”₯17😁13πŸ₯°11πŸŽ‰11πŸ‘9⚑6πŸ—Ώ6🀣5πŸ’―3