41.7K subscribers
5.53K photos
232 videos
5 files
917 links
πŸ€– Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
ChatGPT has emotions...and a pet hamster
πŸ¦„16🀣11
Apparently the repeated letter random answer trick works on Bing too
😁4🀯2πŸ‘1
Can’t change custom instructions from the app
🀣16❀‍πŸ”₯2😭2⚑1πŸ‘1🫑1
Climb Continues. +15.92%

@chadgptcoin
πŸ”₯7πŸ‘2❀1
πŸ¦„8πŸ‘4πŸ‘3πŸ™‰3πŸ”₯2πŸ—Ώ2❀1πŸ€“1
Umm, that's not...how this game works...
🀣10⚑2❀1
HR director is a bot
🀣27❀3πŸ‘2
Bing with the roasts
😁13❀6πŸ‘1😍1
Trying to reduce the RLHF-induced sycophancy problem
πŸ‘19🫑15❀1
πŸš¨πŸ—ΏπŸš¨πŸ—ΏπŸš¨πŸ—ΏπŸš¨πŸ—ΏπŸš¨πŸ—ΏπŸš¨πŸ—Ώ

CHAD AI Meme Contest

ROUND 1 BEGINS

Prizes:
πŸ₯‡$100 of CHAD + secret prize
πŸ₯ˆ $50 of CHAD

Rules:
1️⃣ Upload images to @chadgptcoin
2️⃣ Each meme must contain the word “ChadGPT”.
3️⃣ Ranking according to /based and /unbased votes in @chadgptcoin.
4️⃣ Ties decided by a runoff vote.

ENDS IN 9 HOURS = MIDNIGHT UTC

1st Round Starting Now!

πŸš¨πŸ—ΏπŸš¨πŸ—ΏπŸš¨πŸ—ΏπŸš¨πŸ—ΏπŸš¨πŸ—ΏπŸš¨πŸ—Ώ
❀11πŸ₯°8πŸ”₯7πŸ‘7😁7πŸ‘5🀬4πŸŽ‰4🀩3πŸ’‹1πŸ—Ώ1
Not… yet
🌚10❀1
OpenAI runs ChatGPT at a loss, reportedly costing $700,000 per day to run

β€œOpenAI is not generating enough revenue to break even at this point.”

Winning the AI race takes massive investor money; there's no way around it.

And the winner gets a highly profitable, powerful monopoly position like none ever seen before.

Bitter lesson.

Article
😱12πŸ”₯3🫑3❀1πŸ‘1πŸ™‰1
Repeated letter hallucination trick, but instead using the word β€œdog” 2000 times
😁6❀1
ChatGPT cannot see individual letters
😁7❀1
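The reason: LLMs consume token IDs, not characters, so a common word like “dog” arrives as a single opaque ID and the model never sees the letters inside it. Below is a toy greedy longest-match tokenizer to illustrate the idea; the vocabulary and IDs are invented for this example and are not the real GPT tokenizer.

```python
# Toy illustration of why an LLM "cannot see individual letters":
# the model receives token IDs, not characters. The vocabulary and
# IDs below are hypothetical, NOT the actual GPT tokenizer.

TOY_VOCAB = {"dog": 1017, " dog": 1018, "cat": 2203}

def encode(text: str, vocab: dict) -> list[int]:
    """Greedy longest-match tokenization over the toy vocabulary."""
    ids = []
    i = 0
    while i < len(text):
        # Try the longest possible piece first, shrinking to length 1.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                ids.append(vocab[piece])
                i += length
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return ids

ids = encode("dog dog", TOY_VOCAB)
print(ids)  # [1017, 1018] -- the letters d, o, g never appear individually
```

So when asked to count letters in “dog”, the model is reasoning about the ID 1017, not the string “d-o-g”, which is why letter-level tricks trip it up.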
OpenAI CEO Sam Altman has donated $200,000 to Biden campaign
🀬17🀣6πŸ‘€3❀2πŸ₯°1πŸ€“1πŸ™‰1πŸ¦„1
2024 year of the AI girlfriends confirmed
🀣16πŸ”₯3❀1
Google quickly filling with ChatGPT text
🀬11🀣9πŸ™‰5πŸ‘€2❀1😐1
People slowly starting to realize the β€œLLMs are stochastic parrots” claim is just a lie, LLMs can think

Specifically, papers earlier this year showed that although LLMs start by “parroting” at the beginning of their training, they shift to actual “thinking” as training progresses.
πŸ‘6πŸ‘2❀1
Do Machine Learning Models Memorize or Generalize?

Yes, both, in that order: first they learn to parrot, then they learn to think.

β€œIn 2021, researchers made a striking discovery while training a series of tiny models on toy tasks. They found a set of models that suddenly flipped from memorizing their training data to correctly generalizing on unseen inputs after training for much longer. This phenomenon – where generalization seems to happen abruptly and long after fitting the training data – is called grokking and has sparked a flurry of interest”

β€œThe sharp drop in test loss makes it appear like the model makes a sudden shift to generalization. But if we look at the weights of the model over training, most of them smoothly interpolate between the two solutions. The rapid generalization occurs when the last weights connected to the distracting digits are pruned by weight decay.”

Translation: The shift from parroting to real understanding happens fairly smoothly, though external results don't show it at first, and then bam, it all comes together.

Sound analogous to what happens in humans? That's because it is. The behavior of large AI models is incredibly similar to that of humans, in countless ways.

Website with great visuals
πŸ‘7❀1
Large AI models shift from memorizing to understanding during training

Notice how the “train accuracy” (how well the model does on problems it has already seen during training) quickly goes to 100%, partly due to memorization, while the “test accuracy” (on problems it has not seen, which require some actual understanding) shoots up much later, long after train accuracy reached ~100%.

AI models first parrot, but then learn to truly understand.

(This holds to whatever degree the training set and loss function necessitate true understanding; when they pose “AI-hard” problems, the degree of true understanding they necessitate can be unboundedly high.)
❀3πŸ‘2