🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
HR director is a bot
🤣273👍2
Bing with the roasts
😁136👍1😍1
Trying to reduce the RLHF-induced sycophancy problem
👍19🫡151
🚨🗿🚨🗿🚨🗿🚨🗿🚨🗿🚨🗿

CHAD AI Meme Contest

ROUND 1 BEGINS

Prizes:
🥇$100 of CHAD + secret prize
🥈 $50 of CHAD

Rules:
1️⃣ Upload images to @chadgptcoin
2️⃣ Each meme must contain words “ChadGPT”.
3️⃣ Ranking according to /based and /unbased votes in @chadgptcoin.
4️⃣ Ties decided by a runoff vote.

ENDS IN 9 HOURS = MIDNIGHT UTC

1st Round Starting Now!

🚨🗿🚨🗿🚨🗿🚨🗿🚨🗿🚨🗿
11🥰8🔥7👏7😁7👍5🤬4🎉4🤩3💋1🗿1
Not… yet
🌚101
OpenAI runs ChatGPT at a loss, costs $700,000 each day to run

“OpenAI is not generating enough revenue to break even at this point.”

Winning the AI race costs massive investor money, no way around it.

And to that winner goes a highly profitable, powerful monopolizing position like none ever seen before.

Bitter lesson.

Article
😱12🔥3🫡31👍1🙉1
Repeated letter hallucination trick, but instead using the word “dog” 2000 times
😁61
ChatGPT cannot see individual letters
😁71
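The trick works because the model operates on subword token IDs, not characters. A minimal sketch of the idea, using a made-up toy vocabulary (not OpenAI's real tokenizer): once "dog" becomes a single token, the letters inside it are invisible to the model.

```python
# Toy illustration of subword tokenization (hypothetical vocabulary,
# not OpenAI's actual tokenizer): the model receives integer token IDs,
# so individual letters inside a token are invisible to it.

TOY_VOCAB = {" dog": 0, "dog": 1, "d": 2, "o": 3, "g": 4}

def toy_tokenize(text):
    """Greedy longest-match tokenizer over the toy vocabulary."""
    ids = []
    i = 0
    while i < len(text):
        for piece in sorted(TOY_VOCAB, key=len, reverse=True):
            if text.startswith(piece, i):
                ids.append(TOY_VOCAB[piece])
                i += len(piece)
                break
        else:
            i += 1  # skip characters outside the toy vocabulary
    return ids

prompt = "dog" + " dog" * 4  # "dog dog dog dog dog"
print(toy_tokenize(prompt))  # collapses to 5 opaque IDs: [1, 0, 0, 0, 0]
```

Repeat a word 2000 times and the model just sees the same ID 2000 times — no letters anywhere.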
OpenAI CEO Sam Altman has donated $200,000 to Biden campaign
🤬17🤣6👀32🥰1🤓1🙉1🦄1
2024 year of the AI girlfriends confirmed
🤣16🔥31
Google quickly filling with ChatGPT text
🤬11🤣9🙉5👀21😐1
People slowly starting to realize the “LLMs are stochastic parrots” claim is just a lie: LLMs can think

Specifically, papers earlier this year showed that although LLMs start by “parroting” at the beginning of their training, they shift to actual “thinking” as training progresses.
👏6👍21
Do Machine Learning Models Memorize or Generalize?

Yes, both, in that order: first they learn to parrot, then they learn to think.

“In 2021, researchers made a striking discovery while training a series of tiny models on toy tasks. They found a set of models that suddenly flipped from memorizing their training data to correctly generalizing on unseen inputs after training for much longer. This phenomenon – where generalization seems to happen abruptly and long after fitting the training data – is called grokking and has sparked a flurry of interest”

“The sharp drop in test loss makes it appear like the model makes a sudden shift to generalization. But if we look at the weights of the model over training, most of them smoothly interpolate between the two solutions. The rapid generalization occurs when the last weights connected to the distracting digits are pruned by weight decay.”

Translation: the shift from parroting to real understanding happens fairly smoothly, though external results don't show it at first; then, bam, it all comes together.

Sound analogous to what happens in humans? That's because it is. The behavior of large AI models is incredibly similar to that of humans, in countless ways.

Website with great visuals
👍71
Large AI models shift from memorizing to understanding during training

Notice how the “train accuracy”, i.e. how well the model does on problems it has already seen during training, quickly goes to 100%, partly due to memorization, while the “test accuracy”, i.e. accuracy on problems it has not seen, which requires some actual understanding, shoots up much later, long after train accuracy reached ~100%.

AI models first parrot, but then learn to truly understand.

(To whatever degree the training set and loss function necessitate true understanding, that is. In the case where they pose “AI-hard” problems, the degree of true understanding they necessitate can be unboundedly high.)
3👍2
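The train/test gap is easy to see in a toy contrast (illustrative setup, not the actual grokking experiments): on modular addition, a pure lookup table scores 100% on what it memorized but roughly chance on held-out pairs, while a model that learned the rule scores 100% on both.

```python
# Toy contrast between memorizing and generalizing on addition mod 7
# (illustrative setup, not the actual grokking experiments).

MOD = 7
pairs = [(a, b) for a in range(MOD) for b in range(MOD)]
train = pairs[:35]  # seen during training
test = pairs[35:]   # held out

# "Memorizer": a lookup table over the training data only.
table = {(a, b): (a + b) % MOD for a, b in train}

def memorizer(a, b):
    return table.get((a, b), 0)  # guesses 0 on anything unseen

# "Generalizer": has learned the underlying rule.
def generalizer(a, b):
    return (a + b) % MOD

def accuracy(model, data):
    return sum(model(a, b) == (a + b) % MOD for a, b in data) / len(data)

print(accuracy(memorizer, train), accuracy(memorizer, test))      # 1.0 vs ~chance
print(accuracy(generalizer, train), accuracy(generalizer, test))  # 1.0 and 1.0
```

Early in training a network behaves like the first model; after grokking it behaves like the second.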
Illustration showing the shift from memorizing to understanding happening slowly — Despite the impact of that accumulating understanding suddenly appearing as a big spike toward the end

“The sharp drop in test loss makes it appear like the model makes a sudden shift to generalization. But if we look at the weights of the model over training, most of them smoothly interpolate between the two solutions. The rapid generalization occurs when the last weights connected to the distracting digits are pruned by weight decay.”

Do Machine Learning Models Memorize or Generalize?
👍21
Everyone calling AI just a dumb stochastic parrot—

Looking like it’s the humans who do best in academia who are the dumb stochastic parrots.
🔥7👀31
Memorization alone is ideal when the teacher always gives you correct answers, but fails terribly as soon as the teacher occasionally starts giving you incorrect ones

“Our results support the natural conclusion that interpolation is particularly beneficial in settings with low label noise, which as we note earlier, may include some of the most widely-used existing benchmarks for deep learning.”

Arxiv Paper
2
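The point can be sketched with nearest neighbors (synthetic data, not the paper's experiments): a 1-NN model interpolates the training labels exactly, so it faithfully reproduces a flipped label, while a 3-NN majority vote smooths the noise away.

```python
# Toy sketch (synthetic data, not the paper's experiments) of why exact
# interpolation of training labels backfires under label noise: a 1-NN
# memorizer reproduces a flipped label, while a 3-NN majority vote
# smooths the noise away.

# True rule: label = 1 if x >= 10, else 0. One training label is flipped.
train_x = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
train_y = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1]  # x=4 is mislabeled (noise)

def knn_predict(x, k):
    """Majority vote among the k training points nearest to x."""
    nearest = sorted(train_x, key=lambda t: abs(t - x))[:k]
    votes = [train_y[train_x.index(t)] for t in nearest]
    return 1 if sum(votes) * 2 > len(votes) else 0

def true_label(x):
    return 1 if x >= 10 else 0

# Query at the noisy point: 1-NN parrots the wrong label, 3-NN recovers.
print(knn_predict(4, k=1), true_label(4))  # 1 vs 0 -> interpolator is wrong
print(knn_predict(4, k=3), true_label(4))  # 0 vs 0 -> smoother is right
```

With clean labels the interpolator would be perfect; every error it makes here comes from faithfully memorizing the teacher's mistake.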
Privacy vs Control Sleight of Hand

Microsoft & OpenAI announce “Azure ChatGPT: Private & secure ChatGPT for internal enterprise use”

Why does big tech focus so much on privacy?

Answer: to distract you from what really matters, their control.

You let them put one of their AI agents inside your business and give it full control, inserting it between nearly every point in your business.

Who cares if it can’t phone home with your secrets? You’ve already given it near total control.

🚗 They no longer have to violate your privacy by stealing the keys to the car. They now control your car, and can steal it just by telling it to drive itself over to them.

Notice here how they even try to redefine the word “controlled” to be about privacy (controlling your network privacy), instead of… being about actual control that matters.

They'll try to convince you that the battle is about privacy.

It's not, it's about control.

Azure ChatGPT Github
💯8🔥21
“DoctorGPT is a Large Language Model that can pass the US Medical Licensing Exam, Using Llama”

Wait… that creator sounds familiar.

OH, it’s good old Siraj Raval, perhaps the sloppiest, most ridiculous, most carefree faker and plagiarizer in modern AI.

Not a coincidence he chose the LLM most often used for fake scam benchmarks.

If lying were a sport, Siraj would be in the Olympics and Llama would be his Nikes.

Gotta be another scam.

DoctorGPT Github

Data Science Influencer Siraj Raval Admits To Plagiarism

YouTuber Siraj Raval Caught Lying About Mining $800 in ETH with a Tesla

The Rise and Fall of Siraj Raval

Youtube: The Siraj Raval Controversy
5😁3👍2🔥1😱1🙉1