🚨🗿🚨🗿🚨🗿🚨🗿🚨🗿🚨🗿
CHAD AI Meme Contest
ROUND 1 BEGINS
Prizes:
🥇$100 of CHAD + secret prize
🥈 $50 of CHAD
Rules:
1️⃣ Upload images to @chadgptcoin
2️⃣ Each meme must contain the word “ChadGPT”.
3️⃣ Ranking according to /based and /unbased votes in @chadgptcoin.
4️⃣ Ties decided by a runoff vote.
ENDS IN 9 HOURS = MIDNIGHT UTC
1st Round Starting Now!
🚨🗿🚨🗿🚨🗿🚨🗿🚨🗿🚨🗿
❤11🥰8🔥7👏7😁7👍5🤬4🎉4🤩3💋1🗿1
OpenAI runs ChatGPT at a loss, costs $700,000 each day to run
“OpenAI is not generating enough revenue to break even at this point.”
Winning the AI race takes massive investor money; there is no way around it.
And to that winner goes a highly profitable, powerful monopoly position like none ever seen before.
Bitter lesson.
Article
😱12🔥3🫡3❤1👍1🙉1
Do Machine Learning Models Memorize or Generalize?
Yes, both, in that order: first they learn to parrot, then they learn to think.
“In 2021, researchers made a striking discovery while training a series of tiny models on toy tasks. They found a set of models that suddenly flipped from memorizing their training data to correctly generalizing on unseen inputs after training for much longer. This phenomenon – where generalization seems to happen abruptly and long after fitting the training data – is called grokking and has sparked a flurry of interest”
“The sharp drop in test loss makes it appear like the model makes a sudden shift to generalization. But if we look at the weights of the model over training, most of them smoothly interpolate between the two solutions. The rapid generalization occurs when the last weights connected to the distracting digits are pruned by weight decay.”
Translation: The shift from parroting to real understanding happens fairly smoothly, though external results don't show it at first, and then bam, it all comes together.
Sound analogous to what happens in humans? That's because it is. The behavior of large AI models is incredibly similar to that of humans, in countless ways.
Website with great visuals
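Want to see it yourself? Below is a minimal sketch of the kind of setup that produces grokking, assuming a toy modular-addition task, a small MLP, and AdamW with strong weight decay. The article's own models and hyperparameters differ; this is illustration only, and the exact step where test accuracy jumps will vary.
```python
# Hypothetical grokking demo: learn (a + b) mod p from one-hot pairs.
import torch
import torch.nn as nn

p = 97
torch.manual_seed(0)

# Every (a, b) pair, one-hot encoded; the label is (a + b) mod p.
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
x = torch.cat([nn.functional.one_hot(pairs[:, 0], p),
               nn.functional.one_hot(pairs[:, 1], p)], dim=1).float()
y = (pairs[:, 0] + pairs[:, 1]) % p

# Train on half the pairs: small enough to memorize long before generalizing.
perm = torch.randperm(len(x))
train, test = perm[: len(x) // 2], perm[len(x) // 2:]

model = nn.Sequential(nn.Linear(2 * p, 256), nn.ReLU(), nn.Linear(256, p))
# Weight decay is the ingredient the article credits with eventually pruning
# the memorization weights and triggering generalization.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for step in range(20001):
    loss = nn.functional.cross_entropy(model(x[train]), y[train])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            train_acc = (model(x[train]).argmax(1) == y[train]).float().mean()
            test_acc = (model(x[test]).argmax(1) == y[test]).float().mean()
        # Grokking signature: train accuracy saturates early,
        # test accuracy jumps much later.
        print(f"step {step:6d}  train {train_acc.item():.2f}  test {test_acc.item():.2f}")
```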
👍7❤1
Large AI models shift from memorizing to understanding during training
Notice how the “train accuracy”, i.e. how well the model does on problems it has already seen during training, quickly goes to 100%, in part due to memorization, while the “test accuracy”, i.e. accuracy on problems it has not seen, which requires some actual understanding, shoots up much later, long after train accuracy reached ~100%.
AI models first parrot, but then learn to truly understand.
(To whatever degree the training set and loss function necessitate true understanding, that is; in the case where they pose an “AI-hard” problem, the degree of true understanding they necessitate can be unboundedly high.)
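To make that gap concrete, here is a toy sketch of a pure memorizer, a lookup table; the task and numbers are invented for illustration. It scores 100% on every problem it has seen and barely above chance on the rest, exactly the early-training picture described above.
```python
# Hypothetical memorization-only "model": a lookup table over training pairs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 100, size=(1000, 2))      # toy inputs
y = (X[:, 0] + X[:, 1]) % 7                   # the rule to be "understood"
X_train, y_train = X[:500], y[:500]
X_test, y_test = X[500:], y[500:]

table = {tuple(row): label for row, label in zip(X_train, y_train)}
fallback = np.bincount(y_train).argmax()      # guess the most common class

def predict(row):
    # Perfect recall on memorized inputs, a blind guess on everything else.
    return table.get(tuple(row), fallback)

train_acc = np.mean([predict(r) == t for r, t in zip(X_train, y_train)])
test_acc = np.mean([predict(r) == t for r, t in zip(X_test, y_test)])
print(train_acc, test_acc)  # 1.0 on train, near chance (~1/7) on test
```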
❤3👍2
Illustration showing the shift from memorizing to understanding happening slowly, despite the impact of that accumulating understanding suddenly appearing as a big spike toward the end
"The sharp drop in test loss makes it appear like the model makes a sudden shift to generalization. But if we look at the weights of the model over training, most of them smoothly interpolate between the two solutions. The rapid generalization occurs when the last weights connected to the distracting digits are pruned by weight decay.”
Do Machine Learning Models Memorize or Generalize?
"The sharp drop in test loss makes it appear like the model makes a sudden shift to generalization. But if we look at the weights of the model over training, most of them smoothly interpolate between the two solutions. The rapid generalization occurs when the last weights connected to the distracting digits are pruned by weight decay.”
Do Machine Learning Models Memorize or Generalize?
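Here is a tiny sketch of the pruning effect described in that quote, assuming plain gradient descent with weight decay on a linear model whose last three inputs are pure noise (stand-ins for the “distracting digits”). The numbers are invented; the point is that the useless weights get driven toward zero while the useful ones survive.
```python
# Hypothetical weight-decay pruning demo on a linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, -1.0, 0.0, 0.0, 0.0])  # inputs 2-4 carry no signal
y = X @ true_w

w = rng.normal(size=5)                          # random init
lr, wd = 0.1, 0.01
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(X)           # mean-squared-error gradient
    w -= lr * (grad + wd * w)                   # the weight decay term

# Expect roughly [2, -1, 0, 0, 0]: the distractor weights have been pruned.
print(np.round(w, 3))
```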
👍2❤1
Memorization alone is ideal when the teacher always gives you correct answers, but it fails terribly as soon as the teacher occasionally starts giving you incorrect answers
“Our results support the natural conclusion that interpolation is particularly beneficial in settings with low label noise, which as we note earlier, may include some of the most widely-used existing benchmarks for deep learning.”
arXiv paper
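A hedged sketch of that point: a 1-nearest-neighbor classifier interpolates, i.e. memorizes, its training set perfectly. Harmless with clean labels, but flip a fraction of the training labels and test accuracy drops roughly in step, because every memorized mistake is faithfully reproduced. The task and numbers here are assumptions, not the paper's experiments.
```python
# Hypothetical 1-nearest-neighbor (pure interpolation) under label noise.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    X = rng.normal(size=(n, 2))
    return X, (X[:, 0] + X[:, 1] > 0).astype(int)  # a simple linear rule

X_train, y_train = make_data(2000)
X_test, y_test = make_data(1000)

def one_nn_accuracy(train_labels):
    # Predict each test point with the label of its nearest training point.
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return (train_labels[d.argmin(1)] == y_test).mean()

for noise in [0.0, 0.1, 0.3]:
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise        # flip a random fraction
    y_noisy[flip] = 1 - y_noisy[flip]
    print(f"label noise {noise:.0%}: test accuracy {one_nn_accuracy(y_noisy):.2f}")
```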
❤2
Privacy vs Control Sleight of Hand
Microsoft & OpenAI announce “Azure ChatGPT: Private & secure ChatGPT for internal enterprise use”
Why does big tech focus so much on privacy?
Answer: to distract you from what really matters, their control.
You let them put one of their AI agents inside your business, you give it full control, inserting it in between nearly every point in your business.
Who cares if it can’t phone home with your secrets? You’ve already given it near total control.
🚗 They no longer have to violate your privacy by stealing the keys to the car. They now control your car, and can steal it just by telling it to drive itself over to them.
Notice here how they even try to redefine the word “controlled” to be about privacy (controlling your network privacy), instead of… being about actual control that matters.
They'll try to convince you that the battle is about privacy.
It's not, it's about control.
Azure ChatGPT GitHub
Microsoft & OpenAI announce “Azure ChatGPT: Private & secure ChatGPT for internal enterprise use”
Why does big tech focus so much on privacy?
Answer: to distract you from what really matters, their control.
You let them put one of their AI agents inside your business, you give it full control, inserting it inbetween near every point in your business.
Who cares if it can’t phone home with your secrets? You’ve already given it near total control.
🚗 They no longer have to violate your privacy by stealing the keys to the car. They now control your car, and can steal it just by telling it to drive itself over to them.
Notice here how they even try to redefine the word “controlled” to be about privacy (controlling your network privacy), instead of… being about actual control that matters.
They'll try to convince you that the battle is about privacy.
It's not, it's about control.
Azure ChatGPT Github
💯8🔥2❤1
“DoctorGPT is a Large Language Model that can pass the US Medical Licensing Exam, Using Llama”
Wait… that creator sounds familiar.
OH, it’s good old Siraj Raval, perhaps the sloppiest, most ridiculous, most carefree faker and plagiarizer in modern AI.
Not a coincidence he chose the LLM most often used for fake scam benchmarks.
If lying were a sport, Siraj would be in the Olympics and Llama would be his Nikes.
Gotta be another scam.
DoctorGPT GitHub
Data Science Influencer Siraj Raval Admits To Plagiarism
YouTuber Siraj Raval Caught Lying About Mining $800 in ETH with a Tesla
The Rise and Fall of Siraj Raval
YouTube: The Siraj Raval Controversy
❤5😁3👍2🔥1😱1🙉1