CHAD AI Meme Contest
ROUND 1 BEGINS
Prizes:
🥇 $100 of CHAD + secret prize
🥈 $50 of CHAD
Rules:
1️⃣ Upload images to @chadgptcoin
2️⃣ Each meme must contain the word “ChadGPT”.
3️⃣ Ranking is determined by /based and /unbased votes in @chadgptcoin.
4️⃣ Ties are decided by a runoff vote.
ENDS IN 9 HOURS = MIDNIGHT UTC
1st Round Starting Now!
OpenAI runs ChatGPT at a loss: it costs roughly $700,000 per day to run.
“OpenAI is not generating enough revenue to break even at this point.”
Winning the AI race takes massive amounts of investor money; there is no way around it.
And the winner gets a highly profitable, powerful monopoly position unlike anything seen before.
Bitter lesson.
Article
People are slowly starting to realize the “LLMs are stochastic parrots” claim is just a lie: LLMs can think.
Specifically, papers from earlier this year showed that although LLMs start out by “parroting” at the beginning of their training, they shift to actual “thinking” as training progresses.
Do Machine Learning Models Memorize or Generalize?
Yes, both, in that order: they first learn to parrot, then they learn to think.
“In 2021, researchers made a striking discovery while training a series of tiny models on toy tasks. They found a set of models that suddenly flipped from memorizing their training data to correctly generalizing on unseen inputs after training for much longer. This phenomenon - where generalization seems to happen abruptly and long after fitting the training data - is called grokking and has sparked a flurry of interest”
“The sharp drop in test loss makes it appear like the model makes a sudden shift to generalization. But if we look at the weights of the model over training, most of them smoothly interpolate between the two solutions. The rapid generalization occurs when the last weights connected to the distracting digits are pruned by weight decay.”
Translation: The shift from parroting to real understanding happens fairly smoothly, though external results don't show it at first, and then bam, it all comes together.
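To make the quoted toy setup concrete, here is a minimal sketch of a grokking-style experiment in PyTorch. The task (addition mod 97), the tiny embedding-plus-MLP model, and the AdamW hyperparameters are illustrative assumptions, not the article's exact configuration; the point is just that strong weight decay plus training far past 100% train accuracy is the regime where the delayed generalization shows up.

```python
# Minimal grokking-style toy run (a sketch: the modular-addition task, model size,
# and hyperparameters are illustrative assumptions, not the article's exact setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

P = 97  # toy task: predict (a + b) mod P from the pair (a, b)
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

# Train on a small fraction of all pairs; the rest are held out as the test set.
perm = torch.randperm(len(pairs))
n_train = int(0.3 * len(pairs))
train_idx, test_idx = perm[:n_train], perm[n_train:]

# Tiny model: embed each operand, concatenate, and classify the sum.
embed = nn.Embedding(P, 64)
mlp = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, P))
params = list(embed.parameters()) + list(mlp.parameters())
# Strong weight decay is the ingredient the quoted article credits with
# pruning the memorization weights and leaving the generalizing solution.
opt = torch.optim.AdamW(params, lr=1e-3, weight_decay=1.0)

def logits(idx):
    return mlp(embed(pairs[idx]).flatten(1))

def accuracy(idx):
    with torch.no_grad():
        return (logits(idx).argmax(-1) == labels[idx]).float().mean().item()

# Deliberately keep training long after train accuracy hits ~100%.
for step in range(50_000):
    loss = F.cross_entropy(logits(train_idx), labels[train_idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        print(step, f"train={accuracy(train_idx):.2f}", f"test={accuracy(test_idx):.2f}")
```

In setups along these lines, train accuracy tends to saturate early while test accuracy lags for a long stretch before jumping, which is the pattern the quote describes.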
Sound analogous to what happens in humans? That's because it is. The behavior of large AI models is incredibly similar to that of humans, in countless ways.
Website with great visuals
Large AI models shift from memorizing to understanding during training
Notice how the “train accuracy”, i.e. how well the model does on problems it has already seen during training, quickly goes to 100%, in part due to memorization, while the “test accuracy”, i.e. accuracy on problems it has not seen, which requires some actual understanding, shoots up much later, long after train accuracy reached ~100%.
AI models first parrot, but then learn to truly understand.
(To whatever degree the training set and loss function necessitate true understanding; in the case where they pose an “AI-hard” problem, the degree of true understanding they necessitate can be unboundedly high.)
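As a small illustration of the gap being described, here is a hedged sketch that takes a logged accuracy history (step, train accuracy, test accuracy) from a run like the toy sketch above and reports how much later test accuracy catches up. The `history` values below are made-up placeholders shaped like the described curves, not real measurements.

```python
# Sketch: locating the "memorized" step vs. the much later "generalized" step in a
# logged run. The history values below are made-up placeholders for illustration.
def first_step_above(history, column, threshold=0.99):
    """First training step at which the chosen accuracy column crosses the threshold."""
    for step, train_acc, test_acc in history:
        value = train_acc if column == "train" else test_acc
        if value >= threshold:
            return step
    return None

# Placeholder curve: train accuracy saturates early, test accuracy catches up much later.
history = [
    (1_000, 0.98, 0.02),
    (2_000, 1.00, 0.03),
    (20_000, 1.00, 0.10),
    (35_000, 1.00, 0.95),
    (40_000, 1.00, 1.00),
]

memorized_at = first_step_above(history, "train")  # early: fitting problems already seen
grokked_at = first_step_above(history, "test")     # late: handling problems never seen
print(f"~100% train accuracy at step {memorized_at}; ~100% test accuracy only at step {grokked_at}")
```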