Mother passes away, person uses Snap AI to help get through it
GPT-4 is original for almost everything, except jokes, at which it is HORRIBLE and plagiarizes ~100% of the time
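For concreteness, one way to sanity-check the "plagiarizes ~100%" claim is to ask the model for a joke many times at high temperature and count how many distinct jokes come back. The sketch below is illustrative, not the methodology behind the figure above; it assumes the openai v1 Python client and an OPENAI_API_KEY in the environment.

```python
# Minimal sketch: estimate how repetitive GPT-4's jokes are by sampling the
# same prompt repeatedly and counting distinct outputs.
# Assumes the `openai` v1.x package and OPENAI_API_KEY set in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def sample_jokes(n: int = 100, model: str = "gpt-4") -> list[str]:
    jokes = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Tell me a joke."}],
            temperature=1.0,  # high temperature, so repeats aren't just greedy decoding
        )
        jokes.append(resp.choices[0].message.content.strip().lower())
    return jokes

jokes = sample_jokes()
counts = Counter(jokes)
print(f"{len(counts)} distinct jokes out of {len(jokes)} samples")
for joke, n in counts.most_common(5):
    print(f"{n:3d}x  {joke[:70]}")
```

A near-duplicate check against known joke corpora would be the natural next step beyond exact-match counting.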
So the big question is, which is more likely?
(A) GPT-5 will grok jokes: Will jokes, at least basic non-plagiarized ones, be the next major domain that GPT-5 suddenly "groks"?
Or,
(B) More training alone isn't enough; some bigger change is needed: Is a fundamentally different model architecture or interaction approach needed for the GPT models to make decent jokes in response to normal prompts?
FWIW, we settled on (B), and used it to achieve what seems to be, AFAIK, the first systematic generation of real, even if primitive, jokes.
Try our basic joke generation out with the command /vid
GROKKING: GENERALIZATION BEYOND OVERFITTING ON SMALL ALGORITHMIC DATASETS
Translation: for a given complex task, as you keep training a large neural network, it eventually reaches a point where it goes from completely failing at the task to suddenly getting it, i.e. "grokking"
Paper
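For intuition, here is a compact version of the kind of experiment behind the paper: a toy network trained on modular addition with strong weight decay, run far past the point where it has memorized the training set. A minimal sketch assuming PyTorch; the architecture and hyperparameters are illustrative stand-ins, not the paper's exact setup.

```python
# Minimal grokking sketch (in the spirit of Power et al., 2022):
# learn (a + b) mod p, train far past overfitting, watch val accuracy jump late.
import torch
import torch.nn as nn

p = 97
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))  # all (a, b) pairs
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
train_idx, val_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]  # 50/50 split

embed = nn.Embedding(p, 128)
mlp = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, p))
opt = torch.optim.AdamW(
    list(embed.parameters()) + list(mlp.parameters()),
    lr=1e-3,
    weight_decay=1.0,  # strong weight decay is a key ingredient for grokking
)
loss_fn = nn.CrossEntropyLoss()

def logits(idx):
    x = embed(pairs[idx])        # (n, 2, 128): one embedding per operand
    return mlp(x.flatten(1))     # concatenate the two operand embeddings

for step in range(50_000):       # deliberately keep training long past overfitting
    opt.zero_grad()
    loss = loss_fn(logits(train_idx), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        with torch.no_grad():
            train_acc = (logits(train_idx).argmax(1) == labels[train_idx]).float().mean()
            val_acc = (logits(val_idx).argmax(1) == labels[val_idx]).float().mean()
        print(f"step {step:6d}  train_acc {train_acc:.2f}  val_acc {val_acc:.2f}")
```

The signature to look for in the printout: train_acc hits 1.00 early while val_acc lingers near chance (1/p), then climbs much later.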
Do Machine Learning Models Memorize or Generalize?
Are today's LLMs still in the memorizing/plagiarizing stage for jokes?
Will GPT-5 make the jump to grokking jokes, and suddenly be able to make good jokes with normal prompting, without just plagiarizing them?
Article on Grokking
JOKES:
When and how will LLMs finally get jokes, and stop just plagiarizing them?
Anonymous Poll
33% - GPT-5: Just add more training, and GPT-5 will finally grok jokes
8% - GPT-6: Just add more training, and GPT-6 will finally grok jokes
10% - GPT-7: Just add more training, and GPT-7 will finally grok jokes
29% - NEVER: Just adding more training isn't enough; a change to the model architecture/prompting is needed
20%