JOKES:
When and how will LLMs finally get jokes, and stop just plagiarising them?
Anonymous Poll
33%
GPT-5: Just add more training, and GPT-5 will finally grok jokes
8%
GPT-6: Just add more training, and GPT-6 will finally grok jokes
10%
GPT-7: Just add more training, and GPT-7 will finally grok jokes
29%
NEVER: Just adding more training isn't enough; a change to the model architecture/prompting is needed
20%
Chat GPT
Problem 2: There are now TONS of CAPTCHA-solving-by-cheap-humans API services that you can easily hook into your spam code in 5 minutes. Even if you do find this elusive type of problem that 100% of humans can easily do but machines cannot -- still doesn't…
Plebbit: Solving the social network censorship problem. Very nice. No wait, total joke.
Bros, captchas for solving spam?
Especially in the decentralized setting where captchas never were viable?
And now, with AI and cheap human outsourcing services that really kill captchas dead for good?
No. Not even close to fixable. Not even the right general direction.
Total joke. No.
Plebbit
LLMs: Great or terrible at math?
LLMs are terrible at: arithmetic, i.e. the simple mechanical calculations that the cheapest calculator could do.
LLMs are great at: the LANGUAGE of math, i.e. translating human natural-language descriptions into the analogous math language (which can then be passed off to the mechanical tools).
Just like most humans.
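That division of labor can be sketched in a few lines. This is a toy, not any real product's pipeline: the LLM call is stubbed out with a canned translation (the function name `llm_translate_to_math` and the example question are my invention), and the "cheapest calculator" step is a tiny safe expression evaluator.

```python
import ast
import operator

# Hypothetical: in practice an LLM call would produce this translation.
# Stubbed here so the sketch is self-contained and runnable.
def llm_translate_to_math(question: str) -> str:
    # The step LLMs are good at: natural language -> formal expression.
    return {"What is 1234 times 5678, plus 99?": "1234 * 5678 + 99"}[question]

# The step LLMs are bad at, delegated to a mechanical tool:
# a minimal arithmetic evaluator over a parsed expression tree.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expr: str):
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

question = "What is 1234 times 5678, plus 99?"
print(calculate(llm_translate_to_math(question)))  # 7006751
```

The point of the split: each side does only what it is reliable at, so the LLM never has to carry digits in its head.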
Did you know: bottlenecks / blinding / inability to access certain information / prevention of memorization (e.g. LLMs' surprising inability to work with even a tiny number of digits without losing track and messing it all up) is seen as a critical property of neural network architecture design, one that enables them to achieve high intelligence?
I.e. the better the memorization, the worse the intelligence, all else being equal, e.g. for the same model size.
(Mental arithmetic = an extremely memorization-heavy task.)
Intuitively, blinding the model from taking the easy memorize-and-repeat shortcut is exactly what forces it to do the harder work of figuring out how to solve the hard problems.
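The memorize-vs-generalize trade-off can be shown with a deliberately crude toy (my construction, not from the post): a lookup table "memorizes" the training set perfectly but knows nothing outside it, while a model squeezed through a bottleneck of just two weights is forced to capture the underlying rule, and so handles inputs it never saw.

```python
# Training data: sums of small numbers only.
train = [(a, b, a + b) for a in range(10) for b in range(10)]

# Memorization: perfect recall on seen pairs, useless on unseen ones.
memorizer = {(a, b): s for a, b, s in train}

# "Bottlenecked" model: only two parameters, fit s ~= w1*a + w2*b
# by plain stochastic gradient descent on squared error.
w1 = w2 = 0.0
lr = 0.001
for _ in range(2000):
    for a, b, s in train:
        err = w1 * a + w2 * b - s
        w1 -= lr * err * a
        w2 -= lr * err * b

print(memorizer[(3, 4)])           # seen pair: recalled exactly (7)
print((123, 456) in memorizer)     # unseen pair: the memorizer has nothing
print(round(w1 * 123 + w2 * 456))  # the learned rule generalizes: 579
```

The dict is "smarter" on the training set; the two-weight model is smarter everywhere else, because its capacity is too small to store the answers, so it has to store the rule.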
Know what else absolutely dominates humans in memorization?
Apes.
Interestingly, in humans there is a massive gender difference in blind memorization ability, but apparently no difference across races.
Socially, consider what this all means in regard to schools' & standardized tests' slow multi-decade shift away from measuring general intelligence, toward just measuring blind memorization ability.
What a coincidence that those who are on the path to being ~100% of the teachers happen to be on the side that excels at memorization.
Now you might see how many of the "smart", who excel in academia through blind memorization, can paradoxically seem so stupid at basic reasoning.
Memorization & intelligence: not only separate, but directly at odds, and neural network architecture design gives us big insight into exactly why that is.
Be happy your LLM, on its own, is bad at arithmetic, because if it wasn't, it'd be much dumber.