Chat GPT
Problem 2: There are now TONS of CAPTCHA-solving-by-cheap-humans API services that you can easily hook into your spam code in 5 minutes. Even if you do find this elusive type of problem that 100% of humans can easily do but machines cannot, it still doesn't…
Plebbit: Solving the social network censorship problem. Very nice. No wait, total joke.
Bros, captchas for solving spam?
Especially in the decentralized setting where captchas never were viable?
And now, with AI and cheap human outsourcing services that really kill captchas dead for good?
No. Not even close to fixable. Not even the right general direction.
Total joke. No.
Plebbit
LLMs: Great or terrible at math?
LLMs are terrible at: arithmetic, i.e. simple mechanical calculations that the cheapest calculator could do
LLMs are great at: the LANGUAGE of math, i.e. the translation of human natural-language descriptions into the analogous math language (which can then be passed off to the mechanical tools).
Just like most humans.
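That division of labor (LLM translates, calculator computes) can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual tool-use code: the "mechanical tool" is a tiny safe arithmetic evaluator, and the input string stands in for what an LLM would emit after translating a natural-language question.

```python
import ast
import operator

# The "mechanical tool": safely evaluate a pure-arithmetic expression.
# This is exactly the kind of string an LLM is good at PRODUCING from
# natural language but bad at actually COMPUTING itself.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Evaluate an arithmetic-only expression via its AST (no eval())."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval").body)

# Pretend the LLM translated "what is twenty-three times forty-seven,
# plus five?" into this string; the calculator does the digit-pushing.
print(calc("23*47 + 5"))  # 1086
```

The point: the model never touches the digits, it only produces the math-language string, which is the part it is actually good at.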
Did you know: bottlenecks / blinding / inability to access certain information / prevention of memorization (e.g. LLMs' surprising inability to effectively work with even a tiny number of digits without losing track and messing it all up) is seen as a critical property of neural network architecture design that enables them to achieve high intelligence?
I.e. the better the memorization, the worse the retardation, all else being equal, e.g. for the same model size.
(Mental arithmetic = extremely memorization-heavy)
Intuitively, blinding the model from taking the easy memorize-and-repeat shortcut is exactly what forces it to do the harder task of figuring out how to solve the hard problems.
Know what else absolutely dominates humans in memorization?
Apes.
Interestingly, in humans there is a massive gender difference in blind memorization ability, but apparently no difference across races.
Socially, consider what this all means in regard to schools' & standardized tests' slow multi-decade shift away from measuring general intelligence, toward just measuring blind memorization ability.
What a coincidence, that those who are on the path to being ~100% of the teachers, happen to be on the side that excels at memorization.
Now you might see how so many of the "smart", who excel in academia through blind memorization, can paradoxically seem so stupid at basic reasoning.
Memorization & intelligence: not only separate, but directly at odds, and neural network architecture design gives us big insight into exactly why that is.
Be happy your LLM alone is bad at arithmetic, because if it wasn't, it'd be much dumber.
Great memorization → great retardation
Fellas, AI has surpassed the babies
"For robots to be useful outside labs and specialized factories we need a way to teach them new useful behaviors quickly. Current approaches lack either the generality to onboard new tasks without task-specific engineering, or else lack the data-efficiency to do so in an amount of time that enables practical use. In this work we explore dense tracking as a representational vehicle to allow faster and more general learning from demonstration. Our approach utilizes Track-Any-Point (TAP) models to isolate the relevant motion in a demonstration, and parameterize a low-level controller to reproduce this motion across changes in the scene configuration. We show this results in robust robot policies that can solve complex object-arrangement tasks such as shape-matching, stacking, and even full path-following tasks such as applying glue and sticking objects together, all from demonstrations that can be collected in minutes."
Arxiv PDF
Google: Go Woke, Translation Broke
Google's translation API has been mysteriously broken since the 25th, and Google is either too lazy or too incompetent to fix it. Many are complaining; Google is dead silent.
This is why the image translation bot we built for this Channel has been offline for the past few days.
Just going to switch it over to OpenAI for translation in a bit.
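For the curious, swapping a translation backend over to OpenAI's chat completions endpoint is roughly a one-function job. This is a hedged sketch, not the channel bot's actual code: the model name, prompt wording, and `build_translation_request` helper are all placeholders I made up for illustration.

```python
def build_translation_request(text: str, target_lang: str = "English") -> dict:
    """Build the JSON body for POST https://api.openai.com/v1/chat/completions.

    The actual HTTP call (with an Authorization: Bearer <API key> header)
    is left out; this just shows the shape of the request.
    """
    return {
        "model": "gpt-4o-mini",  # placeholder; use whatever model fits the bot's budget
        "messages": [
            {
                "role": "system",
                "content": f"Translate the user's text into {target_lang}. "
                           "Reply with the translation only.",
            },
            {"role": "user", "content": text},
        ],
        "temperature": 0,  # keep translations as deterministic as the API allows
    }

req = build_translation_request("Hallo Welt")
print(req["messages"][1]["content"])  # Hallo Welt
```

The nice part of the swap: the rest of the bot (OCR, image overlay) doesn't change, only the function that turns source text into translated text.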
James Damore was right.
Google Community Thread
So, uh, about this new orange logo OpenAI decided to use whenever ChatGPT is malfunctioning…
Only 18% of Americans have ever used ChatGPT, yet
"Younger generations are typically more inclined to adopt new technology. They're more likely to still be in school, where they can benefit from using generative AI by leveraging it for …"
Cheating. The young people in school are all using it for cheating.
This upcoming semester is gonna be wild for classes with take-home reports. Nothing this sudden has ever happened before.
English teachers rekt.
Article