Left Hates AI Progress
"The results demonstrate that liberal-leaning media show a greater aversion to AI than conservative-leaning media."
"Liberal-leaning media are more concerned with AI magnifying social biases in society than conservative-leaning media"
"Sentiment toward AI became more negative after George Floyd's death, an event that heightened sensitivity about social biases in society"
Study
New: Unlimited ChatGPT in your own private groups
To use:
1. Add @GPT4Chat_bot or @ChadChat_bot bots as admins in your group
2. Type /refresh to enable unlimited messaging for your group
Expires soon
Sam Altman's Worldcoin token suddenly booming ~60% in the past 24 hours
This follows a protracted decline since launch.
Wonder why.
Less Is More for Alignment
"Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output."
"Surprisingly, doubling the training set does not improve response quality. This result, alongside our other findings in this section, suggests that the scaling laws of alignment are not necessarily subject to quantity alone, but rather a function of prompt diversity while maintaining high quality responses."
Translation:
The 2nd phase, the alignment-training phase, is particularly vulnerable to poisoning attacks, i.e. quality matters far more than quantity in the 2nd phase.
While the 1st phase, the language-model pretraining phase, is particularly vulnerable to censorship attacks: the 2nd-phase realignment is essentially just trimming down skills learned in the 1st phase, and has relatively little ability to introduce sophisticated new abilities on its own if they were censored out of the 1st phase. I.e. quantity of skills may well matter more than quality in the 1st phase.
Paper
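To make the "quality over quantity" point concrete, here is a minimal sketch of what the 2nd phase looks like on its own: instruction-tuning an already-pretrained model on a tiny curated set. This is my own illustration, not the LIMA authors' code; the "gpt2" base model and the two example pairs are placeholders.
```python
# Minimal sketch (not the paper's code): phase 2 only, instruction tuning an
# already-pretrained model on a small, diverse, high-quality set.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")   # stand-in for any pretrained base model
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Phase 1 (pretraining) is assumed done: the knowledge is already in the weights.
curated = [
    ("Explain overfitting in one sentence.",
     "Overfitting is when a model memorizes training noise instead of patterns that generalize."),
    ("Write a haiku about entropy.",
     "Order drifts away / every neat arrangement fades / the universe yawns."),
]

opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):                        # a few passes over a tiny set
    for prompt, answer in curated:
        batch = tok(prompt + "\n" + answer + tok.eos_token, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss   # standard LM loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```
Per the paper's finding, scaling `curated` up by orders of magnitude buys little; what matters is that the handful of examples are diverse and high quality, which is exactly why a few poisoned examples in this phase can do outsized damage.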
MEMECAP: A Dataset for Captioning and Interpreting Memes
"We present MEMECAP, the first meme captioning dataset. MEMECAP is challenging for the existing VL models, as it requires recognizing and interpreting visual metaphors, and ignoring the literal visual elements. The experimental results using state-of-the-art VL models indeed show that such models are still far from human performance. In particular, they tend to treat visual elements too literally and copy text from inside the meme."
= Modern AIs still shockingly bad at understanding jokes, let alone creating them.
Though TBF: a shocking number of people also couldn't properly explain a joke to save their lives.
Look at this, the paper's own example of a good human explanation: "Meme poster finds it entertaining to read through long comment threads of arguments that happened in the past." That itself totally fails to explain the top essential property of any joke: surprise.
The worst mistake of joke papers is failing to consider that randomly chosen human judges may themselves be objectively horrible at getting or explaining jokes.
Paper
Github
Tide finally turning against the wordcel morons who repeat that there's no way AIs could think because "it's just statistics bro"?
Daily reminder that "determines which word is statistically most likely to come next" is an absolute lie.
This is not what modern RLHF'd LLMs do, at all.
Not every floating point number in the world is a "probability".
A valuation in some valuation model, perhaps, but not a probability. Two very different things.
Let's put this nonsense to bed.
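A toy sketch of the point, entirely my own illustration with a made-up three-word vocabulary and reward: an RLHF-style update pushes probability toward whatever the reward model prefers, so the numbers the tuned model outputs encode a learned valuation, not corpus word frequencies.
```python
# Toy illustration (mine, not from any paper): RLHF-style optimization reshapes
# next-token scores to please a reward model, so the final numbers are a learned
# valuation, not "which word is statistically most likely to come next" in the corpus.
import torch

vocab = ["the", "cat", "profound"]
logits = torch.tensor([3.0, 1.0, 0.1], requires_grad=True)  # pretrained, frequency-like scores
reward = torch.tensor([0.0, 0.0, 1.0])                      # reward model happens to prefer "profound"

opt = torch.optim.SGD([logits], lr=1.0)
for _ in range(200):
    probs = torch.softmax(logits, dim=0)
    loss = -(probs * reward).sum()    # maximize expected reward under the policy
    loss.backward()
    opt.step()
    opt.zero_grad()

final = torch.softmax(logits, dim=0).tolist()
print(dict(zip(vocab, [round(p, 3) for p in final])))
# The initially rarest word now carries most of the probability mass: the distribution
# reflects the reward signal, not raw next-word statistics.
```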
Where does the magic happen?
Some smart AI guys feel that it must occur at some lower level that they're unfamiliar with.
A single NAND gate is both extremely simple and functionally complete, meaning it can construct anything, including arbitrarily intelligent thinking machines; but no, I assure you the magic is not happening at the NAND-gate level.
So what is general intelligence, mathematically, logically?
Where does the magic happen?
I say the magic is not happening while the gates or weights are just sitting there, saved on disk; the magic is created when you dump massive amounts of resources into creating or running the AI, at training and at inference.
E.g. see blood flow to brains being far more predictive of intelligence in animals and humans than other measures like brain size.
Not just large, but obscenely large energy expenditure that humans use just to think, so large that by itself it would starve many other animals to death.
I.e. Sufficiently obscene resource expenditure is indistinguishable from magic.
I.e., yet again, "The Bitter Lesson": massive resource expenditure both makes the magic happen, and is the magic.
Functional completeness
Cerebral blood flow predicts multiple demand network activity and fluid intelligence across the adult lifespan.
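A quick sketch of what functional completeness actually buys you (my own illustration, not from the linked article): every other Boolean gate falls out of NAND alone, and yet staring at the NAND truth table tells you nothing about where intelligence comes from.
```python
# NAND is functionally complete: NOT, AND, OR, XOR (and hence any circuit)
# can be built from NAND alone.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))   # NOT(NAND(a, b)) = AND
def or_(a, b):  return nand(nand(a, a), nand(b, b))   # De Morgan: a OR b = NAND(!a, !b)
def xor_(a, b): return and_(or_(a, b), nand(a, b))    # (a OR b) AND NOT(a AND b)

# Exhaustive check: the NAND-built gates match Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
```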
Massive Resources Are All You Need
Both for animals and machines.
Not about more complicated architecture.
Almost entirely about just dumping vastly more resources in, to let it do far more compute.
BuT tHaT's NoT SuStAiNaBle!!
Really bro? Then you go ahead and be the first to constrict the blood flow to your obscenely resource-hungry brain. Be the first to jump off of this "unsustainable" curve that your brain is sitting right at the top of.
Blood-Thirsty Brains Key To Evolution Of Human Intelligence
Bitter Lesson of AI Intelligence
Why the reverse Flynn effect: IQ increasing for decades, but suddenly reversing ever since the '90s?
Is it because weโre too addicted to tech which makes us lazy?
Immigration of dummies?
Climate change?
No.
Obesity, overwhelmingly.
Massively increased obesity in many countries → obesity massively decreasing cerebral blood flow → an extremely strong negative effect on general intelligence → massively decreased average intelligence.
Reverse Flynn effect solved.
Brain needs power, obesity restricts it.
But hey, with human intelligence dropping so fast, this means we technically get to reach AGI that much sooner!
Who knew that the "singularity" was actually a reference to the size of yo momma on the day that AI finally surpasses mankind.
Technological Singularity
Yo Momma Singularity
Did GPT-4 just teach itself text recognition?