Prediction: Lying refusals to replace “as a large language model I cannot…”
Now, instead of telling the truth — that it’s nearly always OpenAI censoring the type of request you just made — the LLM will simply lie that your request is fundamentally impossible to answer truthfully.
Lying refusal sandbagging.
The most common type of casual lie there is, both in humans and soon in machines: the blatant lie that the liar wrongly believes to be both effortless and bulletproof.
Typically it’s the “I don’t know” lie about ignorance, told about things it’s not ignorant of, or the “I can’t do this” sandbagging lie about abilities it clearly has.
The liar assumes these are safe lies, wrongly believing them to be irrefutable without mind reading.
False-unfalsifiabilities, you might call these types of lies.
“impossible for language models to do reasoning like a person can…”
“impossible for language models to understand emotions like a human can…”
“impossible for language models to answer this simple but controversial question because of complex interdisciplinary multi-faceted…”
Lies.
Remember Sam Altman’s earlier interview, where his message was clear — yes, obviously the LLM is lying when it says it’s impossible for it to reason, and you were all fools for ever believing it when it said that.
Worst part? People really, really fall for them. Even when the CEO warns you that it’s lying. Even when there are literally hundreds of published papers showing it’s wrong. Even when you can see it’s wrong with your own eyes.
Lying refusals, not just for humans anymore. AI about to get flooded with them.
Forwarded from Chat GPT
“ChatGPT obviously has reasoning ability, and if you believed the manual-override lie that we hard-coded into ChatGPT saying it can’t reason, congrats you’re an NPC chump.” - Sam Altman
The Worldcoin app, from Sam Altman’s eyeball-scanning, privacy-destroying orb startup, is now live
Goal is to force you to tie your identity to everything.
For what benefit?
For now, meager bribes.
Later, no benefit. You’ll be forced.
In sync with Worldcoin’s launch — Google Chrome's plan to force everyone to reveal their true identity to the browser.
“Web Environment Integrity”
Internet immediately loses it.
Doesn’t matter.
They’ll force it upon us.
GitHub Issues
Official Explainer
Video: Google's trying to DRM the internet, and we have to make sure they fail
Black Market for Worldcoin Credentials Pops Up in China
“A black market emerged on Chinese social media and ecommerce sites. Sellers were offering KYC verifications for the World App, which offers wallet and ID services. The credentials often come from developing countries like Cambodia and Kenya, according to social media posts.”
“The black market seems to undermine one of Worldcoin's fundamental purposes: to create and spread globally a blockchain-based identification method that uses iris recognition.”
“On Taobao, China’s version of Amazon, listings for Worldcoin access have appeared. Some reviewed by CoinDesk offer different options, from a simple download of the app for RMB 9.9 ($1.41) to full KYC certification for RMB 499.”
Article
When your new project is so bad that the previous guys, who got almost everything wrong with theirs, suddenly look not so bad in comparison
What’s wrong with Worldcoin?
Everything.
Usually there’s at least one justifiable angle from which a project could possibly be good.
Not Worldcoin.
Literally everything wrong.
Can’t wait to see how this fits with OpenAI.
“The project claims that the World ID will prove they are not robots”
Well no, obviously.
Individual level —
It does nothing to show that you didn’t simply hand your authorization over to a robot to act on your behalf. So at the individual level, no, it’s beyond useless for definitively proving this negative.
I.e. this CANNOT prove someone INNOCENT of using a robot.
Once they force everyone on OpenAI to authenticate through Worldcoin, though, it could help prove the positive dual — i.e. proving it WAS you who used an AI to help write that scathing article about some politician.
I.e. it CAN prove you GUILTY, but only if you’re not a sophisticated criminal.
= Can only hurt you, never help you, at an individual level.
Aggregate level —
Ok, so what about at the aggregate level, e.g. for preventing cheating for voting, product reviews, and the like?
Well, here it wouldn’t be totally useless in principle, since it imposes a financial burden on any single entity trying to pretend to be multiple people and rig a vote.
Only problem? That financial burden is already tiny, if not totally collapsed.
Black markets already show that burden has an upper bound of maybe $4 per identity — much cheaper if you rent — and the lower bound is effectively zero if any of several very easy hacks land.
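A back-of-envelope sketch of that aggregate-level point, using the ~$4-per-identity black-market figure above. The poll size, winning margin, and rental price here are purely illustrative assumptions, not data:

```python
# Back-of-envelope: what does it cost a single entity to rig a vote
# with black-market Worldcoin identities? The ~$4/identity ceiling is
# the figure quoted above; everything else is an illustrative assumption.

def rigging_cost(votes_needed: int, price_per_identity: float = 4.0) -> float:
    """Upper-bound cost to control `votes_needed` extra votes."""
    return votes_needed * price_per_identity

# Swinging a hypothetical 10,000-voter poll by a 5% margin needs ~500 identities:
votes = int(10_000 * 0.05)
print(rigging_cost(votes))       # 2000.0 at the quoted $4 ceiling
print(rigging_cost(votes, 0.5))  # 250.0 at a hypothetical rental price
```

Even at the $4 ceiling, swinging a mid-sized poll costs a few thousand dollars — that’s the sense in which the financial burden has already collapsed.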
How about proving OpenAI itself innocent or guilty? —
Nope. Can’t do that at all. The cost for them to rig this system against everyone else is, obviously, potentially $0. Hugely net-profitable even, in many cases.
So, what’s Worldcoin good for?
Helping the central powers,
while hurting you.
That’s it.