In sync with Worldcoin’s launch — Google Chrome's plan to force everyone to reveal their true identity to the browser.
"Web Environment Integrity”
Internet immediately loses it.
Doesn’t matter.
They’ll force it upon us.
GitHub Issues
Official Explainer
Video: Google's trying to DRM the internet, and we have to make sure they fail
"Web Environment Integrity”
Internet immediately loses it.
Doesn’t matter.
They’ll force it upon us.
Github Issues
Official Explainer
Video: Google's trying to DRM the internet, and we have to make sure they fail
Black Market for Worldcoin Credentials Pops Up in China
“A black market emerged on Chinese social media and ecommerce sites. Sellers were offering KYC verifications for the World App, which offers wallet and ID services. The credentials often come from developing countries like Cambodia and Kenya, according to social media posts.”
“The black market seems to undermine one of Worldcoin's fundamental purposes: to create and spread globally a blockchain-based identification method that uses iris recognition.”
“On Taobao, China’s version of Amazon, listings for Worldcoin access have appeared. Some reviewed by CoinDesk offer different options, from a simple download of the app for RMB 9.9 ($1.41) to full KYC certification for RMB 499.”
Article
When your new project is so bad that the previous guys, who got almost everything wrong with theirs, are suddenly looking not so bad in comparison
What’s wrong with Worldcoin?
Everything.
Usually there’s at least one justifiable angle where a project could possibly be good.
Not Worldcoin.
Literally everything wrong.
Can’t wait to see how this fits with OpenAI.
“The project claims that the World ID will prove they are not robots”
Well no, obviously.
Individual level —
It doesn’t show at all that you didn’t simply hand over your authorization to a robot to do whatever it wants on your behalf. So at the individual level, no, it’s beyond useless at definitively proving this negative.
I.e. this CANNOT prove someone INNOCENT of using a robot.
Once they force everyone on OpenAI to authenticate through Worldcoin, though, then it could help to prove the positive dual — i.e. proving it WAS you who used an AI to help you write that scathing article about some politician.
I.e. it CAN prove you GUILTY, but only if you’re not a sophisticated criminal.
= Can only hurt you, never help you, at an individual level.
Aggregate level —
Ok, so what about at the aggregate level, e.g. for preventing cheating for voting, product reviews, and the like?
Well, here it wouldn’t be totally useless in principle, as it creates a financial burden for a single entity to pretend to be multiple people, and rig a vote.
Only problem? That financial burden is already tiny, if not totally collapsed.
Black markets are already showing that financial burden to have an upper bound of maybe $4 per identity, much cheaper if you rent, and a lower bound of effectively zero if any of several very easy hacks happen.
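For a sense of scale, here’s a minimal back-of-the-envelope sketch. The per-identity prices are assumptions loosely based on the black-market figures quoted above, and the 10,000-identity target is purely illustrative:

# Back-of-the-envelope sketch (illustrative assumptions, not hard figures):
# what it costs a single entity to control N "verified unique humans".
def rigging_cost(identities_needed: int, price_per_identity: float) -> float:
    """Total cost to buy or rent enough black-market identities to swing a vote."""
    return identities_needed * price_per_identity

# Assumed price points: ~$4 to buy, ~$1.41 to rent cheaply, $0 via an exploit.
for price in (4.00, 1.41, 0.00):
    print(f"10,000 identities at ${price:.2f} each: ${rigging_cost(10_000, price):,.2f}")

Pocket change for anyone with a real incentive to rig a vote or flood product reviews.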
How about proving OpenAI innocent or guilty? —
Nope. Can’t do that at all. The cost for them to rig this system against everyone else is, obviously, potentially $0. Hugely net-profitable even, in many cases.
So, what’s Worldcoin good for?
Helping the central powers,
while hurting you.
That’s it.
“proof of personhood”
Correction, Vitalik: those are all trivially broken.
And no, combining those all together into an even more obfuscated mess does nothing.
- Nothing but hide the reality that they don’t actually solve the problem at all.
Just like your gigantic wordcel articles, Vitalik.
Wait, social-graph-based? You mean the web-of-trust methods that have been thoroughly tried and thoroughly failed for over THIRTY YEARS?
Hardware solutions? You mean trusted execution environments, so plagued with constant hacks over the decades that even the major chip manufacturers have finally given up on them for most of their chips?
Nothing good about human-identity-based solutions.
Only ever helps the central powers, hurts the people.
Vitalik’s Article
EPIC on Worldcoin
“Worldcoin is a potential privacy nightmare that offers a biometrics-dependent vision of digital identity and cryptocurrency, and would place Sam Altman’s Tools for Humanity company at the center of digital governance. Worldcoin’s approach creates serious privacy risks by bribing the poorest and most vulnerable people to turn over unchangeable biometrics like iris scans and facial recognition images in exchange for a small payout. Mass collections of biometrics like Worldcoin threaten people’s privacy on a grand scale, both if the company misuses the information it collects, and if that data is stolen. Ultimately, Worldcoin wants to become the default digital ID and a global currency without democratic buy-in at the start, that alone is a compelling reason not to turn over your biometrics, personal information, and geolocation data to a private company. We urge regulatory agencies around the world to closely scrutinize Worldcoin.”
Only thing they’re off the mark on is privacy. Worldcoin WANTS you to think this is all just whining about privacy.
It’s about control, extreme exploitation, monopolization. This is among the most successful evil plans to take over the whole world that we’ve seen in decades, and it’s now really starting to come together.
Statement
OpenAI Quietly Shuts Down Its AI Detection Tool
ChatGPT creator OpenAI quietly unplugged its AI detection tool, AI Classifier, last week because of “its low rate of accuracy,” the firm said. The explanation was not in a new announcement, but in a note added to the blog post that first announced the tool. The link to OpenAI’s classifier is no longer available.
OpenAI called the classifier “not fully reliable,” adding that evaluations on a “challenge set” of English texts correctly identified 26% of AI-written text as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time.
Article
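Those two numbers are worse than they sound. Here’s a quick Bayes sketch of what a 26% true-positive rate and a 9% false-positive rate actually buy you; the priors are illustrative assumptions, not from the article:

# Minimal sketch (assumed priors): how much a "likely AI-written" flag is worth
# from a detector with a 26% true-positive rate and a 9% false-positive rate.
def p_ai_given_flag(prior_ai: float, tpr: float = 0.26, fpr: float = 0.09) -> float:
    """Bayes' rule: P(text is AI-written | detector flagged it)."""
    return (tpr * prior_ai) / (tpr * prior_ai + fpr * (1 - prior_ai))

for prior in (0.10, 0.50):  # assumed share of incoming text that is AI-written
    print(f"prior {prior:.0%} AI-written -> a flag implies AI with {p_ai_given_flag(prior):.0%} probability")
# And either way, 74% of AI-written text is never flagged at all.

So even when half the incoming text is machine-written, a flag is only right about three times in four, and the detector still misses 74% of the AI text. Hence the quiet shutdown.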
Day 3 of Worldcoin Launch