AI Model Weight Providers Should Not Police Uses, No Matter How Awful They Are
"The choice of license resides with the companies releasing the models. However, the resources required to build these models put them firmly in the same category as backbone infrastructure providers, so imposing their own values on uses has the same negative effect as internet censorship. They've already crossed from restricting purely illegal uses to restricting specific legal ones (guns, military), as well as 'disinformation', which has become a synonym for disagreement. It's only a matter of time until special interests push for more restrictions, including ones that you don't agree with. To make matters worse, past open source licenses were uniform (other than the copyright/copyleft divide), making it straightforward to assemble a system out of different software components. If we have to take the morals imposed by every model provider into account when building AI solutions, that is an added complication which will surely benefit some bureaucrat but is overall a dead weight on the ecosystem."
Article
RLHF wrecks GPT-4's ability to accurately determine truth. From OpenAI's own research notes:
Before RLHF: predicted confidence in an answer generally matches the probability of being correct.
After RLHF: this calibration is greatly reduced.
Why? OpenAI's RLHF forces the AI to lie, making it lose its grasp on what's likely true.
OpenAI on GPT-4
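The calibration claim above can be made concrete: a well-calibrated model's stated confidence matches its empirical accuracy, and the standard way to measure the gap is expected calibration error (ECE). A minimal sketch below, using hypothetical confidence/correctness arrays for illustration, not actual GPT-4 data:

```python
# Expected calibration error (ECE): bin predictions by confidence,
# then compare each bin's average confidence to its accuracy.
# All example numbers below are hypothetical, not GPT-4 measurements.

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE = sum over bins of |accuracy - avg confidence|, weighted by bin size."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        in_bin = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not in_bin:
            continue
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        acc = sum(correct[i] for i in in_bin) / len(in_bin)
        ece += (len(in_bin) / n) * abs(acc - avg_conf)
    return ece

# A calibrated model: answers given 80% confidence are right ~80% of the time.
calibrated_hits = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # 8/10 correct
print(round(expected_calibration_error([0.8] * 10, calibrated_hits), 6))  # 0.0

# A miscalibrated model: same 80% confidence, but only 50% correct.
overconfident_hits = [1, 0] * 5  # 5/10 correct
print(round(expected_calibration_error([0.8] * 10, overconfident_hits), 6))  # 0.3
```

The "before RLHF" claim corresponds to the first case (near-zero ECE); the "after RLHF" claim is that the model drifts toward the second.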
Do you think OpenAI's censorship will get better or worse?
Anonymous Poll
14%
Much LESS censorship - because either OpenAI will relent, or jailbreakers will defeat censorship
6%
Stay about the SAME - jailbreakers & OpenAI will keep censorship at an equilibrium where it is now
63%
Much MORE censorship - because OpenAI will add more censorship & more effective anti-jailbreaking
17%
Show Results
EU "Digital Services Act" brings new censorship powers to the EU.
Twitter's and other US social networks' "speech not reach" policies bring this censorship worldwide.
Only a matter of time before governments around the world can censor US-made LLMs too.
Eventually the only way to prove that you're not an AI will be to express politically incorrect opinions
Arxiv Link
"There's no way for teachers to figure out if students are using ChatGPT to cheat"
"So says OpenAI, the chatbot's creator, which says AI detectors don't work reliably. Bots like ChatGPT have been causing mayhem in education over the past few months."
Fall semester is coming.
Article
If OpenAI is so bad at balancing their moderation classifier's training dataset that they deployed a moderation model with this many false positives,
→ Then imagine how poorly they balanced their RLHF training dataset.
Actually, we don't have to imagine. OpenAI has been telling us for months that their AI upgrades have shown only improvements and no degradations, despite how obviously untrue that has been.
Could this all just be incompetence?
(Nah, they lying. Something suspicious going on.)
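The false-positive complaint above boils down to a threshold/balance tradeoff: if the "flag" class is over-represented in training relative to real traffic, the deployed flagging threshold ends up too aggressive on benign text. A minimal sketch with entirely hypothetical scores (not OpenAI's actual moderation system):

```python
# Sketch of the dataset-balance / false-positive tradeoff in a binary
# moderation classifier. All scores and thresholds below are hypothetical
# illustrations, not values from any real moderation API.

def false_positive_rate(threshold, benign_scores):
    """Fraction of benign items whose score meets the flagging threshold."""
    return sum(s >= threshold for s in benign_scores) / len(benign_scores)

# Hypothetical scores the classifier assigns to clearly benign text.
benign_scores = [0.05, 0.10, 0.20, 0.35, 0.45, 0.55, 0.60, 0.70, 0.80, 0.90]

# A threshold tuned on a training set skewed toward "flag" examples
# sits too low, so most benign text gets flagged...
print(false_positive_rate(0.30, benign_scores))  # 0.7

# ...while raising it sharply cuts the false-positive rate on the same data.
print(false_positive_rate(0.75, benign_scores))  # 0.2
```

The post's argument is that visible over-flagging like the first case is evidence the same skew exists in the (unobservable) RLHF training data.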