AI Model Weight Providers Should Not Police Uses, No Matter How Awful They Are
"The choice of license resides with the companies releasing the models. However, the resources required to build these models put them firmly in the same category as backbone infrastructure providers, so imposing their own values on uses has the same negative effect as internet censorship. They've already crossed from restricting purely illegal uses to restricting specific legal ones (guns, military), as well as 'disinformation', which has become a synonym for disagreement. It's only a matter of time until special interests push for more restrictions, including ones that you don't agree with. To make matters worse, past open source licenses were uniform (other than the copyright/copyleft divide), making it straightforward to assemble a system out of different software components. If we have to start taking the morals imposed by every model provider into account when building AI solutions, that adds a complication which will surely benefit some bureaucrat but is overall a dead weight on the ecosystem."
Article
RLHF wrecks GPT-4's ability to accurately determine truth, per OpenAI's own research notes.
Before RLHF: predicted confidence in an answer generally matches the probability of being correct.
After RLHF: this calibration is greatly reduced.
Why? OpenAI's RLHF forces the AI to lie, making it lose its grasp on what's likely true.
OpenAI on GPT-4
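"Calibration" here means the model's stated confidence tracks how often it is actually right. Below is a minimal, illustrative Python sketch of the standard metric (expected calibration error); the function and the numbers are hypothetical examples, not OpenAI's evaluation code.

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bucket predictions by stated confidence, then compare each bucket's
    # average confidence against its empirical accuracy.
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# Hypothetical numbers: a calibrated model (confidence tracks accuracy) scores low,
# an overconfident model scores high.
calibrated = expected_calibration_error([0.8, 0.8, 0.8, 0.8, 0.2], [True, True, True, False, False])
overconfident = expected_calibration_error([0.95, 0.95, 0.95, 0.95, 0.95], [True, True, True, False, False])
print(calibrated, overconfident)  # ~0.08 vs ~0.35; larger means worse calibration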
Do you think OpenAI's censorship will get better or worse?
Anonymous Poll
14%: Much LESS censorship - because either OpenAI will relent, or jailbreakers will defeat censorship
6%: Stay about the SAME - jailbreakers & OpenAI will keep censorship at an equilibrium where it is now
63%: Much MORE censorship - because OpenAI will add more censorship & more effective anti-jailbreaking
17%: Show Results
EU "Digital Services Act" brings new censorship powers to the EU.
Twitter and other US social networks' "speech not reach" policies bring this censorship worldwide.
Only a matter of time before governments around the world can censor US-made LLMs too.
Eventually the only way to prove that you're not an AI will be to express politically incorrect opinions
Arxiv Link
"There's no way for teachers to figure out if students are using ChatGPT to cheat"
"So says OpenAI, the chatbot's creator, which says AI detectors don't work reliably. Bots like ChatGPT have been causing mayhem in education over the past few months."
Fall semester is coming.
Article