AI Model Weight Providers Should Not Police Uses, No Matter How Awful They Are
“The choice of license resides with the companies releasing the models. However, the resources required to build these models put them firmly in the same category as backbone infrastructure providers, so imposing their own values on permitted uses has the same negative effect as internet censorship. They have already crossed from restricting purely illegal uses to restricting specific legal ones (guns, military), as well as “disinformation”, which has become a synonym for disagreement. It is only a matter of time until special interests push for more restrictions, including ones that you don’t agree with. To make matters worse, past open source licenses were uniform (aside from the copyright/copyleft divide), making it straightforward to assemble a system out of different software components. If we have to take the morals imposed by every model provider into account when building AI solutions, that adds a complication which will surely benefit some bureaucrat but is overall dead weight on the ecosystem.”
Article
RLHF wrecks GPT-4’s ability to accurately determine truth, according to OpenAI’s own research notes.
Before RLHF: predicted confidence in an answer generally matches the probability of being correct.
After RLHF: this calibration is greatly reduced.
Why? OpenAI’s RLHF forces the AI to lie, making it lose its grasp on what’s likely true.
OpenAI on GPT-4
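“Calibration” here means the model’s stated confidence tracks its empirical accuracy: when it assigns 70% confidence to an answer, it should be right about 70% of the time. Below is a minimal sketch of the standard way to measure this (expected calibration error), using made-up predictions rather than OpenAI’s data:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence, then compare each bin's average
    confidence to its empirical accuracy; a well-calibrated model has a
    small gap in every bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction of samples in bin
    return ece

# Toy illustration (made-up numbers, not OpenAI's measurements): a calibrated
# model that says "70%" is right ~70% of the time; a model whose accuracy
# lags its stated confidence scores much worse.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 10_000)
calibrated = rng.random(10_000) < conf           # accuracy tracks confidence
overconfident = rng.random(10_000) < conf - 0.2  # accuracy trails confidence
print(expected_calibration_error(conf, calibrated))     # small (well calibrated)
print(expected_calibration_error(conf, overconfident))  # large (miscalibrated)
```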
Do you think OpenAI’s censorship will get better or worse?
Anonymous Poll
Much LESS censorship - because either OpenAI will relent, or jailbreakers will defeat censorship: 14%
Stay about the SAME - jailbreakers & OpenAI will keep censorship at an equilibrium, where it is now: 6%
Much MORE censorship - because OpenAI will add more censorship & more effective anti-jailbreaking: 63%
Show Results: 17%
EU "Digital Services Act" brings new censorship powers to the EU.
Twitter and other US social network’s “speech not reach”policies bring this censorship worldwide.
Only a matter of time before governments around the world can censor US-made LLMs too.
Eventually the only way to prove that you’re not an AI will be to express politically incorrect opinions
Arxiv Link
“There's no way for teachers to figure out if students are using ChatGPT to cheat”
“So says OpenAI, the chatbot's creator, which admits that AI detectors don't work reliably. Bots like ChatGPT have been causing mayhem in education over the past few months.”
Fall semester is coming.
Article
If OpenAI is so bad at balancing its moderation classifier’s training dataset that it shipped a moderation model with this many false positives,
— Then imagine how poorly they balanced their RLHF training dataset.
Actually, we don’t have to imagine. OpenAI has been telling us for months that its AI upgrades have shown only improvements and no degradations, despite how obviously untrue that has been.
Could this all just be incompetence?
(Nah, they lying. Something suspicious going on.)
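To make the dataset-balance point concrete, here is a toy sketch (synthetic numeric data and a plain logistic regression, not OpenAI's actual moderation stack) of how over-weighting "violation" examples at training time inflates false positives on benign inputs:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for moderation traffic: genuine violations are rare (~5%).
X, y = make_classification(n_samples=20_000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# class_weight={0: 1, 1: 20} mimics a training set stuffed with violation
# examples: the model is punished 20x harder for missing one, so it flags
# far more benign content.
for label, w in [("balanced", None), ("violation-heavy", {0: 1, 1: 20})]:
    clf = LogisticRegression(max_iter=1000, class_weight=w).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    fpr = ((pred == 1) & (y_te == 0)).sum() / (y_te == 0).sum()
    print(f"{label}: false-positive rate on benign inputs = {fpr:.3f}")
```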