OpenAI moderation gone crazy confirmed:
“I use chatgpt to ask it questions about physics and other school related stuff. Over the past couple of days it started randomly flagging pretty much all my questions. Can someone explain this?”
❤6🤬4👍2👏1
AI Model Weight Providers Should Not Police Uses, No Matter How Awful They Are
“The choice of license resides with the companies releasing the models. However, the resources required to build these models put them firmly in the same category as backbone infrastructure providers, and so applying their own values on uses has the same negative effect as internet censorship. They’ve already crossed from just restricting purely illegal uses to specific legal ones (guns, military), as well as restricted “disinformation” which has become a synonym for disagreement. It’s only a matter of time until special interests push for more restrictions, including ones that you don’t agree with. To make matters worse, past open source licenses were uniform (other than the copyright/copyleft divide) making assembling a system out of different software components straightforward. If we have to start taking the morals imposed by every model provider into account when building AI solutions, this adds an added complication that will surely benefit some bureaucrat but that’s overall a dead weight on the ecosystem.”
Article
❤14👍4🔥3👏2
RLHF wrecks GPT-4’s ability to accurately determine truth — From OpenAI’s own research notes.
Before RLHF: predicted confidence in an answer generally matches the probability of being correct.
After RLHF: this calibration is greatly reduced.
Why? OpenAI’s RLHF forces the AI to lie, making it lose its grasp on what’s likely true.
OpenAI on GPT-4
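The "calibration" being discussed can be made concrete. A model is well calibrated when its stated confidence matches its empirical accuracy: answers given with 80% confidence should be right about 80% of the time. A common way to measure the gap is Expected Calibration Error (ECE). Below is a minimal, self-contained sketch of ECE; the function name and binning scheme are illustrative assumptions, not from the report.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Sketch of Expected Calibration Error (ECE).

    confidences: model-reported probabilities in [0, 1].
    correct: matching 0/1 outcomes (1 = answer was right).
    Returns the bin-weighted average |confidence - accuracy|.
    """
    # Group predictions into equal-width confidence bins.
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))

    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        # Weight each bin's confidence/accuracy gap by its share of samples.
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

# Well calibrated: 80% confidence, right 8 times out of 10 -> ECE ~ 0.
well = expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2)
# Overconfident (the post-RLHF pattern described above):
# 90% confidence but only 50% accuracy -> ECE ~ 0.4.
over = expected_calibration_error([0.9] * 10, [1] * 5 + [0] * 5)
```

On this measure, the report's claim is that the pre-RLHF model's ECE-style gap was small and the post-RLHF model's gap was large: the tuned model states high confidence it can no longer back up.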
🤯21👍9❤6🥰4🤬4💯4👏2😱2
Do you think OpenAI’s censorship will get better or worse?
Anonymous Poll
14%
Much LESS censorship - because either OpenAI will relent, or jailbreakers will defeat censorship
6%
Stay about the SAME - jailbreakers & OpenAI will keep censorship at an equilibrium, where it is now
63%
Much MORE censorship - because OpenAI will add more censorship & more effective anti-jailbreaking
17%
❤5
EU "Digital Services Act" brings new censorship powers to the EU.
Twitter and other US social networks’ “speech not reach” policies bring this censorship worldwide.
Only a matter of time before governments around the world can censor US-made LLMs too.
🤬12😢6❤3