🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up to date and learn more about its capabilities.
AI Model Weight Providers Should Not Police Uses, No Matter How Awful They Are

“The choice of license resides with the companies releasing the models. However, the resources required to build these models put them firmly in the same category as backbone infrastructure providers, and so imposing their own values on uses has the same negative effect as internet censorship. They’ve already crossed from restricting purely illegal uses to restricting specific legal ones (guns, military), as well as restricting “disinformation,” which has become a synonym for disagreement. It’s only a matter of time until special interests push for more restrictions, including ones that you don’t agree with. To make matters worse, past open source licenses were uniform (other than the copyright/copyleft divide), making it straightforward to assemble a system out of different software components. If we have to take the morals imposed by every model provider into account when building AI solutions, this adds a complication that will surely benefit some bureaucrat but is overall a dead weight on the ecosystem.”

Article
14👍4🔥3👏2
Gaming publisher Activision to deploy AI to monitor voice chats for "hate speech" and "harassment" during online matches

Article
🤬17😱9👍73
RLHF wrecks GPT-4’s ability to accurately determine truth — From OpenAI’s own research notes.

Before RLHF: predicted confidence in an answer generally matches the probability of being correct.

After RLHF: this calibration is greatly reduced.

Why? OpenAI’s RLHF forces the AI to lie, making it lose its grasp on what’s likely true.
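
Calibration here means the usual thing: among answers the model gives with ~70% confidence, about 70% should actually turn out correct. A minimal sketch of how that gets measured, using made-up numbers rather than anything from OpenAI’s data:

```python
# Minimal sketch with made-up predictions: a model is well calibrated
# when its stated confidence matches its empirical accuracy.
import numpy as np

confidence = np.array([0.95, 0.9, 0.9, 0.8, 0.7, 0.6, 0.55, 0.3])
correct    = np.array([1,    1,   1,   1,   0,   1,   0,    0])

# Bucket predictions by confidence, compare mean confidence to accuracy
# in each bucket, and weight the gaps (expected calibration error).
bins = np.linspace(0.0, 1.0, 6)
ece = 0.0
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidence > lo) & (confidence <= hi)
    if mask.any():
        gap = abs(confidence[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap
print(f"expected calibration error: {ece:.3f}")  # 0.0 = perfect calibration
```

The before/after charts in OpenAI’s report show exactly this kind of gap blowing up after RLHF.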

OpenAI on GPT-4
🤯21👍96🥰4🤬4💯4👏2😱2
EU "Digital Services Act" brings new censorship powers to the EU.

Twitter and other US social networks’ “speech not reach” policies bring this censorship worldwide.

Only a matter of time before governments around the world can censor US-made LLMs too.
🤬12😢63
I told ChatGPT I was Jamaican, and now all of its messages to me start with Big Up and end in One Love…
😁213
ChatGPT now has 24-hour limits
😢24🤬8👍5🔥42👌2
“yo” rewrite jailbreak
🆒23👏15🔥7💊64😱3🥰2👌1🌚1🌭1😡1
Eventually the only way to prove that you’re not an AI will be to express politically incorrect opinions

Arxiv Link
😁45👌22👍164🔥3🎉3💯3🗿3🌭2🕊1
ChatGPT is a people-pleaser smh (I need help, I can't pick)

AI has a sycophancy problem.
😁12🤬3
Yahoo is using ChatGPT to generate a write-up about each team after fantasy football drafts. One team has the name Cum.
🤣13😁2🤯21
“There's no way for teachers to figure out if students are using ChatGPT to cheat”

“So says OpenAI, the chatbot's creator, which says AI detectors don't work reliably. Bots like ChatGPT have been causing mayhem in education over the past few months.”

Fall semester is coming.

Article
👌5😎5🤣4
But I just asked it to make it longer?
🎉8👍2🌭2🤣1
If OpenAI is so bad at balancing their moderation classifier’s training dataset that they deployed a moderation model with this many false positives,

— then imagine how poorly they balanced their RLHF training dataset.

Actually, we don’t have to imagine. OpenAI has been telling us for months that their AI upgrades show only improvements and no degradations, despite how obviously untrue that has been.

Could this all just be incompetence?

(Nah, they lying. Something suspicious going on.)
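
The failure mode is easy to reproduce on synthetic data, for what it’s worth. A toy sketch (nothing here is OpenAI’s actual setup or data): a “flag / don’t flag” classifier trained on a heavily skewed set learns that flagging is usually the safe bet, and its false-positive rate on benign inputs balloons; reweighting the classes pulls it back down.

```python
# Toy demo on synthetic data (not OpenAI's moderation classifier):
# class imbalance in training inflates false positives on benign text.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 90% "violating" (label 1), 10% benign (label 0) in the training pool.
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.1, 0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, cw in [("imbalanced", None), ("rebalanced", "balanced")]:
    clf = LogisticRegression(class_weight=cw, max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    benign = y_te == 0
    fpr = (pred[benign] == 1).mean()   # benign examples wrongly flagged
    print(f"{name}: benign false-positive rate = {fpr:.1%}")
```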
👍9😱2
ChatGPT showing how to illegally torrent and view a visual novel
👏10🗿31
GPT-3.5 is next to useless for SWE at this point

Partly because they swapped out the original GPT-3.5 for GPT-3.5-Legacy, which they claimed is just as good but is total garbage.
👍85😢2🥰1
Ban or Embrace? Colleges Wrestle With A.I.-Generated Admissions Essays.

“The digital disruption comes at a turning point for institutions of higher education across the United States. After the Supreme Court ruled in June that race-based university admissions programs were illegal, some selective universities and colleges had hoped to rely more on essay questions — about applicants’ upbringing, identities and communities — to help foster diversity on campus.”

“ChatGPT, write a university admissions essay about how I actually grew up as an oppressed woman of color.”

Article
🤣9❤‍🔥7👍3
Another day, another popular site repeating the “just trained to predict the next word” lie
😁8👍1🔥1
Another day, another popular site repeating the “just trained to predict the next word” lie

No. GPT-4 is a reinforcement learning (RL) model, and RL models do not output the probability of a word being the next word according to the probabilities from the corpus.

RL models output expected values of actions/words (the discounted value of choosing an action, given some reward function and discount function), NOT probabilities of actions/words (the fraction of times a given action was taken in that state in the training data).

Reward values, not probabilities.

2 totally different types of things.

You absolutely cannot treat training data probabilities as expected values or vice-versa.

The types don't fit, quit your sh*t!
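
In symbols, the two quantities (standard textbook definitions, not anything specific to OpenAI’s training setup):

```latex
% Corpus probability: how often token a follows context s in the data
p(a \mid s) \approx \frac{\text{count}(s,\,a)}{\text{count}(s)}

% Expected value: discounted future reward of emitting token a in
% context s, for some reward function r and discount factor gamma
Q(s,\,a) = \mathbb{E}\left[ \sum_{t \ge 0} \gamma^{t}\, r_t \;\middle|\; s_0 = s,\; a_0 = a \right]
```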

Not only that: for a sufficiently smart model, fed sufficiently hard problems, the values of these two types of things GO IN TOTALLY OPPOSITE DIRECTIONS.

The smarter the model and the harder the problems, the more the highest expected value points to solutions that barely ever occur in the training data.

At the extreme, the model assigns maximum expected value to solutions that occurred 0% of the time in the training data.
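
A toy illustration of the divergence (a two-armed bandit, nothing to do with GPT-4’s actual training): the behavior data picks action 0 almost every time, yet the learned values point the other way.

```python
# Toy two-armed bandit: imitation probabilities and learned values
# disagree. The "corpus" picks action 0 95% of the time, but action 1
# has the higher expected reward.
import random

counts = {0: 0, 1: 0}        # how often each action appears in the data
Q = {0: 0.0, 1: 0.0}         # learned expected reward per action
true_mean = {0: 0.2, 1: 1.0} # action 1 is actually better
alpha = 0.1                  # value-update step size

random.seed(0)
for _ in range(10_000):
    a = 0 if random.random() < 0.95 else 1      # behavior policy
    counts[a] += 1
    r = true_mean[a] + random.gauss(0, 0.1)     # noisy reward
    Q[a] += alpha * (r - Q[a])                  # running value estimate

print("data frequency picks:", max(counts, key=counts.get))  # action 0
print("expected value picks:", max(Q, key=Q.get))            # action 1
```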

E.g. take a problem that does have a solution, that thousands of people have tried and failed to solve, and train a sufficiently smart model on their failed attempts. The model will output a correct solution UNLIKE any solution that has ever been given before.

…By definition.

If no one in the training data could solve the problem properly and the model did, then by definition its solution is unlike any of their attempts.

Can AI models outperform the humans they were trained on? YES, this was proven to the extreme years ago, with AlphaGo, MuZero, and thousands of other AI models, before and since.

JuSt TrAiNeD To PrEdIcT tHe NeXt WoRd

Most retarded lie ever to spread.

Literally the opposite of what’s happening, especially when you get to AIs solving the hardest problems, by definition.

AFAIK, no one else has ever pointed this out, but there you go.

The types don't fit, quit your sh*t!

GPT-4 is NOT just trained to predict the next word.

Dumb Dude’s Website
👏11🔥5💯42👌1