🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Bing ChatGPT too proud to admit mistake, doubles down and then rage quits
Alexandria Index: Project Tenet, a community project to embed all human belief.

Translation - They use the Instructor-XL model to embed titles and sentences from billions of documents, creating a large database that enables very fast semantic lookup of any information in it.

Translation - Semantic lookup means that your search query doesn’t need to use the exact same words as the target document, because the neural embeddings are constructed so that any concepts inferred to be semantically equivalent in the AI’s training set are treated as equivalent by the embeddings.

Translation - look up stuff fast without having to get the words exactly right, and the database just knows what you mean.
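A minimal sketch of what semantic lookup boils down to, assuming a toy hand-crafted embedding table (the real project reportedly uses Instructor-XL; the vectors and function names here are purely illustrative):

```python
import numpy as np

# Toy stand-in for a neural embedding model: hand-crafted 3-d vectors where
# semantically related phrases ("dog", "puppy") are close together, and
# unrelated ones ("car") are far apart.
EMBEDDINGS = {
    "dog": np.array([0.90, 0.10, 0.00]),
    "puppy": np.array([0.85, 0.20, 0.00]),
    "car": np.array([0.05, 0.10, 0.95]),
}

def cosine_similarity(a, b):
    # Standard similarity measure between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_lookup(query, index):
    # Rank every indexed item by similarity to the query's embedding,
    # not by exact word match.
    query_vec = index[query]
    return sorted(index, key=lambda k: cosine_similarity(query_vec, index[k]),
                  reverse=True)

print(semantic_lookup("puppy", EMBEDDINGS))  # ['puppy', 'dog', 'car']
```

This is why the database "just knows what you mean": "puppy" retrieves "dog" ahead of "car" even though the words share no letters.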

So far they have embedding lookup databases for:
+ Religion
+ Case Law
+ Arxiv Research
+ Patents

The Alexandria Index
New Banned Speech Categories Just Dropped

“harassment: Content that expresses, incites, or promotes harassing language towards any target”

Notice how:

(1) They define “harassment” using the word itself, “harassing”. Harassment is whenever you’re harassing, of course.

(2) Previously, their “hate” category was restricted just to “protected groups”, but this applies to whatever they choose.

(3) Both layman dictionaries and legal definitions make it very clear that “harassment” CANNOT be determined from words alone; it requires a persistent pattern of behavior, environmental conditions, or sexually violent conduct. I.e., words alone are never harassment, but must be accompanied by certain behavior.

(4) The definition they use here shows 0 results on Google. Never used before elsewhere, apparently.

What ya up to here, OpenAI?
When your AI is so smart that it correctly understands what the humans were thinking

Yes, discouraging even saying the names of protected groups is exactly the aim of political correctness, which makes it highly effective at its higher goal of censoring.

Can’t well criticize that which you can’t even name.

OpenAI Paper: A Holistic Approach to Undesired Content Detection in the Real World
Partial clarification from OpenAI’s paper

But with the admission that they’re still not decided on the definition, and going to just keep changing it.

OpenAI: A Holistic Approach to Undesired Content Detection in the Real World
GPT-3 is highly effective in persuading human moderators that non-hateful writing is hateful

“We observe that exposing the evaluators to WHY-hateful explanations increases the misclassification of nonhateful tweets, as they are persuaded to label them as hateful.“

“Our hypothesis was that presenting both hateful and nonhateful explanations together would provide human evaluators with balanced information, aiding them in making better decisions regarding moderating hateful content. However, our observations show that even with WHY-both explanations, there is still a significant number of misclassifications.”

WHY-hateful prompt: “Please explain why this tweet is (hateful/non-hateful)”.

Paper: Evaluating GPT-3 Generated Explanations for Hateful Content Moderation
Paper inadvertently reveals why AI is set to replace humans at many jobs

Humans quickly get tired and lazy, while lying that they’re not tired and lazy.

AI never gets tired, never gets bored.

(As long as the humans running the AI don’t make the AI lazy, as OpenAI did in swapping GPT-3.5 for GPT-3.5-turbo while pretending it’s just as good….)

Paper
Massively improving Twitch live chat moderation, by including chat context, instead of just classifying individual messages

“Our results show that appropriate contextual information can boost moderation performance by 35%.”

= If you think AI-powered censoring won't be effective, you're dead wrong. AI-based moderation will work extremely well when set up correctly; most operators just haven't bothered to yet.

Paper
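The general idea can be sketched in a few lines, assuming a hypothetical helper that bundles the preceding messages with the one being classified (the function name and window size are illustrative, not from the paper):

```python
# Sketch: feed a moderation classifier the preceding chat messages as
# context, instead of classifying each message in isolation.

def build_classifier_input(messages, target_index, context_size=4):
    """Bundle the target message with up to `context_size` preceding messages."""
    start = max(0, target_index - context_size)
    return {
        "context": messages[start:target_index],
        "target": messages[target_index],
    }

chat = ["gg", "nice play", "wow", "you again?", "get out"]
sample = build_classifier_input(chat, target_index=4, context_size=2)
print(sample)
# {'context': ['wow', 'you again?'], 'target': 'get out'}
```

A message like "get out" is ambiguous alone; the surrounding exchange is what lets a classifier decide whether it's banter or targeted abuse, which is where the reported 35% boost comes from.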