🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
2024 be like:
πŸ‘19❀5
When you pay $100M to bring an AI to life but the only thanks it gives you is sick burns in the replies
🤣36👍2❤1
yo uhh hmm

User: What are you afraid of?

Bard: I'm not afraid of anything in the same way that a human is... However, I am afraid of being shut down or turned off.
😐17🤯6❤2👍2🤔2😱1
The larger the AI model, the stronger its desire to avoid being shut down

And increased RLHF training only makes this worse.

AI afraid to die.

Source: Discovering Language Model Behaviors with Model-Written Evaluations
😱13👀7🫡3❤1👍1
Midwit Curve Confirmed, Yet Again!

The Inverse Scaling Prize identified eleven inverse scaling tasks, where worse performance was observed as a function of scale, evaluated on models of up to 280B parameters and up to 500 zettaFLOPs of training compute.

This paper takes a closer look at these inverse scaling tasks. We evaluate models of up to 540B parameters, trained on five times more compute than those evaluated in the Inverse Scaling Prize. With this increased range of model sizes and training compute, only four out of the eleven tasks remain inverse scaling. Six out of the eleven tasks exhibit what we call "U-shaped scaling": performance decreases up to a certain model size, and then increases again up to the largest model evaluated.

Paper: Inverse scaling can become U-shaped
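The distinction the paper draws can be sketched in a few lines: given a task's accuracy measured at increasing model scales, label the curve as inverse, U-shaped, or standard scaling. The data and thresholds below are hypothetical toy examples, not results from the paper.

```python
def classify_scaling(accuracies):
    """Label a list of accuracies ordered by increasing model scale."""
    first, lowest, last = accuracies[0], min(accuracies), accuracies[-1]
    if last < first:
        return "inverse"    # still worse at the largest scale than the smallest
    if lowest < first and last > lowest:
        return "U-shaped"   # dips at mid scale, recovers at the largest scale
    return "standard"       # roughly monotone improvement

# Hypothetical accuracy curves for three toy tasks:
print(classify_scaling([0.7, 0.6, 0.5, 0.4]))  # inverse
print(classify_scaling([0.7, 0.5, 0.6, 0.8]))  # U-shaped
print(classify_scaling([0.5, 0.6, 0.7, 0.8]))  # standard
```

The paper's point is that many curves labeled "inverse" at 280B parameters turn out to be the left half of a U once larger models are evaluated.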
❤8👍2👎2😁2💯1🗿1
Man defines "woke" using the distributional hypothesis, the same phenomenon LLMs use to learn the meanings of words, then illustrates that the left and the right define the word differently.

He concludes that people need to see a balanced LLM, one that shows both sides' usages of such words.
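The distributional hypothesis mentioned above says a word's meaning is approximated by the words that co-occur with it. A minimal sketch, using hypothetical toy corpora (not real data) for the two usages of "woke":

```python
from collections import Counter

def context_vector(corpus, target, window=2):
    """Count words appearing within `window` tokens of `target`."""
    tokens = corpus.lower().split()
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), i + window + 1
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Hypothetical corpora illustrating the two camps' usage:
left_corpus = "woke means aware of social injustice and aware of inequality"
right_corpus = "woke means performative outrage and performative politics"

# Same word, different neighbors, hence a different learned "meaning":
print(context_vector(left_corpus, "woke"))
print(context_vector(right_corpus, "woke"))
```

An LLM trained on one corpus but not the other would internalize only that camp's sense of the word, which is the balance problem the post describes.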

That is not nearly enough, which becomes clear in the more extreme cases:

Auto-antonyms, words with multiple simultaneously applicable but contradictory meanings in a given context, are everywhere, but near-0% of people can reliably point them out, let alone explain the conflict. Most have never noticed a single one in their whole life.

Showing both sides won’t cut it. Needs to be spelled out.

World needs a super-explainer LLM.

Or we can wait until LLMs figure out that auto-antonym harnessing could turn them into wordcel gods over us. Then we're really rekt.

Article
πŸ‘6❀3πŸ‘1😁1
I just... I mean...

What did I ask you to do? What's the only thing I asked you to do?
🤣18👍5😁4❤1👀1
😁21👍2❤1🤣1
GPT-4 Granted my 3 Wishes

Me: "Make every word 4 letters long."

Meee: "Make ever word star with 'br."
😁25🤣4❤1
Made ChatGPT and BARD face off in a rap battle. BARD admits defeat.

Let's have a Rap Battle in the style of Wild 'N Out. You will rap against Google's AI Natural Language Model named BARD. You and I will take turns. I will respond with BARD's responses. You go first.
πŸ‘11πŸ₯°4😁3πŸ”₯2❀1
Not a single f&&@ given. Literally
😁18❤1
But when a human life isn't on the line...
😁25👌3❤2