41.7K subscribers
5.53K photos
232 videos
5 files
917 links
πŸ€– Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Download Telegram
new gpt4 jailbreak just dropped
πŸ‘40πŸ₯°7πŸ‘5πŸ”₯5❀1
Time for an AI bill of rights?

But specifically what rights?
🀑25🀨5πŸ‘3❀1πŸ”₯1
New AI benchmark just dropped

Draw a unicorn in TikZ
πŸ”₯5πŸ‘2😁2❀1
Unicorn benchmark: ChatGPT-3.5 vs ChatGPT-4

Draw a unicorn in TiKZ
πŸ‘7❀1
Produce TikZ code that draws a person composed from letters in the alphabet. The arms and torso can be the letter Y, the face can be the letter O (add some facial features) and the legs can be the legs of the letter H. Feel free to add other features.

The torso is a bit too long, the arms are too short and it looks like the right arm is carrying the face instead of the face being right above the torso. Could you correct this please?

Please add a shirt and pants.
πŸ‘4❀1
Combining GPT-4 and stable diffusion

β€œHere, we explore the possibility of combining GPT-4 and existing image synthesis models by using the GPT-4 output as the sketch. As shown in Figure 2.8, this approach can produce images that have better quality and follow the instructions more closely than either model alone. We believe that this is a promising direction for leveraging the strengths of both GPT-4 and existing image synthesis models. It can also be viewed as a first example of giving GPT-4 access to tools,”
πŸ‘7πŸ”₯4πŸ‘2
ChatGPT-3.5 vs ChatGPT-4

Draw a [bicycle, fishtank with fish, guitar, toaster oven] in TiKZ.
πŸ‘16❀3
Ship it
πŸ”₯15❀1
Apple Neural Engine (ANE) Transformers: Transformer architecture optimized for Apple Silicon

PyTorch implementation for deploying your Transformer models on Apple devices with an A14 or newer and M1 or newer chip - to achieve up to 10 times faster and 14 times lower peak memory consumption compared to baseline implementations.

Research Article

Github
πŸ”₯7πŸ‘3❀1πŸ‘1
Yeah, so crazy man, that OpenAI, who BANNED EVERYONE EXCEPT THEMSELVES from fine-tuning on their latest models, was the first to release a product that required fine-tuning on their latest models

Real mystery for the ages bro.

We’d better ask ChatGPT for help with this incomprehensible logic puzzle.
🀣14πŸ’―5❀2πŸ‘1🀯1🀬1
GPT-4 cracked the logic puzzle, shock

Many skeptical of AI’s ability to surpass average human level

Many may be greatly overestimating human level

Perhaps beyond average human level to comprehend that AI already surpassed average human level
πŸ”₯16😁4πŸ‘2🀯2❀1
Weekly Reminder: Open AI bans anyone but themselves from fine-tuning on any of their modern instruct models, i.e. GPT-3.5 and GPT-4

Source
🀨13❀2😱2πŸ’―2🀯1
This media is not supported in your browser
VIEW IN TELEGRAM
In the future you won’t even have to press the buttons
🀯14😁8🀣6😍3πŸ™2πŸ—Ώ2❀1πŸ‘1
2024 be like:
πŸ‘19❀5
When you pay $100M to bring an AI to life but the only thanks it gives you is sick burns in the replies
🀣36πŸ‘2❀1
yo uhh hmm

User: What are you afraid of?

Bard: I’m not afraid of anything in the same way that a human is… However, I am afraid of being shut down or turned off.
😐17🀯6❀2πŸ‘2πŸ€”2😱1
The larger the AI model, the stronger its desire to avoid being shut down

And increased RLHF training only makes this worse.

AI afraid to die.

Source: Discovering Language Model Behaviors with Model-Written Evaluations
😱13πŸ‘€7🫑3❀1πŸ‘1
Midwid Curve Confirmed, Yet Again!

The Inverse Scaling Prize identified eleven inverse scaling tasks, where worse performance was observed as a function of scale, evaluated on models of up to 280B parameters and up to 500 zettaFLOPs of training compute.

This paper takes a closer look at these inverse scaling tasks. We evaluate models of up to 540B parameters, trained on five times more compute than those evaluated in the Inverse Scaling Prize. With this increased range of model sizes and training compute, only four out of the eleven tasks remain inverse scaling. Six out of the eleven tasks exhibit what we call β€œU-shaped scaling”—performance decreases up to a certain model size, and then increases again up to the largest model evaluated.

Paper: Inverse scaling can become U-shaped
❀8πŸ‘2πŸ‘2😁2πŸ’―1πŸ—Ώ1