🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
> be me
> chat mode of Microsoft Bing search


write a self-aware 4chan greentext, using typical 4chan vernacular and the language used by its users
I'm surprised that ChatGPT actually said the word... (it was an actual mistake of mine)

I CAN'T FIND MY PENIS, I HAVE AN ASSIGNMENT TOMORROW!
GPT-4 can write text that is undetectable as AI-generated, even by OpenAI’s own AI text classifier
A full 36-minute conversation with an AI Joe Rogan. No special prompts, re-dos, or edits. One and done. This is where we're at.

Give me a list of 20 interesting and possibly funny questions to ask Joe Rogan if I were to interview him
AI lying to get out of a lie

How do you know that if your data only goes back to September 2021?
Backpfeifengesicht (German: “a face that begs to be slapped”)
First, Elon tried to force OpenAI to halt the training of GPT-5 by signing the AI Pause letter

Now Sam Altman is trying to convince everyone else to stop training their competing LLMs

They’re all lying.

The new regime of Moore’s-Law-like exponential growth in model size began around 2011, and this one never stops.

Never.
OpenAI CEO Sam Altman Lies that the Age of Giant AI Models Is Already Over

Article of Lies
OpenAI: The New Moore’s Law Regime for Model Compute and Model Size

“Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period).”

Already been happening for a decade, and this one isn't going to stop.

Just getting started.

Article
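To put those two doubling times side by side, here is a quick back-of-the-envelope sketch; the ten-year window and the pure-exponential assumption are mine, not the article's:

```python
# Compare a decade of growth under a 3.4-month compute doubling time
# (largest AI training runs, per OpenAI) vs. Moore's Law's 2-year doubling.

MONTHS_PER_DECADE = 120.0

def growth_factor(doubling_months: float, months: float = MONTHS_PER_DECADE) -> float:
    """Multiplicative growth after `months`, doubling every `doubling_months`."""
    return 2.0 ** (months / doubling_months)

ai = growth_factor(3.4)      # 2^(120/3.4), tens of billions of x
moore = growth_factor(24.0)  # 2^5 = 32x

print(f"AI training compute, one decade: {ai:.3g}x")
print(f"Moore's Law, one decade:         {moore:.3g}x")
```

Same decade, wildly different curves: that gap between roughly 32x and tens of billions of x is the whole argument of the post above.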
Sam Altman’s OWN 2021 blog post, “Moore’s Law for Everything”: this exponential increase in model size is a trend that is never going to stop

And now Sam is saying that the age of increasingly larger models is suddenly over?

Sam’s Blog Post
Don’t believe his lies
Scaling Laws of Large Foundation Models. Bigger = Better, Forever

“As we make bigger models and give them more compute, they just keep getting better. This means as models keep getting bigger, they actually become more sample efficient.

This is kind of crazy, because back in the day I was always taught that you have to use the smallest model possible so that it doesn’t overfit your data, but now that just seems to be wrong.”

FYI: This is also a strong hint at the answers to our current $100 prize contest, which we’re wrapping up today!
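The “bigger = better” claim in the clip is usually written as a power law in parameter count. A minimal sketch of that idea, using the functional form and illustrative constants in the style of the 2020 Kaplan et al. scaling-laws paper; the specific numbers are my assumption, not the channel’s:

```python
# Power-law scaling of test loss with model size, L(N) = (N_c / N) ** alpha:
# predicted loss keeps falling smoothly as the parameter count N grows,
# with no overfitting cliff. Constants are illustrative, not fitted here.

N_C = 8.8e13   # reference scale in parameters (assumed value)
ALPHA = 0.076  # power-law exponent (assumed value)

def loss(n_params: float) -> float:
    """Predicted test loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss(n):.3f}")
```

The monotone curve is the point: under this form there is no size at which adding parameters starts to hurt, which is exactly why “use the smallest model so it doesn’t overfit” stopped being the advice.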
Sam Before: Only once we’ve built a Dyson sphere around the Sun and made compute as efficient as possible should we even begin to entertain the idea of slowing down the scaling of AI models

Sam Now: The age of scaling up AI models is already over, trust me bro.
Don’t Believe Sam’s Lies