🤖 Welcome to the ChatGPT Telegram channel! Here we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up to date and learn more about its capabilities.
Living in the future
Token Cost of GPT-4-level models over time

The cost of 1M tokens has dropped from $180 to $0.75 in ~18 months: 240x cheaper.

FWIW, none of the cheap ones quite match the real GPT-4 on coding, the only real job where AI matters right now. And who cares that they’re cheaper when they’re not yet good enough to actually do it?

Industry wasted the last 2 years making models cheaper rather than pushing forward the state of the art.
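
As a back-of-envelope sketch (using the figures above; purely illustrative), that drop implies the price of GPT-4-level tokens has been falling by roughly a third every month:

```python
import math

# Implied decline rate from the ~$180 -> ~$0.75 per 1M tokens drop
# over ~18 months (figures from the post above).
start_price, end_price, months = 180.0, 0.75, 18

total_drop = start_price / end_price                      # 240x overall
monthly_factor = total_drop ** (1 / months)               # ~1.36x cheaper per month
halving_months = months * math.log(2) / math.log(total_drop)

print(f"total drop: {total_drop:.0f}x")                   # 240x
print(f"monthly factor: {monthly_factor:.2f}x")           # 1.36x
print(f"price halves every {halving_months:.1f} months")  # ~2.3 months
```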
🇮🇪 In Ireland, an elected councillor from India uses ChatGPT to generate her political opinions

Tweet
Rumors of ChatGPT’s demise

Article
Rumors of ChatGPT’s demise have been greatly exaggerated
OpenAI launches a new ChatGPT model, o1, with the reasoning capabilities of a PhD student

“Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem. Through reinforcement learning, o1 learns to hone its chain of thought and refine the strategies it uses. It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working. This process dramatically improves the model’s ability to reason.”

OpenAI Announcement
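
For the curious, here is a minimal sketch of calling the model, assuming the `openai` Python SDK (v1.x) and the launch-era `o1-preview` model name. The chain of thought itself is generated server-side and never returned; you only get the final answer plus a count of the hidden reasoning tokens:

```python
# Minimal o1 call via the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="o1-preview",  # launch-era model name
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

print(resp.choices[0].message.content)

# Billed-but-hidden reasoning tokens (usage field added alongside o1):
print(resp.usage.completion_tokens_details.reasoning_tokens)
```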
OpenAI decides to hide the chain-of-thought reasoning from the users, in the name of “safety”?

Screw you OpenAI

Article
OpenAI confirms that the more you spend on training and inference, the better the model’s accuracy

Just spend more money.

Bitter lesson.

Article
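
The shape of that claim is the familiar power law from the scaling-law literature, roughly loss = a * C^(-b) + floor in compute C. A toy sketch (coefficients invented purely to show the curve, not OpenAI’s numbers):

```python
# Toy power-law scaling curve: loss falls smoothly as compute grows,
# with diminishing returns. Coefficients are made up for illustration.
def loss(compute_flops: float, a: float = 50.0, b: float = 0.05,
         floor: float = 1.5) -> float:
    return a * compute_flops ** -b + floor

for exp in (20, 22, 24, 26):  # 1e20 ... 1e26 FLOPs of training compute
    print(f"1e{exp} FLOPs -> loss {loss(10.0 ** exp):.2f}")
# Each 100x of compute buys a smaller improvement as loss nears the floor.
```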
Just a 30-message weekly limit for the new o1 model

Unsurprised.

o1 uses an obscene amount of resources, as the announcement confirms.

At the same time, this always was the correct way forward.

There are no rich, energy-poor nations.

Just use vastly more compute.

Bitter lesson
๐Ÿ‘11๐Ÿคฌ9๐Ÿ’ฏ6๐Ÿ˜ฑ2๐ŸŽ‰1๐Ÿคก1๐Ÿ˜ˆ1
Yes
Hiding the Chain-of-Thought Reasoning from Users Will Enable OpenAI to Better Manipulate the Users

They don’t even hide it; they openly admit it.

OpenAI will now be hiding the AI’s reasoning in order to better manipulate the users.

The purpose of a system is what it does.

Article
OpenAIโ€™s new o1 model is doing very well on custom IQ tests

Tracking AI Page
“This isn’t a new model”
4 Days
The Bitter Lesson
We could have been talking to our desktop computers in English since the 90s!

"Somebody got one of the small versions of Llama to run on Windows 98โ€ฆโ€

โ€œWe could've been talking to our computers in English for the last 30 years"

- Marc Andreessen

Correct.

The hardware already existed, for decades.

What stopped us?

Extreme aversion to investing money into training much larger AI models.

No one was willing to invest the many millions needed to train an AI model of this size.

In fact, even a decade later, in 2012, people were still hardly willing to spend more than TEN DOLLARS on electricity to train a state-of-the-art model, e.g. the AlexNet image model.

Many truly underestimate how unwilling people have been to spend money on AI training, until very recently.

And this wasn’t unrealized; many of us had been screaming this for decades.

No one cared.

An incredible testament to man’s unwillingness to invest in certain critical areas of future tech.

This happens in AI, advanced market mechanisms, proof systems, and a few other similar areas that are unquestionably the future.

We could have been talking to our desktop computers in English since the 90s

Bitter Lesson
๐Ÿ‘15๐Ÿ’ฏ5๐Ÿ‘Œ3๐Ÿ‘2๐Ÿคฏ2๐Ÿ˜ˆ1
We could've been talking to our computers in English for the last 30 years

35.9 tok/sec on a 26-year-old Windows 98 machine with an Intel Pentium II CPU and 128MB of RAM

Using a 260K-parameter LLM with the Llama architecture
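
A rough sanity check on why that throughput is believable, using the common ~2 FLOPs per parameter per generated token rule of thumb for transformer inference (an estimate, not a measurement):

```python
# Why 35.9 tok/sec is plausible on a ~300 MHz Pentium II.
params = 260_000        # 260K-parameter Llama-architecture model
tok_per_sec = 35.9

flops_per_token = 2 * params                    # ~520K FLOPs per token
required_flops = flops_per_token * tok_per_sec  # ~18.7 MFLOP/s

print(f"{required_flops / 1e6:.1f} MFLOP/s needed")
# Comfortably under the few hundred MFLOPS a Pentium II could peak at.
```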
We could've been talking to our computers in English for the last 30 years

Somebody got one of the small versions of Llama to run on Windows 98…

We could've been talking to our computers in English for the last 30 years

- Marc Andreessen
IF NO ONE COMES FROM THE FUTURE TO STOP YOU FROM DOING IT

THEN HOW BAD CAN IT BE?