🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Minsky’s AI definition = the Bitter Lesson, i.e. AI = Money

Anyone ever notice that Marvin Minsky’s 1958 definition of AI, “the ability to solve hard problems,” and the first “Bitter Lesson” end up being equivalent?

(At least when applying the most appropriate modern math definitions of the terms.)

As far as I can see, no one ever has.

Ok here you go,

When Minsky says “hard problems”, he means it in the mathematical, P ≠ NP kind of sense.

But it is more appropriate here, rather than using the usual “asymptotic hardness” sense, to use the “concrete hardness” sense, which fits problems in reality better: the hardness of a problem in some particular compute model, or set of compute models.

Well, which compute models are best to choose here? In practice, when talking about concrete hardness, mathematicians aim to choose a compute model whose notion of compute aligns with the financial cost of doing that compute, to keep things concretely grounded in what people actually think of as “hard”: roughly, “financial hardness”.
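
A rough formalization of that notion, as a sketch (the notation here is mine, not taken from the linked sources):

% Concrete hardness of a problem P in a compute model M: the cost of
% the cheapest algorithm in M that solves P.
\[
  H_M(P) = \min_{A \in \mathcal{A}_M(P)} \mathrm{Cost}_M(A)
\]
% where \mathcal{A}_M(P) is the set of algorithms in M that solve P, and
% Cost_M is chosen so that it tracks the dollar cost of running A in M.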

= i.e. Minsky’s definition of AI ends up being that AI must be able to solve problems whose cheapest possible solution is still enormously expensive.

And the first Bitter Lesson is that there is no shortcut around spending enormous amounts of money on training resources in order to really advance AI.

= Minsky’s definition of AI and the first Bitter Lesson end up being equivalent, just approached from opposite directions.

I.e. AI = Spending Big Money, by Definition

QED

The Bitter Lesson, 2019

Concrete Hardness

Minsky’s 1958 definition of AI
Interesting proof claim you’ve got there bro

A simulation of a hurricane may not be a real hurricane, but a simulation of a chess game is a real chess game.

Link
Hi
Thanks ChatGPT

It’s a snek
“ChatGPT Addiction”
ChatGPT corrects itself in the middle of generating a message.

Write me an integral that solves to 69
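
For the record, one integral that does evaluate to 69 (my own example; the one in the screenshot may differ):

\[
  \int_0^{\sqrt{138}} x \, dx = \frac{138}{2} = 69
\]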
Didn't know it would have so many questions for me 😳
Trying ERNIE, China's ChatGPT, created to push Chinese values

* Seems to copy-paste hard-coded official sources whenever certain topics are mentioned, even if those don’t really answer the question

* Mediocre language abilities, even in Chinese, for now

* Mediocre reasoning abilities, for now

* Hilarious image drawing results

Article
Stanford DSPy: The framework for programming with foundation models

“DSPy introduces an automatic compiler that teaches LMs how to conduct the declarative steps in your program. Specifically, the DSPy compiler will internally trace your program and then craft high-quality prompts for large LMs (or train automatic finetunes for small LMs) to teach them the steps of your task.”

Github
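
For a feel of the programming model, here is a minimal sketch based on the repo’s README of that era (the model name, the toy trainset, and the metric are placeholders of mine; the exact API may have changed since):

# Minimal DSPy sketch: declare *what* the LM should do as a signature,
# and let the DSPy compiler work out the prompts.
import dspy
from dspy.teleprompt import BootstrapFewShot

# Configure the LM backend (assumes an OpenAI API key in the environment).
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

# A declarative step: "given a question, produce an answer", with the
# chain-of-thought prompt handled by DSPy rather than written by hand.
qa = dspy.ChainOfThought("question -> answer")

# A couple of labeled examples for the compiler to bootstrap from.
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
]

# The "compiler": traces the program on the trainset and crafts
# high-quality few-shot prompts for the underlying LM.
compiled_qa = BootstrapFewShot(
    metric=lambda gold, pred, trace=None: gold.answer.lower() in pred.answer.lower()
).compile(qa, trainset=trainset)

print(compiled_qa(question="What is the tallest mountain on Earth?").answer)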
Coming soon to LLMs

And everything else.
Russia Enters the AI LLM Foundation Model Race

“Putin has ordered the government to “implement measures” to support AI research, including by “providing for an annual allocation from the federal budget”.

“This research would include “optimising machine learning algorithms” as well as developing “large language models” – such as the one developed by OpenAI.”

“Putin has repeatedly called for Russia to achieve what he calls “technological sovereignty”, as Western sanctions over the conflict in Ukraine block Moscow from getting computer parts such as semiconductors.”

Article
Who will win: Smartness of humans vs smartness of machines
Running a 180 billion parameter LLM on a single Apple M2 Ultra

For comparison, GPT-4 is reportedly ~1.5 trillion parameters.
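
Back-of-envelope check of why that can fit on one machine (my arithmetic; assumes 4-bit quantized weights and the 192 GB unified-memory M2 Ultra configuration, neither of which is stated in the video):

# Rough memory math for serving a 180B-parameter model on one box.
params = 180e9
fp16_gb = params * 2.0 / 1e9  # 2 bytes/param  -> ~360 GB: does not fit
q4_gb = params * 0.5 / 1e9    # ~4-bit weights -> ~90 GB: fits in 192 GB

print(f"fp16: {fp16_gb:.0f} GB, 4-bit: {q4_gb:.0f} GB")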