Minsky’s AI definition = the Bitter Lesson, i.e. AI = Money
Anyone ever notice that Marvin Minsky’s 1958 definition of AI, "the ability to solve hard problems", and the 1st “Bitter Lesson” end up being equivalent?
(At least when applying the most appropriate modern math definitions of the terms.)
As far as I can see, no one ever has.
Ok here you go,
When Minsky says “hard problems”, he means it in the mathematical, P!=NP kind of sense.
But here it makes more sense, rather than using the usual “asymptotic hardness” notion, to use the “concrete hardness” notion, which fits problems in reality better: the hardness of a problem in some particular compute model, or set of compute models.
Well, which compute models are best to choose here? In practice, when talking about concrete hardness, mathematicians aim to pick a compute model whose notion of compute aligns with the financial cost of doing that compute, so that things stay grounded in what people actually mean by “hard”, i.e. roughly “financial hardness”.
= i.e. Minsky’s definition of AI ends up being that AI must be able to solve problems whose cheapest possible solution is still enormously expensive.
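One way to make that precise (my sketch of the standard concrete-hardness idea, not Minsky’s own notation): for a problem P and a compute model M, define

H_M(P) = \min_{A \,\text{solves}\, P} \mathrm{cost}_M(A)

Pick M so that \mathrm{cost}_M tracks dollars spent, and “the ability to solve hard problems” becomes “the ability to solve problems P for which even this minimum, the cheapest possible solution, is enormous”.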
And the 1st Bitter Lesson is that there is no shortcut around spending enormous amounts of money on training resources in order to really advance AI.
= Minsky’s definition of AI and the 1st Bitter Lesson end up being equivalent, from opposite directions.
I.e. AI = Spending Big Money, by Definition
QED
The Bitter Lesson, 2019
Concrete Hardness
Minsky’s 1958 definition of AI
Trying ERNIE, China's ChatGPT, created to push Chinese values
* Seems to copy-paste hard-coded official sources whenever certain topics are mentioned, even if those don’t really answer the question
* Mediocre language abilities, even in Chinese, for now
* Mediocre reasoning abilities, for now
* Hilarious image-generation results
Article
Stanford DSPy: The framework for programming with foundation models
“DSPy introduces an automatic compiler that teaches LMs how to conduct the declarative steps in your program. Specifically, the DSPy compiler will internally trace your program and then craft high-quality prompts for large LMs (or train automatic finetunes for small LMs) to teach them the steps of your task.”
Github
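To give a concrete feel for what that quote means, here is a minimal sketch of a DSPy program and its compilation step, assuming the API names from the repo’s README at the time (dspy.OpenAI, dspy.ChainOfThought, BootstrapFewShot); the model name and the toy example are illustrative, and newer releases configure LMs differently:

import dspy
from dspy.teleprompt import BootstrapFewShot

# Assumed LM setup; newer DSPy versions use a different configuration API.
lm = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=lm)

# One declarative step: "given a question, produce an answer", run with chain-of-thought.
qa = dspy.ChainOfThought("question -> answer")

# A tiny labeled train set for the compiler to trace the program on.
trainset = [
    dspy.Example(question="What year was the transistor invented?", answer="1947").with_inputs("question"),
]

# Metric used to judge traced runs; (gold, prediction, trace) is the usual DSPy convention.
def exact_match(gold, pred, trace=None):
    return gold.answer.lower() in pred.answer.lower()

# The "compiler": bootstraps demonstrations/prompts for each step of the program.
compiled_qa = BootstrapFewShot(metric=exact_match).compile(qa, trainset=trainset)

print(compiled_qa(question="Who proposed the Bitter Lesson?").answer)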
Russia Enters the AI LLM Foundation Model Race
“Putin has ordered the government to “implement measures” to support AI research, including by “providing for an annual allocation from the federal budget”.”
“This research would include “optimising machine learning algorithms” as well as developing “large language models” – such as the one developed by OpenAI.”
“Putin has repeatedly called for Russia to achieve what he calls “technological sovereignty”, as Western sanctions over the conflict in Ukraine block Moscow from getting computer parts such as semiconductors.”
Article
Running a 180 billion parameter LLM on a single Apple M2 Ultra
GPT-4 is ~1.5 trillion parameters.
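For a sense of why this fits on one machine at all, a rough back-of-envelope (my assumption: the demo runs ~4-bit quantized weights, as llama.cpp-style on-device demos usually do; the exact setup isn’t stated here):

params = 180e9            # 180 billion parameters
bytes_per_param = 0.5     # ~4 bits per weight after quantization
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # ~90 GB, within the M2 Ultra's up-to-192 GB of unified memory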