OpenAI’s biggest challenge? Figuring out what AI will even be used for, and how.
Not just AI assistants.
Clear, prescient visions of the future are exceedingly hard to come by in this area.
Wired Article
Bro what, are you retarded?
Weird hallmark of morons trying to sound smart: scolding people for using old, widely accepted general terms in a general way, and insisting that those general terms can now only be used to mean some specific, narrower thing.
Whether it’s crypto dudes saying that you’re not allowed to use “token” as a general term for all types of cryptocurrencies,
Or AI dudes trying to tell you that you’re not allowed to use “AI” for anything but “complex data analysis” (wtf?)
Scolding people for using an obviously general term generally.
There are circumstances where more specific words are necessary, but that’s not the same as a blanket ban on using general words at all. BSers don’t get the difference.
Hallmark of bullshitters.
Minsky’s AI definition = the Bitter Lesson, i.e. AI = Money
Anyone ever notice that Marvin Minsky’s 1958 definition of AI, “the ability to solve hard problems,” and the 1st “Bitter Lesson” end up being equivalent?
(At least when applying the most appropriate modern math definitions of the terms.)
As far as I can see, no one ever has.
Ok here you go,
When Minsky says “hard problems”, he means it in the mathematical, P!=NP kind of sense.
But rather than the usual “asymptotic hardness” sense, it’s more appropriate here to use the “concrete hardness” sense, which fits real-world problems better: the hardness of a problem in some particular compute model, or set of compute models.
Well, which compute models are best to choose here? In practice, when talking about concrete hardness, mathematicians aim to pick a compute model whose notion of computation aligns with the financial cost of doing that computation, so that hardness stays grounded in what people actually think of as “hard”: roughly, “financial hardness”.
= i.e. Minsky’s definition of AI ends up being that AI must be able to solve problems whose cheapest possible solution is still enormously expensive.
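(A rough formalization sketch of that step, in my own notation rather than anything from the linked Concrete Hardness article: concrete hardness is the cost of the cheapest algorithm that solves the problem in a given compute model, and “financial hardness” is the same thing with cost measured in dollars.)

```latex
% Sketch only, my notation: H_M(P) is the concrete hardness of problem P in
% compute model M, i.e. the cost of the cheapest algorithm in M that solves P.
% H_$(P) is the same with cost measured in dollars ("financial hardness").
\[
  H_M(P) \;=\; \min_{\substack{A \in M \\ A \text{ solves } P}} \mathrm{cost}_M(A),
  \qquad
  H_{\$}(P) \;=\; \min_{A \text{ solves } P} \mathrm{dollars}(A)
\]
% "Minsky-hard", on this reading: H_$(P) is enormous, i.e. even the cheapest
% possible solution is very expensive.
```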
And the 1st Bitter Lesson is that there is no shortcut around spending enormous amounts of money on training resources in order to really advance AI.
= Minsky’s definition of AI and the 1st Bitter Lesson end up being equivalent, from opposite directions.
I.e. AI = Spending Big Money, by Definition
QED
The Bitter Lesson, 2019
Concrete Hardness
Minsky’s 1958 definition of AI
Trying ERNIE, China's ChatGPT, created to push Chinese values
* Seems to copy-paste hard-coded official sources whenever certain topics are mentioned, even if those don’t really answer the question
* Mediocre language abilities, even in Chinese, for now
* Mediocre reasoning abilities, for now
* Hilarious image drawing results
Article
Stanford DSPy: The framework for programming with foundation models
“DSPy introduces an automatic compiler that teaches LMs how to conduct the declarative steps in your program. Specifically, the DSPy compiler will internally trace your program and then craft high-quality prompts for large LMs (or train automatic finetunes for small LMs) to teach them the steps of your task.”
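For a sense of what that looks like in code, here is a minimal sketch along the lines of the examples in the repo (the model choice, metric, and tiny trainset are placeholders of mine, and exact class names may differ between DSPy versions):

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Placeholder LM backend; any LM supported by DSPy should work here.
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

# Declare *what* a step should do (a signature), not the prompt text itself.
class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

# A module turns the signature into an LM call; ChainOfThought adds a
# reasoning step before producing the answer.
qa = dspy.ChainOfThought(BasicQA)

# The "compiler" part: a teleprompter traces the program on a small trainset
# and bootstraps few-shot demonstrations into the module's prompts.
def exact_match(example, pred, trace=None):
    return example.answer.lower() == pred.answer.lower()

trainset = [
    dspy.Example(question="Who wrote 'The Bitter Lesson'?",
                 answer="Rich Sutton").with_inputs("question"),
]
compiled_qa = BootstrapFewShot(metric=exact_match).compile(qa, trainset=trainset)

print(compiled_qa(question="Who coined the term 'artificial intelligence'?").answer)
```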
Github