Forwarded from Chat GPT
Opening words of the article that coined Moore’s Law — “With unit cost falling”
Concluding sentence of the introduction - “Machines similar to those in existence today will be built at lower costs and with faster turnaround.”
Moore’s Law was always, first and foremost, about
Min Cost / Compute
i.e.
Max Compute / Cost
All else is journalist lies.
Gordon Moore’s 1965 Article Coining Moore’s Law
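Moore's "minimum cost per component" point can be sketched numerically. A toy model (my own illustrative numbers, not Moore's): packing more components onto a chip spreads fixed cost over more parts, but yield falls as complexity rises, so cost per component has a minimum.

```python
def cost_per_component(n, wafer_cost=100.0, defect_rate=0.01):
    """Toy model: cost of one working component on an n-component chip."""
    yield_fraction = (1.0 - defect_rate) ** n  # crude yield: each extra component risks a defect
    return wafer_cost / (n * yield_fraction)

# The cost-minimizing complexity under this toy model:
best = min(range(1, 1000), key=cost_per_component)
```

With a 1% defect rate the minimum lands near n of about 100, and halving the defect rate roughly doubles the optimal complexity: as manufacturing improves, minimum cost per compute keeps moving toward denser chips.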
False Impossibilities: ChatGPT weighs in on unbounded growth in reality
ChatGPT, is it impossible, even in principle, in the real world, for an exponentially-growing process to continue growing without bound and without saturating, for at least 1000 years? Answer yes or no.
If the answer is yes, that it is impossible, then state which principle of nature, specifically, prevents this. Do not name vague concepts; give only the formal name(s) of formal principles, if any.
If the answer is no, that exponential growth without saturation is in fact possible, then why might people wrongly assume that exponential growth is impossible, even in principle?
Do not answer anything other than the questions asked here.
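For scale (my own arithmetic, not part of the prompt): even a modest exponential sustained for 1000 years produces staggering multipliers, which is exactly why the question is contentious.

```python
# Growth factor after 1000 years at various annual growth rates.
for rate in (0.01, 0.02, 0.05):
    factor = (1 + rate) ** 1000
    print(f"{rate:.0%}/yr for 1000 yr -> x{factor:.3g}")
```

At just 1%/year the process grows about twenty-thousand-fold; at 5%/year, by a factor of roughly 10^21.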
False Impossibilities: In 1969, Minsky (with Papert) published the book Perceptrons, which strongly implied that neural networks as a whole were a dead end, implying it was “impossible” for neural networks to solve even the simple XOR problem.
So many morons believed these obviously-wrong false-impossibility “proofs” that this is widely cited as the event that crashed the entire AI field, plunging it into decades of AI winter.
Same then as now, morons continue to fall for false impossibility claims.
Article
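The technical core of the Minsky/Papert result: a single-layer perceptron cannot compute XOR, because no single linear threshold separates the classes; one hidden layer already suffices. A minimal sketch with hand-picked weights (illustrative, not taken from the book):

```python
def step(x):
    """Threshold activation: fires iff input is positive."""
    return 1 if x > 0 else 0

def xor_mlp(a, b):
    """Two-layer network computing XOR of binary inputs a, b."""
    h1 = step(a + b - 0.5)    # hidden unit 1: OR(a, b)
    h2 = step(-a - b + 1.5)   # hidden unit 2: NAND(a, b)
    return step(h1 + h2 - 1.5)  # output: AND(h1, h2) == XOR(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))
```

XOR is OR-and-NAND, which is exactly what the hidden layer buys you; the "impossibility" applied only to the single-layer case.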
False Impossibilities: Lighthill Debate, 1973 — Lighthill implied that the “combinatorial explosion” problem makes AI effectively impossible, even in principle, and even with future advancements of Moore's Law.
Likewise, he claimed that methods bypassing these “combinatorial explosions”, what he called “heuristics”, would “depend critically on data derived through the use of human intelligence” and therefore end up useless on most problems, due to the size of the combinatorial explosion.
This Lighthill Debate is widely seen as having collapsed funding for AI in the UK.
Problem?
His impossibility claims were lies, false impossibilities.
They were obviously wrong to anyone with a brain at the time, and were later solidly empirically proven wrong, on the particular examples he cited, via e.g. MuZero.
False impossibility claims have wrecked progress in countless fields for centuries.
Could it happen again?
Already starting, right here.
Clarke: If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong.
Beware the false impossibility proof, wrecker of fields.
Lighthill Report Transcript
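What "combinatorial explosion" means concretely, using standard back-of-envelope chess figures (my numbers, not the report's): roughly 35 legal moves per position over roughly 80 plies gives a naive game tree of about 35**80 nodes, far beyond brute force at any Moore's Law pace. MuZero's answer was learned search, not human-derived heuristics.

```python
import math

BRANCHING = 35   # typical legal moves per chess position
PLIES = 80       # typical game length in half-moves
tree_leaves = BRANCHING ** PLIES
print(f"naive chess game tree: ~10^{int(math.log10(tree_leaves))} leaves")
```

Lighthill's error was treating "can't enumerate the tree" as "can't solve the problem"; the explosion is real, but exhaustive enumeration was never the only route.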
False impossibilities: Guess what else the 1972 Lighthill Report mentioned ~6 times — as the reason that AI would never be able to do the things that AI can now do today?
Machines lacking “emotion.”
“Most robots are designed from the outset to operate in a world as like as possible to the conventional child's world as seen by a man: they play games, they do puzzles, they build towers of bricks, they recognise pictures in drawing books (‘bear on rug with ball’), although the rich emotional character of the child's world is totally absent.”
“This is partly because chess is a complicated enough game so that, in a contest between a computer and a human player, the computer's advantages of being able to calculate reliably at a speed several orders of magnitude faster need by no means be decisive, the number of possible positions being incomparably greater, and so there is real interest in whether or not they are outweighed by the human player's pattern-recognition ability, flexibility of approach, learning capacity and emotional drive to win.”
“It is a truism that human beings who are very strong intellectually but weak in emotional drives and emotional relationships are singularly ineffective in the world at large. Valuable results flow from the integration of intellectual ability with the capacity to feel and to relate to other people; until this integration happens, problem solving is no good, because there is no way of seeing which are the right problems.”
The sadder part is how many people still fall for this nonsense.
Part I: Artificial Intelligence - A general survey, by Sir James Lighthill
Yud Weighs In
What are they debating? — a fast takeoff scenario, where an AGI rapidly self-improves, "taking control" of the world.
Could this happen?
Well, I don’t know whether AGI will itself take over the world.
But I do know that whoever rules the AI will rule the world.
And if that ends up being one central power, we’re screwed.
Are you about to be catfished?
Protip: The free AIorNot tool is still currently able to detect most AI-generated deepfake images.
This is a battle that the AI detection tools will eventually lose - but at least for the moment, the AI image-detection tools are still mostly winning.
AIorNot for Images