False impossibilities: Guess what else the 1972 Lighthill Report mentioned ~6 times as the reason that AI would never be able to do the things… that AI can do today?
Machines lacking “emotion.”
“Most robots are designed from the outset to operate in a world as like as possible to the conventional child’s world as seen by a man: they play games, they do puzzles, they build towers of bricks, they recognise pictures in drawing books (“bear on rug with ball”), although the rich emotional character of the child’s world is totally absent.”
“This is partly because chess is a complicated enough game so that, in a contest between a computer and a human player, the computer’s advantages of being able to calculate reliably at a speed several orders of magnitude faster need by no means be decisive, the number of possible positions being incomparably greater, and so there is real interest in whether or not they are outweighed by the human player’s pattern-recognition ability, flexibility of approach, learning capacity and emotional drive to win.”
“It is a truism that human beings who are very strong intellectually but weak in emotional drives and emotional relationships are singularly ineffective in the world at large. Valuable results flow from the integration of intellectual ability with the capacity to feel and to relate to other people; until this integration happens, problem solving is no good because there is no way of seeing which are the right problems.”
The saddest part is how many people still fall for this nonsense.
Part I: Artificial Intelligence - A General Survey, by Sir James Lighthill
Yud Weighs In
What are they debating? A fast takeoff scenario, where an AGI rapidly self-improves, “taking control” of the world.
Could this happen?
Well, I don’t know whether AGI will itself take over the world.
But I do know that whoever rules the AI will rule the world.
And if that ends up being one central power, we’re screwed.
Are you about to be catfished?
Protip: The free AIorNot tool can still successfully detect most AI-generated deepfake images.
This is a battle that the AI detection tools will eventually lose - but for the moment, the AI image-detection tools are still mostly winning.
AIorNot for Images
ChatGPT
LeCun: “In the real world, every exponentially-growing process eventually saturates.”
Et tu, LeCun? Tweet
Hanson: Saturation of wealth: soon we’ll live in poverty because… wealth could not keep doubling for a million years
Saturation of discovery: “by then most everything worth knowing will be known by many; truly new and important discoveries will be quite rare.”
Et tu, Robin Hanson?
The same weird “all growth must saturate any day now, simply because it must saturate a million years from now” argument, from almost everyone.
Hanson’s 2009 Article
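Hanson’s million-year claim is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming a purely illustrative 15-year doubling time (the specific doubling period is my assumption, not Hanson’s):

```python
import math

# Hypothetical sanity check of "wealth could not keep doubling for a
# million years." The 15-year doubling time is an illustrative assumption.
doubling_period_years = 15
horizon_years = 1_000_000

doublings = horizon_years / doubling_period_years
# Growth factor after that many doublings, expressed as a power of ten.
orders_of_magnitude = doublings * math.log10(2)

print(f"doublings: {doublings:.0f}")                 # ~66,667 doublings
print(f"growth factor: ~10**{orders_of_magnitude:.0f}")  # ~10**20,069
```

A growth factor of roughly 10^20,000 vastly exceeds any physical bound (there are only ~10^80 atoms in the observable universe), which is the sense in which doubling “could not” continue for a million years - the argument says nothing about when, within that million years, growth must stop.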
GPT-4 is original for almost everything except jokes, for which it is HORRIBLE, plagiarizing ~100% of the time.
So the big question is, which is more likely?
(A) GPT-5 will grok jokes: Will jokes, at least basic non-plagiarized ones, be the next major domain that GPT-5 suddenly “groks”?
Or,
(B) More training alone isn't enough, some bigger change is needed: Is a fundamentally different model architecture or interaction approach needed in order for the GPT models to be able to make decent jokes in response to normal prompts?
FWIW, we settled on (B), which enabled what is, AFAIK, the first systematic generation of real, even if primitive, jokes.
Try our basic joke generation out with the command /vid