Yud Weighs In
What are they debating? — a fast takeoff scenario, where an AGI rapidly self-improves, "taking control" of the world.
Could this happen?
Well, I don’t know if AGI will itself take over the world.
But I do know that whoever rules the AI will rule the world.
And if that ends up being one central power, we’re screwed.
💯8❤2🫡2🔥1
Are you about to be catfished?
Protip: The free AIorNot tool can still detect most AI-generated deepfake images.
This is a battle the AI detection tools will eventually lose - but at least for the moment, they are still mostly winning.
AIorNot for Images
🙏12❤1
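If you'd rather script the check than click through the web UI, AIorNot also offers an API. The sketch below is a rough illustration only: the endpoint URL, auth header, and upload field name are assumptions here, so check AIorNot's current API docs for the real interface.

# Rough sketch of scripting an AI-image check. The endpoint, auth scheme,
# upload field name, and response shape below are assumptions; consult
# AIorNot's API docs for the real interface before relying on this.
import requests

API_KEY = "YOUR_AIORNOT_API_KEY"                        # placeholder
ENDPOINT = "https://api.aiornot.com/v1/reports/image"   # assumed endpoint

def check_image(path: str) -> dict:
    """Upload a local image and return the service's verdict as a dict."""
    with open(path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"object": f},  # field name is an assumption
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Inspect the returned JSON for the AI-vs-human verdict.
    print(check_image("suspicious_profile_pic.jpg"))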
Chat GPT
LeCun: In the real world, every exponentially-growing process eventually saturates.
Et tu, LeCun?
Tweet
Hanson: Saturation of wealth: soon we’ll live in poverty because… wealth could not keep doubling for a million years
Saturation of discovery: “by then most everything worth knowing will be known by many; truly new and important discoveries will be quite rare.”
Et tu, Robin Hanson?
Same weird “all growth must saturate any day now, simply because it must saturate a million years from now” argument from almost everyone.
Hanson’s 2009 Article
👍6👀2❤1🤣1
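A quick way to see what's wrong with the saturation takes above: a logistic curve with a far-off ceiling is numerically indistinguishable from a pure exponential in its early phase, so "it must saturate eventually" tells you nothing about whether it saturates soon. A minimal sketch (all parameters are arbitrary, purely for illustration):

# A logistic curve with a far-off ceiling K tracks a pure exponential almost
# exactly in its early phase; divergence only shows up as x approaches K.
import math

r = 0.5     # growth rate (arbitrary)
K = 1e12    # saturation ceiling, set far above current values (arbitrary)
x0 = 1.0    # starting value

def exponential(t: float) -> float:
    return x0 * math.exp(r * t)

def logistic(t: float) -> float:
    # Standard logistic solution with carrying capacity K.
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 31, 5):
    e, l = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:14.1f}  logistic={l:14.1f}  ratio={l / e:.6f}")
# The ratio stays ~1.0 across this whole range: eventual saturation is
# invisible until x actually gets near K.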
GPT-4 is original for almost everything except jokes, at which it is HORRIBLE and plagiarizes ~100%
So the big question is, which is more likely?
(A) GPT-5 will grok jokes: Will jokes, at least basic non-plagiarized ones, be the next major domain that GPT-5 suddenly “groks”?
Or,
(B) More training alone isn't enough; a bigger change is needed: Is a fundamentally different model architecture or interaction approach needed for GPT models to make decent jokes in response to normal prompts?
FWIW, we settled on (B), achieving what seems to be, AFAIK, the first systematic generation of real, even if primitive, jokes.
Try our basic joke generation out with the command /vid
👍16❤8🤯4👏2
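Want to check the plagiarism claim yourself? A crude novelty filter is enough: compare a generated joke against a few well-known jokes using standard-library string similarity. To be clear, this is not the /vid pipeline, just a minimal hypothetical sketch (the joke list and the 0.8 threshold are made up for illustration):

# Minimal plagiarism check for generated jokes: flag a candidate that closely
# matches any joke in a (tiny, made-up) list of known jokes.
from difflib import SequenceMatcher

KNOWN_JOKES = [
    "why don't scientists trust atoms? because they make up everything.",
    "i told my wife she was drawing her eyebrows too high. she looked surprised.",
]

def is_plagiarized(candidate: str, threshold: float = 0.8) -> bool:
    """Return True if the candidate closely matches any known joke."""
    c = candidate.lower().strip()
    return any(
        SequenceMatcher(None, c, known).ratio() >= threshold
        for known in KNOWN_JOKES
    )

print(is_plagiarized("Why don't scientists trust atoms? They make up everything!"))  # True
print(is_plagiarized("My GPU told me a joke, but it wouldn't render."))              # False

For anything serious you'd swap in a much larger joke corpus or an embedding-based search, but even this crude check catches the classic atom joke.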