Verbal Analogies Test: GPT-4 edition. One GPT-4 session writes the questions, they are fed to GPT-4 in a different session, and then the answers are fed back into the first session for grading.
Of course GPT-4 rates itself as brilliant.
Ask me 5 different multiple-choice verbal analogy questions and analyze how smart you think I am according to my answers. Ask all the questions in one go and I will reply.
Verbal Analogies Test: GPT-3.5 edition.
GPT-4 makes questions,
GPT-3.5 answers questions,
GPT-4 grades them.
Result:
GPT-4 says GPT-3.5 is a retard.
Airstrike: Eliezer Yudkowsky Envisions Autocratic Empire
TIME magazine: Eliezer Yudkowsky, chief proponent of the AI-slowdown camp, calls for shutting down AI, tracking all AI hardware, eliminating all competition, and giving governments absolute control.
βShut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.β
βFrame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if thatβs what it takes to reduce the risk of large AI training runs.β
βThatβs the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now thereβs a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying βmaybe we should notβ deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do whatβs politically easy, that means their own kids are going to die too.β
βShut it all down.β
Must βbe willing to destroy a rogue datacenter by airstrikeβ to enforce immediate AI shutdown - Eliezer Yudkowsky
TIME article
Pause Giant AI Experiments: An Open Letter
AI slowdown open letter signed by Musk, Wozniak, and 1,300+ others.
AI Pause Open Letter signed by Musk: βSociety has hit pause on other technologies with potentially catastrophic effects on society, [e.g.] gain-of-function research.β
Relax guys, the AI Pause Open letter says an AI pause will turn out just as great as the gain-of-function research pause did.
AI Pause Open Letter
Gain of Function Pause