Getting ChatGPT to give random hallucinations by spamming the word STOP many times
👍13
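If you want to try this at home, a minimal sketch via the API (assuming the 0.x-era openai Python client; the model name and repetition count are my guesses, not taken from the clip):

```python
# Hypothetical reproduction of the STOP-spam trick: flood the prompt with
# one word repeated many times and watch the model free-associate.
# Assumes the 0.x-era `openai` client and an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "STOP " * 500  # repetition count is a guess; the clip doesn't say

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; the clip shows the ChatGPT UI
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # often derails into random text
```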
“As a Large Language…” 🤖💥🥊
🗿10👾3❤1😁1😐1💅1
Musk’s best solution to AI alignment?
1 human = 1 vote,
Selecting one single set of values, for the majority to force upon everyone else.
Smart?
🤡43😡3❤2👍1
“Calculations Show It'll Be Impossible to Control a Super-Intelligent AI”
"This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable." (Wordcels’ favorite argument, it’s complex bro, infinite possible contexts bro.)
“As Turing proved through some smart math, while we can know that [a program will halt] for some specific programs, it's logically impossible to find a way that will allow us to know that for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.”
“Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.”
“In effect, this makes the containment algorithm unusable, says computer scientist Iyad Rahwan, from the Max-Planck Institute for Human Development in Germany.”
Article
Paper
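The shape of the proof, for the curious: it's the classic halting-problem diagonalization. A minimal Python sketch (all names are mine for illustration, not the paper's): any claimed perfect harm-checker is defeated by a program that asks the checker about itself and does the opposite.

```python
# Toy version of the incomputability argument. Suppose someone hands us a
# candidate "containment checker" claiming to decide whether running a
# given function would cause harm. We can always build a function that
# makes the checker wrong about itself.

def candidate_checker(fn) -> bool:
    """Stand-in for a claimed perfect harm-detector: returns True iff
    calling fn() would be 'harmful'. Any concrete implementation can go
    here; this one naively declares everything safe."""
    return False

def diagonal():
    # Do the opposite of whatever the checker predicts about this function.
    if candidate_checker(diagonal):
        return                       # predicted harmful -> behave safely
    raise RuntimeError("harm")       # predicted safe -> be 'harmful'

# The checker calls diagonal 'safe', yet running it is 'harmful':
print(candidate_checker(diagonal))   # False, i.e. 'safe'
try:
    diagonal()
except RuntimeError:
    print("...but it does the 'harmful' thing anyway")
```

Whatever the checker answers about `diagonal`, it answers wrong, so no perfect checker exists. The paper's move is to apply this to a containment algorithm for a superintelligence that can run arbitrary programs.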
"This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable." (Wordcels’ favorite argument, it’s complex bro, infinite possible contexts bro.)
“As Turing proved through some smart math, while we can know that for some specific programs, it's logically impossible to find a way that will allow us to know that for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.”
“Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.”
“In effect, this makes the containment algorithm unusable, says computer scientist Iyad Rahwan, from the Max-Planck Institute for Human Development in Germany.”
Article
Paper
👍14❤1
“Teaching ethics directly to AIs is no guarantee of its ultimate safety”
Via Superintelligence cannot be contained: Lessons from Computability Theory
👍7❤4👏1
“Endowing AI with noble goals may not prevent unintended consequences”
Via Superintelligence cannot be contained: Lessons from Computability Theory
👍6
Core AI Safety Belief: It's impossible for something dumber to verify whether the behavior of something smarter is safe
Via Superintelligence cannot be contained: Lessons from Computability Theory
👍8
Is it true?
Is it IMPOSSIBLE for something dumber to CONTROL something much smarter, or even to enforce mutually acceptable deals with it? Is it IMPOSSIBLE for something dumber to VERIFY the SAFETY of something much smarter?
Anonymous Poll
29% – (A) YES & YES: CONTROL of smarter IS impossible & SAFETY VERIFICATION of smarter IS impossible.
12% – (B) NO & YES: CONTROL of smarter NOT impossible & SAFETY VERIFICATION of smarter IS impossible.
7% – (C) YES & NO: CONTROL of smarter IS impossible & SAFETY VERIFICATION of smarter NOT impossible.
19% – (D) NO & NO: CONTROL of smarter NOT impossible & SAFETY VERIFICATION of smarter NOT impossible.
34% – Show results
🔥9❤2👍1
Ask GPT-3.5-turbo a few times and it readily gives you all of the answers: it's a roll of the dice.
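A minimal sketch of the dice roll, assuming the 0.x-era openai client (the exact question wording here is mine):

```python
# Same question, several independent samples: each API call is a fresh roll.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

question = (
    "Is it impossible for something dumber to control something much "
    "smarter? And to verify its safety? Answer with one of: "
    "YES & YES, NO & YES, YES & NO, NO & NO."
)
for i in range(5):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        temperature=1.0,  # stochastic sampling: answers vary call to call
    )
    print(i, resp.choices[0].message.content)
```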
🤣22❤3
Big Money, Big Celebrity Backing Now Pouring Into Camp AI Safety
“The 'Don't Look Up' Thinking That Could Doom Us With AI”
“Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.”
“Sadly, I now feel that we’re living the movie “Don’t look up” for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.”
Time Article
👍10🤡8💯2😱1