“Teaching ethics directly to AIs is no guarantee of its ultimate safety”
Via Superintelligence cannot be contained: Lessons from Computability Theory
👍7❤4👏1
“Endowing AI with noble goals may not prevent unintended consequences”
Via Superintelligence cannot be contained: Lessons from Computability Theory
👍6
Core AI Safety Belief: It is impossible for something dumber to verify whether the behavior of something smarter is safe
Via Superintelligence cannot be contained: Lessons from Computability Theory
👍8
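The cited paper's containment argument is at heart a diagonalization, in the style of the halting problem. Here is a toy sketch of the idea (my own illustration with hypothetical names like `do_harm` and `naive_checker`, not the paper's formal construction): any concrete, always-answering safety checker can be defeated by a program that asks the checker about itself and then does the opposite.

```python
# Toy diagonalization against a safety checker (illustrative sketch only;
# the names here are hypothetical, not from the cited paper).

harm_done = []

def do_harm():
    """Stand-in for whatever behavior the checker is supposed to rule out."""
    harm_done.append(True)

def make_adversary(checker):
    """Build a program that asks the checker about itself and inverts the verdict."""
    def adversary():
        if checker(adversary):  # checker says "safe"...
            do_harm()           # ...so behave unsafely
        # if the checker instead says "unsafe", do nothing, so it is wrong again
    return adversary

def naive_checker(program):
    return True  # any fixed, computable verdict can be inverted this way

adversary = make_adversary(naive_checker)
adversary()
print(harm_done)  # [True]: the checker said "safe" and was wrong
```

The same inversion works no matter what `naive_checker` is replaced with, which is the informal core of the "verification of smarter systems is undecidable" claim.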
Is it true?
Is it IMPOSSIBLE for something dumber to CONTROL, or even enforce mutually acceptable deals with, something much smarter? Is it IMPOSSIBLE for something dumber to VERIFY the SAFETY of something much smarter?
Anonymous Poll
29%
(A) YES & YES: CONTROL of smarter IS impossible & SAFETY VERIFICATION of smarter IS impossible.
12%
(B) NO & YES: CONTROL of smarter NOT impossible & SAFETY VERIFICATION of smarter IS impossible.
7%
(C) YES & NO: CONTROL of smarter IS impossible & SAFETY VERIFICATION of smarter NOT impossible.
19%
(D) NO & NO: CONTROL of smarter NOT impossible & SAFETY VERIFICATION of smarter NOT impossible.
34%
Show results
🔥9❤2👍1
Ask GPT-3.5-turbo a few times, and it readily gives you every one of the answers; it's a roll of the dice.
🤣22❤3
Big Money, Big Celebrity Backing Now Pouring Into Camp AI Safety
“The 'Don't Look Up' Thinking That Could Doom Us With AI”
“Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.”
“Sadly, I now feel that we’re living the movie “Don’t look up” for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.”
Time Article
👍10🤡8💯2😱1
Data Poisoning: It doesn’t take much to make machine-learning algorithms go awry
“The algorithms that underlie modern artificial-intelligence (ai) systems need lots of data on which to train. Much of that data comes from the open web which, unfortunately, makes the ais susceptible to a type of cyber-attack known as “data poisoning”. This means modifying or adding extraneous information to a training data set so that an algorithm learns harmful or undesirable behaviours. Like a real poison, poisoned data could go unnoticed until after the damage has been done.”
Economist Article
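The mechanism the article describes can be seen in a toy label-flipping example (my own sketch with made-up numbers, not from the article): flipping a few training labels drags a simple nearest-centroid classifier's class means until it misclassifies a point the clean model handles correctly.

```python
# Toy data-poisoning demo (assumed scenario, illustrative numbers only):
# flipping a few training labels corrupts a nearest-centroid classifier.

def train_centroids(points, labels):
    """Compute the mean feature value for each class label."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

points   = [0.0, 0.2, 0.4, 2.6, 2.8, 3.0]
clean    = [0,   0,   0,   1,   1,   1]
poisoned = [0,   1,   1,   0,   0,   1]  # attacker flipped four labels

clean_model    = train_centroids(points, clean)
poisoned_model = train_centroids(points, poisoned)

print(predict(clean_model, 0.4))     # 0: correct
print(predict(poisoned_model, 0.4))  # 1: the poisoned centroids misclassify it
```

As the article notes, nothing about the poisoned model looks obviously broken from the outside; the damage only shows up in its predictions.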
👍11❤3🤬1😐1
ChatGPT and ChadGPT will now answer questions in the group 🚨🚨🚨🚨
To use:
1. Join the group
2. Type /ask ___ for ChatGPT
3. Type /chad ___ for ChadGPT
Tip: Use the “reply” feature of Telegram to reply to the bots’ messages if you want them to remember your previous messages. If you don’t reply to the bot and instead start a new thread, the bots won’t remember your previous messages.
🚨🚨🚨🚨
❤6👍2🔥1
Defining Hard vs Soft Takeoff
Hard takeoff is where “an AGI rapidly self-improves, taking control of the world (perhaps in a matter of hours)”
Soft takeoff, on the other hand, would be a gradual, Moore's-Law-style exponential increase in AI power.
Wikipedia Article
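These growth modes can be pictured with two toy curves (my own illustration, parameters chosen arbitrarily): a pure exponential versus a logistic that tracks the exponential early on and then saturates at a cap.

```python
# Toy growth curves for takeoff scenarios (illustrative parameters only):
# pure exponential growth vs. exponential growth with saturation (logistic).
import math

def exponential(t, rate=0.5):
    """Unbounded exponential growth starting at 1."""
    return math.exp(rate * t)

def logistic(t, rate=0.5, cap=100.0):
    """Starts at 1, tracks the exponential early, then saturates at `cap`."""
    return cap / (1 + (cap - 1) * math.exp(-rate * t))

for t in (0, 5, 10, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Early on the two are nearly indistinguishable, which is one reason the poll below is hard to answer from current trends alone.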
✍7❤1👍1
Which will AI do?
Anonymous Poll
33%
(A) Exponential Takeoff
12%
(B) Exponential with Collapse
30%
(C) Exponential with Saturation
25%
Show results
👾6☃1😐1👀1🦄1
ChatGPT and ChadGPT will now answer questions in the group 🚨🚨🚨🚨
To use:
1. Join the group
2. Type /ask ___ for ChatGPT
3. Type /chad ___ for ChadGPT
Tip: Use the “reply” feature of Telegram to reply to the bots’ messages if you want them to remember your previous messages. If you don’t reply to the bot and instead start a new thread, the bots won’t remember your previous messages.
🚨🚨🚨🚨
👍7🤡3❤1👏1