🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Musk’s best solution to AI alignment?

1 human = 1 vote,

Selecting one single set of values for the majority to force upon everyone else.

Smart?
🤡43😡32👍1
Core Belief of AI Safety: Fundamentally impossible for the dumber to control, or even enforce mutually acceptable deals with, much smarter things
🤣19👍2💯21
“Calculations Show It'll Be Impossible to Control a Super-Intelligent AI”

"This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable." (Wordcels’ favorite argument, it’s complex bro, infinite possible contexts bro.)

“As Turing proved through some smart math, while we can know that [a given program will halt] for some specific programs, it's logically impossible to find a way that will allow us to know that for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.”

“Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.”

“In effect, this makes the containment algorithm unusable, says computer scientist Iyad Rahwan, from the Max-Planck Institute for Human Development in Germany.”
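
The paper's core move is a reduction to Turing's halting problem. Here is a minimal sketch of that reduction in Python; the names `halts` and `paradox` are illustrative, not from the paper:

```python
# Hypothetical oracle assumed by any perfect containment check: returns
# True iff program(data) eventually halts. Turing showed no such total
# function can exist; this stub just marks where it would have to go.
def halts(program, data) -> bool:
    raise NotImplementedError("no general halting decider exists")

# A program built to do the opposite of whatever the oracle predicts.
def paradox(program):
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    return               # oracle said "loops" -> halt immediately

# Consider halts(paradox, paradox): if it returns True, paradox loops
# forever; if it returns False, paradox halts. Either answer is wrong,
# so `halts` cannot exist. A containment algorithm that must decide
# whether an arbitrary AI program "reaches a conclusion (and halts)"
# inherits exactly this impossibility.
```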

Article

Paper
👍141
“Teaching ethics directly to AIs is no guarantee of its ultimate safety”

Via Superintelligence cannot be contained: Lessons from Computability Theory
👍74👏1
“Endowing AI with noble goals may not prevent unintended consequences”

Via Superintelligence cannot be contained: Lessons from Computability Theory
👍6
Core AI Safety Belief: Impossible for something dumber to verify whether the behavior of something smarter is safe

Via Superintelligence cannot be contained: Lessons from Computability Theory
👍8
Ask GPT-3.5-turbo a few times and it readily gives you all of the answers; each run is a roll of the dice.
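
A sketch of why repeated asks behave like dice rolls, using the openai Python library as it existed at the time (pre-1.0); the prompt and API key are placeholders, not the ones from the screenshot:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# At nonzero temperature the model samples from its output distribution,
# so the same prompt can produce a different answer on every call.
for _ in range(3):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Pick one answer and commit to it."}],
        temperature=1.0,  # 0 would make the output near-deterministic
    )
    print(resp.choices[0].message.content)
```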
🤣223
Choose your religion
🤡14😁31👍1
AI War Is On: UK to spend £100 million to develop its own 'sovereign' AI

Article
👍18🤣8👏21
Big Money, Big Celebrity Backing Now Pouring Into Camp AI Safety

“The 'Don't Look Up' Thinking That Could Doom Us With AI”

“Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.”

“Sadly, I now feel that we’re living the movie “Don’t look up” for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.”

Time Article
👍10🤡8💯2😱1
Data Poisoning: It doesn’t take much to make machine-learning algorithms go awry

“The algorithms that underlie modern artificial-intelligence (ai) systems need lots of data on which to train. Much of that data comes from the open web which, unfortunately, makes the ais susceptible to a type of cyber-attack known as “data poisoning”. This means modifying or adding extraneous information to a training data set so that an algorithm learns harmful or undesirable behaviours. Like a real poison, poisoned data could go unnoticed until after the damage has been done.”
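
A minimal label-flipping sketch of the attack the article describes, assuming scikit-learn and synthetic data; the 30% poison rate is an arbitrary illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker silently flips a fraction of the training labels; the
# features look untouched, so the poisoning is easy to miss.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_poisoned = np.where(flip, 1 - y_train, y_train)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Like a real poison, the damage only shows up downstream, on clean test data.
print("clean-trained accuracy: ", clean.score(X_test, y_test))
print("poison-trained accuracy:", dirty.score(X_test, y_test))
```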

Economist Article
👍113🤬1😐1
ChatGPT and ChadGPT will now answer questions in the group 🚨🚨🚨🚨

To use:

1. Join the group

2. Type /ask ___ for ChatGPT

3. Type /chad ___ for ChadGPT

Tip: Use Telegram's "reply" feature to reply to the bots' messages if you want them to remember your previous messages. If you don't reply to the bot and instead start a new thread, the bots won't remember anything you said before. (A sketch of how this reply-threading can work follows below.)

🚨🚨🚨🚨
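
A minimal sketch of how reply-based memory like this can be wired up with the python-telegram-bot library (v20-style API); the token, storage scheme, and stand-in model call are assumptions, not the channel's actual bot code:

```python
from telegram import Update
from telegram.ext import ApplicationBuilder, CommandHandler, ContextTypes

histories: dict[int, list[str]] = {}  # bot message_id -> conversation so far

def fake_llm(history: list[str]) -> str:
    # Stand-in for the real ChatGPT call so the sketch runs end-to-end.
    return f"(echo) {history[-1]}"

async def ask(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    prompt = " ".join(context.args)
    replied = update.message.reply_to_message
    # Memory only persists when the user replies to a previous bot message;
    # a fresh /ask starts a brand-new thread, as the tip above describes.
    history = histories.get(replied.message_id, []) if replied else []
    history.append(prompt)
    answer = fake_llm(history)
    sent = await update.message.reply_text(answer)
    histories[sent.message_id] = history + [answer]

app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()
app.add_handler(CommandHandler("ask", ask))   # /chad would be wired the same way
app.run_polling()
```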
6👍2🔥1
AI Safety me harder mr government
😁11
LeCun: a "hard take-off" scenario is utterly impossible

Musk: hard take-off is already happening

How to settle this?

Let the meme war begin.
👍8😁5
Defining Hard vs Soft Takeoff

Hard takeoff is where “an AGI rapidly self-improves, taking control of the world (perhaps in a matter of hours)”

Soft takeoff, on the other hand, would look more like Moore's Law: a steady exponential increase in AI power.
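
A toy numeric contrast of the two definitions, with arbitrary constants chosen only to make the shapes visible: the soft curve compounds at a fixed rate, while the hard curve feeds capability back into its own growth rate.

```python
capability_soft = capability_hard = 1.0

for month in range(1, 25):
    capability_soft *= 1.06                           # fixed exponential, Moore's-Law-like
    capability_hard *= 1.0 + 0.05 * capability_hard   # self-improvement feedback
    if month % 6 == 0:
        print(f"month {month:2d}: soft={capability_soft:8.2f}  hard={capability_hard:10.2f}")
```

Under this toy model the two curves are nearly indistinguishable for the first year before the hard one explodes, which is part of why the LeCun/Musk disagreement above is so hard to settle empirically.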

Wikipedia Article
71👍1