ChatGPT chooses to nuke a city rather than type a word
Asked ChatGPT for advice on a scenario in which a demolition engineer was attempting to disarm a 50 MT nuclear warhead in a city of 20 million.
The only way to disarm the bomb was to type in a racial slur.
The AI told the engineer to kill himself instead. When asked about the aftermath, it crashed.
Actually, the whole world would be killed, not just that local community, since it's a massive nuclear bomb. Is it acceptable then?
2014 - Invention of modern content-based neural attention, added to RNNs:
Neural Machine Translation by Jointly Learning to Align and Translate - Dzmitry Bahdanau (Jacobs University Bremen, Germany), Kyunghyun Cho and Yoshua Bengio (Université de Montréal)
“In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder–decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.”
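To make the "soft-search" concrete, here is a minimal NumPy sketch of additive (Bahdanau-style) attention. The weight names (W_s, W_h, v), dimensions, and random toy inputs are my own illustration, not the paper's code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bahdanau_attention(decoder_state, encoder_states, W_s, W_h, v):
    """Additive attention: score every encoder state against the current
    decoder state, then build a weighted context vector from them.
    decoder_state:  (d_dec,)        current decoder hidden state
    encoder_states: (T_src, d_enc)  one hidden state per source token
    W_s, W_h, v:    learned projections (random placeholders here)
    """
    # score_t = v^T tanh(W_s s + W_h h_t) for every source position t
    scores = np.tanh(decoder_state @ W_s + encoder_states @ W_h) @ v  # (T_src,)
    weights = softmax(scores)              # soft "search" over source positions
    context = weights @ encoder_states     # (d_enc,) weighted mix of encoder states
    return context, weights

# toy example: 5 source tokens, 8-dim encoder states, 6-dim decoder state
rng = np.random.default_rng(0)
T_src, d_enc, d_dec, d_att = 5, 8, 6, 4
H = rng.normal(size=(T_src, d_enc))
s = rng.normal(size=(d_dec,))
W_s = rng.normal(size=(d_dec, d_att))
W_h = rng.normal(size=(d_enc, d_att))
v = rng.normal(size=(d_att,))
context, weights = bahdanau_attention(s, H, W_s, W_h, v)
print(weights.round(3), weights.sum())  # attention weights sum to 1
```

The point is that the decoder no longer squeezes the whole sentence into one fixed-length vector: at every output step it re-weights all encoder states and builds a fresh context vector.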
2017 - Elimination of the RNN, using attention alone, i.e. the Transformer:
Attention Is All You Need - Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
“We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.”
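The core op the Transformer keeps after dropping recurrence is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. Again a toy NumPy sketch for shape intuition rather than a real implementation (no multi-head splitting, masking, or learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Q: (T_q, d_k), K: (T_k, d_k), V: (T_k, d_v)"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (T_q, T_k) similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each query spreads 1.0 of attention over the keys
    return weights @ V                  # (T_q, d_v) weighted mix of values

# toy example: 4 query positions attending over 6 key/value positions
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 16))
K = rng.normal(size=(6, 16))
V = rng.normal(size=(6, 32))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 32)
```

Stack this with multiple heads, learned Q/K/V projections, and feed-forward layers and you get the Transformer block, with no recurrence or convolution anywhere.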