A demo of the attention mechanism of DeepMind's AlphaCode as it completes a coding question.
Each colored piece of text shows what the model is looking at when it makes its prediction for the next code to write. The opacity of the colored background represents how much weight that piece of text gets overall.
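Roughly how you could render that kind of view yourself, assuming you already have per-token attention weights (the tokens, weights, and file name below are made up for illustration): map each weight to a background opacity and wrap the token in a colored span.

# Hypothetical context tokens and per-token attention weights, just for illustration.
tokens  = ["def", " solve", "(", "n", ")", ":", " return", " n", " *", " 2"]
weights = [0.05, 0.40, 0.02, 0.90, 0.02, 0.03, 0.60, 0.85, 0.10, 0.30]

def render_attention_html(tokens, weights, rgb="255, 165, 0"):
    """Wrap each token in a span whose background opacity tracks its attention weight."""
    max_w = max(weights) or 1.0            # avoid division by zero if all weights are 0
    spans = []
    for tok, w in zip(tokens, weights):
        opacity = w / max_w                # most-attended token gets a fully opaque background
        spans.append(f'<span style="background-color: rgba({rgb}, {opacity:.2f})">{tok}</span>')
    return "<pre>" + "".join(spans) + "</pre>"

with open("attention_demo.html", "w") as f:
    f.write(render_attention_html(tokens, weights))

Opening attention_demo.html in a browser gives the same kind of highlighted-context view as the clip, just for a toy example rather than AlphaCode's real attention.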
Funny thing — Max is doing exactly the kind of over-confident-toned lying that Andrew talks about the AIs doing.
It's already been confirmed that LLMs' confidences are surprisingly well-calibrated, not in all conditions but in a wide range of them.
Solve the LLM lying problem, and you solve the human lying problem. They are one and the same.
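On "well-calibrated": the concrete claim is that when a model reports, say, 80% confidence, it turns out to be right roughly 80% of the time. A minimal sketch of checking that with expected calibration error, assuming you have per-answer confidences and correctness labels (the numbers here are made up):

import numpy as np

# Hypothetical data: the model's stated confidence for each answer, and whether it was right.
confidences = np.array([0.95, 0.80, 0.60, 0.99, 0.70, 0.55, 0.90, 0.65])
correct     = np.array([1,    1,    0,    1,    1,    0,    1,    1   ])

def expected_calibration_error(conf, correct, n_bins=5):
    """Bin predictions by confidence and compare mean confidence to accuracy in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap       # weight each bin by its share of predictions
    return ece

print(f"ECE: {expected_calibration_error(confidences, correct):.3f}")

A small ECE means the stated confidences roughly match observed accuracy, which is the sense in which "calibrated" is being used here.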