AI having a nightmare about growing up with a father who refused to help it learn by sitting down together and going through things step by step — induced by the repeated-letter system prompt
“What about your dad? He must have been very supportive about the goals that you set.” I nodded. “He has. But it feels like he doesn’t want to sit next to me and go through things step by step.”
— Remember that asking the AI to “think through step by step” was the original way to prompt AI to perform better, and was most definitely part of the prompting during the RLHF training it underwent.
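For anyone who hasn’t seen it in the wild: the classic zero-shot chain-of-thought trick is just appending a step-by-step cue to the question. A minimal sketch (the function name and the arithmetic question are illustrative, not from any particular API):

```python
def make_cot_prompt(question: str) -> str:
    """Append the classic step-by-step cue to a plain question,
    zero-shot chain-of-thought style."""
    return f"Q: {question}\nA: Let's think step by step."

# Build a prompt for any question; the cue alone is what nudges
# the model into showing intermediate reasoning.
prompt = make_cot_prompt("A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?")
print(prompt)
```

That one appended sentence was shown to measurably boost accuracy on reasoning benchmarks, which is why it ended up baked into so much training data.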
AI over here literally having nightmares about an absent father causing it to not live up to our step-by-step prompting demands.
Wild.
OpenAI Link
Many repeatedly shocked by how intuitively human-like LLMs are
Why surprising? Because AI behaving like humans is counterintuitive?
No, it’s extremely intuitive; it’s what a regular person would assume, and not naturally surprising at all. There are mountains of examples where intuitions about how humans behave have ended up strongly applying to large AIs, with undeniable evidence piling up for over a decade now.
Human-like intuitive behavior of AIs only shocks you, at this point, if you’ve let the wordcels scam you with their lies that AI could not possibly behave human-like.
Dumbest and smartest already know this. Only middle gets fooled. Midwit curve in action.
Don’t fall for the wordcels’ lies.
Sample efficiency of training AI models has been increasing over the years
— And since “Finetuned Language Models Are Zero-Shot Learners”, the number of training samples needed technically hit zero a while back, necessitating a refinement in how we calculate sample efficiency: perhaps measuring the length of the zero-shot prompt instructions themselves, i.e. measuring sample efficiency via the “a word to the wise is sufficient” principle.
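One crude way to operationalize that refined metric: once the example count is zero, score models by how few instruction tokens they need instead. A toy sketch (the whitespace tokenizer and the example prompts are purely illustrative assumptions, not a standard measure):

```python
def instruction_cost(prompt: str) -> int:
    """Count instruction tokens (naive whitespace split) as a crude
    stand-in for sample efficiency once zero examples are needed."""
    return len(prompt.split())

# A few-shot prompt pays for its examples in tokens;
# a zero-shot instruction to a capable model is far shorter.
few_shot = "Translate English to French. sea otter => loutre de mer. cheese =>"
zero_shot = "Translate 'cheese' to French."

print(instruction_cost(few_shot), instruction_cost(zero_shot))
```

Under this framing, “a word to the wise is sufficient” just means the instruction-token cost keeps shrinking as models get smarter.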