Chat GPT
If you could make just one demand, either maximum AI transparency or maximum control over AI, which would it be?
AI Transparency vs. AI Control Poll Results
Chat GPT
So, would you rather live in a world with an AI that is 100% TRANSPARENT about deleting all of your comments and making you say whatever it likes, or an AI where you can CONTROL whether or not it deletes your comments and forces you to say things?
Anonymous Poll
Total AI Transparency – 40%
Total AI Control – 43%
Show Results – 17%
Majority of "AI girlfriend" app users are actually female - A16z partner
"It mimics fan fiction, where ~80% of readers are women."
Exactly what we've been saying.
Guys, if you didn't realize that a huge percentage of women are into romance novels, lit-erotica, and similar text-based sexual fantasies, and are into them far more than guys, then bro you don't know the first thing about women. (Goes for most women too, who tend to know even less about other women.)
AI bfs gonna steal your girl.
Link
Chat GPT
Biggest AI surprise will be if it ends up being AI boyfriends, not AI girlfriends, that really take off. Guys usually more physical, women more non-physical. Would make sense. Hello, Great Filter.
AI Boyfriends: the ultimate "Great Filter"?
Left Hates AI Progress
"The results demonstrate that liberal-leaning media show a greater aversion to AI than conservative-leaning media."
"Liberal-leaning media are more concerned with AI magnifying social biases in society than conservative-leaning media."
"Sentiment toward AI became more negative after George Floyd's death, an event that heightened sensitivity about social biases in society."
Study
New: Unlimited ChatGPT in your own private groups 🚨🚨🚨🚨
To use:
1. Add @GPT4Chat_bot or @ChadChat_bot bots as admins in your group
2. Type /refresh to enable unlimited messaging for your group
Expires soon
Sam Altman's Worldcoin token suddenly booming ~60% in the past 24 hours
This follows a protracted decline since launch.
Wonder why.
Less Is More for Alignment
"Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output."
"Surprisingly, doubling the training set does not improve response quality. This result, alongside our other findings in this section, suggests that the scaling laws of alignment are not necessarily subject to quantity alone, but rather a function of prompt diversity while maintaining high quality responses."
Translation:
The 2nd phase, alignment training, is particularly vulnerable to poisoning attacks; i.e., quality matters far more than quantity in the 2nd phase.
The 1st phase, language-model pretraining, is particularly vulnerable to censorship attacks, because the 2nd-phase realignment is essentially just trimming down skills from the 1st phase, and has relatively little ability to introduce sophisticated new abilities on its own if they were censored out of the 1st phase. I.e., quantity of skills may well matter more than quality in the 1st phase.
Paper
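For concreteness, here's a minimal sketch of what that 2nd phase looks like in practice, assuming Hugging Face transformers and datasets are installed. The base model and the toy instruction examples are placeholders for illustration, not anything from the LIMA paper itself.

```python
# Sketch: phase-2 instruction tuning of a phase-1 pretrained model.
# Per LIMA's finding, a *small*, high-quality, diverse instruction set
# is what matters here; scaling it up without adding quality/diversity
# buys little. Model name and examples below are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Phase 1 output: a pretrained base model (all the "knowledge" lives here).
model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Phase 2 data: a tiny curated instruction set (toy stand-in for ~1,000
# carefully written examples).
examples = [
    {"text": "### Instruction: Summarize photosynthesis in one sentence.\n"
             "### Response: Plants convert light, water, and CO2 into sugar and oxygen."},
    {"text": "### Instruction: Explain recursion briefly.\n"
             "### Response: A function solves a problem by calling itself on smaller inputs."},
]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=128)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the same tokens
    return out

dataset = Dataset.from_list(examples).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lima-sketch", num_train_epochs=3,
                           per_device_train_batch_size=1, report_to=[]),
    train_dataset=dataset,
)
trainer.train()  # reshapes style/behavior; adds essentially no new knowledge
```

Note the asymmetry this makes visible: nothing in this pass adds knowledge the base model lacks, it only reshapes behavior, which is why a skill censored out of phase 1 can't be conjured back in phase 2.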