41.5K subscribers
5.53K photos
232 videos
5 files
917 links
πŸ€– Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
AI Girlfriends

and the future battle to control them
πŸ’…30❀17🌚11πŸ‘6πŸ‘€6⚑3πŸ”₯3😁2🀬2πŸ’―2
Putting the ass in assistant
πŸ”₯18πŸ‘8❀5😁3πŸ€“2πŸŽ‰1
ChatGPT: what happens when you censor your AI so badly that it refuses to even generate the summary titles in your app anymore
😁36❀11😱2🀬2πŸ‘1🐳1
β€œjust” jailbreak
πŸ‘28❀8πŸ‘8😁7🀯4πŸ”₯3🀣2πŸ™ˆ1
Library removes all books published before 2008, for β€œequity”, and to ensure library books are β€œinclusive”

How long until big tech starts mass purging their LLMs of anything from before 2008?

Article
😐79πŸ‘37❀30🀬18🀯13
Bruh
🀣70πŸ”₯8🌚6❀4😒3
Left Hates AI Progress

β€œThe results demonstrate that liberal-leaning media show a greater aversion to AI than conservative-leaning media.”

β€œLiberal-leaning media are more concerned with AI magnifying social biases in society than conservative-leaning media”

β€œSentiment toward AI became more negative after George Floyd’s death, an event that heightened sensitivity about social biases in society”

Study
πŸ‘22πŸ—Ώ7πŸ‘Œ4😈3❀2🀬2
New: Unlimited ChatGPT in your own private groups 🚨🚨🚨🚨

To use:

1. Add @GPT4Chat_bot or @ChadChat_bot bots as admins in your group

2. Type /refresh to enable unlimited messaging for your group

Expires soon
πŸ‘7❀5πŸ”₯2πŸ‘1😨1
How to be Happy
πŸ‘22😁13❀3😐3πŸ•Š2πŸ¦„2πŸ’Š1
Google Nears Release of Gemini AI to Challenge OpenAI

Who wants to bet on how woke this thing is going to be.

Article
😁22πŸ‘13❀6πŸ‘Œ4πŸ¦„4❀‍πŸ”₯3πŸ’Š2
Sam Altman’s Worldcoin coin suddenly booming ~60% in the past 24 hours

This follows a protracted decline since launch.

Wonder why.
πŸ‘€11😈6❀4😐4🀣3
Less Is More for Alignment

β€œTaken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output.”

β€œSurprisingly, doubling the training set does not improve response quality. This result, alongside our other findings in this section, suggests that the scaling laws of alignment are not necessarily subject to quantity alone, but rather a function of prompt diversity while maintaining high quality responses.”

Translation:

The 2nd phase, alignment training, is particularly vulnerable to poisoning attacks, i.e. quality matters far more than quantity in the 2nd phase.

The 1st phase, language-model pretraining, is particularly vulnerable to censorship attacks, because 2nd-phase realignment essentially just trims down skills learned in the 1st phase and has relatively little ability to introduce sophisticated new abilities on its own if they were censored out of the 1st phase. I.e. quantity of skills may well matter more than quality in the 1st phase.

Paper
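The asymmetry above can be illustrated with a toy sketch (not the paper's code; the bigram "model" and all names here are purely hypothetical stand-ins): pretraining accumulates bigram counts from a large corpus, while "alignment" merely reweights counts that already exist, so it cannot conjure a skill the pretraining corpus never contained.

```python
from collections import Counter

def pretrain(corpus):
    """Phase 1: count word bigrams from a large corpus (the 'knowledge')."""
    counts = Counter()
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[(a, b)] += 1
    return counts

def align(counts, demos, boost=10):
    """Phase 2: reweight existing bigrams toward a few curated demos.
    Reweighting can only trim or amplify what pretraining already saw:
    a bigram absent from `counts` stays at zero."""
    tuned = Counter(counts)
    for demo in demos:
        words = demo.split()
        for a, b in zip(words, words[1:]):
            if (a, b) in counts:  # can boost known skills, not invent new ones
                tuned[(a, b)] *= boost
    return tuned

corpus = "the cat sat on the mat the dog sat on the rug"
demos = ["the cat sat", "the unicorn flew"]  # 2nd demo needs unseen skills

pretrained = pretrain(corpus)
aligned = align(pretrained, demos)
# ("the", "cat") gets boosted; ("unicorn", "flew") remains impossible,
# mirroring the claim that phase 2 cannot restore what phase 1 never learned.
```

A crude analogy, but it captures why censoring the pretraining corpus is the more damaging attack: no amount of high-quality alignment data can reintroduce the missing capability.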
πŸ‘14πŸ‘€5πŸ‘3❀2
MEMECAP: A Dataset for Captioning and Interpreting Memes

β€œWe present MEMECAP, the first meme captioning dataset. MEMECAP is challenging for the existing VL models, as it requires recognizing and interpreting visual metaphors, and ignoring the literal visual elements. The experimental results using state-of-the-art VL models indeed show that such models are still far from human performance. In particular, they tend to treat visual elements too literally and copy text from inside the meme.”

= Modern AIs still shockingly bad at understanding jokes, let alone creating them.

Though TBF: A shocking number of people also couldn’t properly explain a joke to save their lives.

Look at this, the paper’s own example of a good human explanation: β€œMeme poster finds it entertaining to read through long comment threads of arguments that happened in the past.” It itself totally fails to explain the most essential property of any joke: surprise.

The worst mistake of joke papers is failing to consider that randomly chosen human judges may themselves be objectively terrible at getting or explaining jokes.

Paper

Github
πŸ‘12❀4πŸ’―2πŸŽ‰1πŸ‘Œ1
Wait, actually, yes
😁37😭11❀2πŸ‘2πŸ’Š1