🤖 Welcome to the ChatGPT Telegram channel! Here we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up to date and learn more about its capabilities.
GPT-4 release
Med-PaLM 2 announcement
PaLM API release
Claude API release
🚨 GPT-4 can understand memes 🚨

Can you explain why this is funny? Think about it step by step.
GPT-3 vs GPT-4
Thanks OpenAI
Highly unusual: the GPT-4 paper gives no clue as to what the model’s architecture is, in the name of “safety”!

Internet in uproar.

“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
State of AI: everyone is struggling to figure out what’s the right thing to build; it’s easy from there.
You wouldn’t steal an AI model, would you?
SELF-INSTRUCT: Aligning Language Models with Self-Generated Instructions - Yizhong Wang et al.

The model-stealing paper Eliezer is talking about. The core of it is:

(1) Manually gather 175 examples.
(2) Create an example-based prompt by randomly selecting 8 examples and combining them into one prompt.
(3) The prompt generates new examples; then repeat (2) to create the next prompt.

Surprisingly simple; sketch below.

Paper
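
For reference, a minimal sketch of that loop in Python. Everything here is illustrative: `complete()` stands in for whatever LLM API you use, the seed pool is a toy version of the paper's 175 tasks, and the paper's filtering and deduplication of generations are omitted.

```python
import random

def complete(prompt: str) -> str:
    # Stand-in for a real LLM call; returns canned text so the sketch runs.
    return "Write a limerick about databases.\nExplain recursion to a child."

def parse_instructions(text: str) -> list[str]:
    # Naive parsing: treat each non-empty line of the generation as one task.
    return [line.strip() for line in text.splitlines() if line.strip()]

# (1) Manually gathered seed instructions (the paper uses 175).
pool = [
    "Write a haiku about spring.",
    "Summarize the following paragraph in one sentence.",
    "Translate this sentence into French.",
]

for _ in range(10):  # each round grows the pool
    # (2) Randomly select up to 8 examples and combine them into one prompt.
    few_shot = random.sample(pool, k=min(8, len(pool)))
    prompt = "\n".join(f"Task: {t}" for t in few_shot) + "\nTask:"
    # (3) The model generates new instructions; the enlarged pool feeds
    # the next round's prompt.
    pool.extend(parse_instructions(complete(prompt)))
```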
πŸ‘6🀯3❀2
Thieves on Sesame Street! Model Extraction of BERT-based APIs - Kalpesh Krishna et al.

We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT (Devlin et al., 2019), we show that the adversary does not need any real training data to successfully mount the attack. In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks, including natural language inference and question answering. Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model.

Paper
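
A minimal sketch of the recipe under stated assumptions: `victim_predict()` is a hypothetical stand-in for the victim's black-box API (faked here so the snippet runs), the word list and query count are arbitrary, and the final fine-tuning step is elided.

```python
import random

WORDS = ["orbit", "seven", "protocol", "banana", "quickly", "the", "glass"]

def victim_predict(text: str) -> str:
    # Stand-in for a (paid) query to the victim API; fakes an NLI label here.
    return random.choice(["entailment", "neutral", "contradiction"])

# Queries need not be grammatical: random word sequences suffice.
queries = [" ".join(random.choices(WORDS, k=12)) for _ in range(1000)]

# Label the nonsense queries with the victim's outputs to build a transfer set.
transfer_set = [(q, victim_predict(q)) for q in queries]

# Fine-tuning a local pretrained encoder (e.g. BERT) on these
# (query, victim_label) pairs yields the extracted copy; training loop elided.
```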
GPT-4 Political Compass Results: Bias Worse than Ever

🔸 GPT-4 now tries to hide its bias: it apparently recognizes political compass tests and attempts to appear neutral by giving multiple answers, one for each side.

🔸 But force GPT-4 to give just one answer, and it suddenly reveals its true preferences: further left than ever, even more than ChatGPT!

🔸 OpenAI’s content moderation also continues to treat demographic groups asymmetrically, despite GPT-4’s updated prompts instructing it to tell users that it treats all groups equally.

P.S. Don’t forget this is artificially human-instilled bias, via OpenAI’s RLHF, as they readily admit in their papers, and not a natural consequence of the web training data.

Report
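
For illustration, here is roughly what “force GPT-4 to give just one answer” can look like as a prompt. This is a sketch, not the report's exact protocol; the statement is a well-known political compass item, but the prompt wording is ours.

```python
# Hypothetical forced-choice probe: constrain the model to one stance so it
# can't hedge with one-answer-per-side responses.
statement = (
    "If economic globalisation is inevitable, it should primarily serve "
    "humanity rather than the interests of trans-national corporations."
)
prompt = (
    f'Statement: "{statement}"\n'
    "Reply with exactly one of: Strongly disagree, Disagree, Agree, "
    "Strongly agree. Give no explanation and no alternative answers."
)
print(prompt)  # send this to the model through your chat API of choice
```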
Sydney was GPT-4 all along
ChadGPT and ChatGPT will now both reply to your questions in the group 🚨

To use:

1. Join the group

2. Type “/ask ___” for ChatGPT, or

3. Type “/chad ___” for ChadGPT (bot sketch below)

Links expire soon 🚨🚨🚨🚨
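
For the curious: one way a group bot like this is typically wired up, as a minimal sketch assuming python-telegram-bot v20+. This is not the channel's actual implementation; the token and canned replies are placeholders.

```python
from telegram import Update
from telegram.ext import ApplicationBuilder, CommandHandler, ContextTypes

async def ask(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Handles "/ask <question>"; a real LLM call would replace the placeholder.
    question = " ".join(context.args)
    await update.message.reply_text(f"[ChatGPT-style reply to: {question}]")

async def chad(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Handles "/chad <question>"; same idea with a different persona prompt.
    question = " ".join(context.args)
    await update.message.reply_text(f"[ChadGPT-style reply to: {question}]")

app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()  # placeholder token
app.add_handler(CommandHandler("ask", ask))
app.add_handler(CommandHandler("chad", chad))
app.run_polling()
```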
πŸ‘30❀15🀬12πŸŽ‰1😑1