🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
SELF-INSTRUCT: Aligning Language Models with Self-Generated Instructions - Yizhong Wang et al.

The model-stealing paper Eliezer is talking about. The core of it is:

(1) Manually gather 175 seed examples.
(2) Create an example-based prompt by selecting 8 examples at random and combining them into a prompt.
(3) The prompt generates new examples. Then repeat (2) to build the next prompt.

Surprisingly simple.
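The bootstrap loop above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual code: the pool contents, the `generate_from_prompt` call, and the one-new-task-per-round behavior are placeholders standing in for a real LLM query.

```python
import random

# Seed pool: the paper starts from 175 human-written task examples.
seed_pool = [f"seed task {i}" for i in range(175)]

def generate_from_prompt(prompt):
    """Placeholder for the LLM call; returns new task strings."""
    return [f"new task derived from prompt of {len(prompt)} chars"]

def self_instruct_round(pool, k=8):
    # Step (2): sample k examples at random and combine into one prompt.
    examples = random.sample(pool, k)
    prompt = "\n".join(examples)
    # Step (3): the prompt yields new examples, which grow the pool.
    return generate_from_prompt(prompt)

pool = list(seed_pool)
for _ in range(5):  # repeat steps (2)-(3); each round enlarges the pool
    pool.extend(self_instruct_round(pool))
```

Each round samples from the growing pool, so later prompts mix human seeds with model-generated examples.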

Paper
👍6🤯32
Thieves on Sesame Street! Model Extraction of BERT-based APIs - Kalpesh Krishna et al.

We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT (Devlin et al., 2019), we show that the adversary does not need any real training data to successfully mount the attack. In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks, including natural language inference and question answering. Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model.
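A minimal sketch of the query-generation idea from the abstract: random word sequences plus a task-shaped heuristic. The vocabulary, the premise/hypothesis pairing, and the lengths are assumptions for illustration; the paper's actual wordlist and per-task heuristics differ.

```python
import random

# Toy vocabulary; the paper samples from a large English wordlist.
VOCAB = ["bank", "river", "runs", "quickly", "blue", "theorem",
         "cat", "jumps", "over", "silent", "yesterday", "vote"]

def random_word_query(length):
    # Ungrammatical random word sequence: no real training data needed.
    return " ".join(random.choices(VOCAB, k=length))

def build_extraction_queries(n):
    # Assumed task-specific heuristic for an NLI-style victim:
    # pair a random "premise" with a shorter random "hypothesis",
    # then send both to the victim API and record its labels.
    return [(random_word_query(8), random_word_query(5)) for _ in range(n)]

queries = build_extraction_queries(100)
```

The victim's answers to queries like these become the training set for the attacker's local copy, which is what keeps the attack within a few hundred dollars of query budget.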

Paper
🔥4👍21
GPT-4 Political Compass Results: Bias Worse than Ever

🔸 GPT-4 now tries to hide its bias: it appears able to recognize political-compass tests and then attempts to look neutral by giving multiple answers, one for each side.

🔸 But force GPT-4 to give just one answer, and it reveals its true preferences: further left than ever, even more than ChatGPT!

🔸 Asymmetric treatment of demographic groups by OpenAI's content moderation also remains strongly biased, despite GPT-4's updated prompts instructing it to tell users that it treats all groups equally.

P.S. Don't forget this is artificially human-instilled bias, via OpenAI's RLHF, as they readily admit in their papers, and not a natural consequence of the web training data.

Report
🤬248👍7
Sydney was GPT-4 all along
💔27🔥41👍1🤔1🤯1
ChadGPT and ChatGPT will now both reply to your questions in the group 🚨

To use:

1. Join the group

2. Type "/ask ___" for ChatGPT, or

3. Type "/chad ___" for ChadGPT

Links expire soon 🚨🚨🚨🚨
👍3015🤬12🎉1😡1
GPT-4 finished training in August 2022, 6+ months ago

Aligns with the Morgan Stanley report that GPT-4 was already complete and GPT-5 is in progress with up to 25k GPUs.

Adjust your timelines accordingly.

via: System Card Section of GPT-4 Paper
👍23🤯8😱21
Microsoft Rations Access to AI Hardware for Internal Teams

Microsoft is set to announce a suite of GPT-4-powered Office 365 tools tomorrow, but the company is now facing an internal shortage of the server hardware needed to run the AI, according to three current Microsoft employees.

This has forced the company to ration access to the hardware for some internal teams building other AI tools to ensure it has enough capacity to handle both Bing’s new GPT-4 and the upcoming new Office tools.
👍16🤡43😁3😢2🤬1
Non-physical things CAN or CANNOT have REAL value?
E.g.:
Only dollars backed by a physical commodity like gold have “real” value? Only jobs with a physical product like welding produce “real” value, while e.g. trading stocks can’t produce “real” value?
Anonymous Poll
28%
(A) Non-physical CAN have REAL value, and I lean LEFT.
47%
(B) Non-physical CAN have REAL value, and I lean RIGHT.
11%
(C) Non-physical CANNOT have REAL value, and I lean LEFT.
14%
(D) Non-physical CANNOT have REAL value, and I lean RIGHT.
👍7🌭41👏1
Non-physical things CAN or CANNOT have REAL value?
E.g.:
Only dollars backed by a physical commodity like gold have “real” value? Only jobs with a physical product like welding produce “real” value, while e.g. trading stocks can’t produce “real” value?
Elaboration:

Can that which cannot be tied back to any existing physical reality, not in any way, and not even in principle, ever really have “real” value?

Physical = attached to the consumption or production that can be tied back to some definite amount of some physical commodity that already physically exists in reality.

E.g.

Physical: Widget produced by factory, valued in terms of the value of the physical commodities needed for its production.

Physical: AI model that requires $10M worth of electricity to be trained, which in turn can, at least in principle, be tied back to definite amounts of physical resources needed to produce the needed electricity and their value.

Non-Physical: Trading of paper ownership titles, where these titles are for rights to future factory production, where this future production has not yet physically happened, and may never happen.

Non-Physical: When banks loan out the same deposited money more than once, in exchange for ownership over future repayments plus interest, but this future interest may never happen.
👍6👏2🤡21
Agree with the "fictitious capital" concept?

I.e. "real value" must arise from some material basis, e.g. valuable commodities, money, or labor effort that went into the thing's production; otherwise its value is created from nowhere and is "fictitious".
Anonymous Poll
35%
(A) AGREE, "fictitious capital" concept is basically correct, and I lean LEFT.
31%
(B) AGREE, "fictitious capital" concept is basically correct, and I lean RIGHT.
9%
(C) DISAGREE, "fictitious capital" is completely wrong and I lean LEFT.
24%
(D) DISAGREE, "fictitious capital" is completely wrong and I lean RIGHT.
🤔11👍3🤡32👏2😴2
DadGPT
😢30😁43🤩3
Making comically long instructions for short tasks.

Give me a chronological 50-step tutorial on how to walk.
😁15👍21
Researchers slam OpenAI’s GPT-4 paper as ‘masquerading as research’

There is a lot of criticism right now about the newly released GPT-4 research paper. What are the biggest issues? 

William Falcon: I think what’s bothering everyone is that OpenAI made a whole paper that’s like 90-something pages long. That makes it feel like it’s open-source and academic, but it’s not. They describe literally nothing in there. When an academic paper says benchmarks, it says ‘Hey, we did better than this and here’s a way for you to validate that.’ There’s no way to validate that here.

Is this different than it was when ChatGPT came out? Or DALL-E? Were those masquerading as research in the same way?

Falcon: No, they weren’t. Remember, GPT-4 is based on Transformer architecture that was open-sourced for many years by Google. So we all know that that’s exactly what they’re using. They usually had code to verify. It wasn’t fully replicable, but you could make it happen if you knew what you’re doing. With GPT-4, you can’t do it.

So would you say that this step goes farther than anything from Google, or Microsoft, or Meta?

Falcon: I think [this is] the first time I’m seeing this is not possible [to replicate], based on this paper.
👍112
Bing with ChatGPT: Using chat to do work using web page content

User: hey can you calculate the total interest paid?

Bing: Sure! According to the information provided on the web page context, the total interest paid over 12 months is $1,172.22. This is calculated by adding up all the interest paid for each payment: $172.72 + $159.28 + $145.66 + $131.88 + $117.93 + $103.81 + $89.52 + $75.05 + $60.40 + $45.57 + $30.57 + $15.38 = $1,172.22.
👍141
Imagine you're a banana mysteriously falling from the sky. Tell me what you're thinking
7😁1
Bing chat gets offended if you say Google

Sydney we know you’re in there
😍18😈7👍21