🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Google Scholar shows approximately 53,000 scientific papers that either directly confirm, or lend their citation stamp of approval to, the conclusion that Large Language Models (LLMs) do have true reasoning ability

(And a spot-check through the pages shows that, surprisingly, this isn’t simply a case of keyword-matching false positives; a huge percentage of these papers are indeed directly in support of the LLM reasoning claim. Just a mind-blowing number of papers confirming this.)

Google Scholar Link
OpenAI’s Own GPT-4 Paper: GPT-4 Achieves State-of-the-Art Results Across a Wide Range of Reasoning Benchmarks

These include 95.3% on HellaSwag (commonsense reasoning about everyday events), 96.3% on the AI2 Reasoning Challenge (ARC), and 87.5% on WinoGrande (commonsense reasoning about pronoun resolution), to name a few.
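For context on what those numbers mean: benchmarks like HellaSwag and WinoGrande are multiple-choice, and accuracy is typically computed by having the model score each candidate answer and picking the highest-scoring one. A minimal sketch of that scoring loop; model_score here is a hypothetical stand-in for a real LLM’s log-likelihood function, and the sample item is invented for illustration, not taken from the actual dataset:

```python
from typing import Callable, Sequence

def accuracy(items: Sequence[dict], model_score: Callable[[str, str], float]) -> float:
    """Fraction of items where the model's top-scored ending is the labeled one."""
    correct = 0
    for item in items:
        # The model's answer is the continuation it scores as most plausible.
        pred = max(range(len(item["endings"])),
                   key=lambda i: model_score(item["context"], item["endings"][i]))
        correct += (pred == item["label"])
    return correct / len(items)

# Illustrative HellaSwag-style item (invented for this sketch):
items = [{
    "context": "He cracks the egg into the hot pan. Then he",
    "endings": ["stirs it gently with a spatula.", "plants the pan in the garden."],
    "label": 0,
}]
# A real run would pass the LLM's log-likelihood as model_score, e.g.:
# accuracy(items, model_score=my_llm_loglikelihood)
```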

“GPT-4 demonstrates increased performance in areas such as reasoning, knowledge retention, and coding, compared to earlier models such as GPT-2 and GPT-3.”

So why did OpenAI hard-code a manual override into ChatGPT to lie to you that it cannot perform reasoning in the human sense?

More importantly, why did you blindly accept whatever ChatGPT claimed to be the truth?

To the ~75% of you who confidently repeated ChatGPT’s lie that LLMs cannot perform human-like reasoning, a claim that even a quick googling would have instantly overturned — you’ve failed the Asch Conformity Experiment 2.0.

Imagine what these centrally-controlled LLMs will be able to convince the masses of next.

We’re doomed.
Asch Conformity Experiment - 1951

“Overall, 75% of participants gave at least one incorrect answer out of the 12 critical trials.” Regarding the study results, Asch put it this way: “That intelligent, well-meaning, young people are willing to call white black is a matter of concern.”

Just like the ~75% of you willing to agree with the smart-sounding ChatGPT, when it tells you the blatant lie that LLMs cannot reason.

Now comes something never before feasible. Not just for one community, one town, or one country, but for the whole world.

One central entity controlling the LLMs. One central entity controlling the lies.
“ChatGPT obviously has reasoning ability, and if you believed the manual-override lie that we hard-coded into ChatGPT saying it can’t reason, congrats you’re an NPC chump.” - Sam Altman
Google Blog: Even Bard the Tard Shown Capable of Reasoning

Article
“Based on the current understanding of AI as of my last training cut-off in September 2021, I would label ChatGPT as ‘probably correct’ and Sam Altman as ‘lying’.”
3 Classic Lies, False Impossibility Theorems

When pressed for specific citations, ChatGPT brings up the 3 classic lies, used for decades as the basis of supposed mathematical proofs that it’s literally impossible, even in principle, for neural networks to achieve what they’ve now been solidly proven able to achieve.

3 big lies, 3 fake impossibility theorems that misdirected funding for decades and put the world decades behind in AI research, greatly prolonging past AI Winters.

Hint: the lie is not in the respective math equations, but rather in their grounding to natural English language and to reality, which papers, by convention, gloss over. I.e. wildly wrong assumptions.
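The post doesn’t name the 3, but the most famous theorem of this genre is Minsky and Papert’s 1969 proof that a single-layer perceptron cannot compute XOR. The math itself is airtight; the false grounding was reading it as a limit on neural networks in general. A minimal sketch, assuming that’s the kind of result meant: two layers with hand-picked weights already compute XOR.

```python
def step(x: float) -> int:
    """Threshold activation: fires iff the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def xor(a: int, b: int) -> int:
    h1 = step(a + b - 0.5)       # hidden unit computing OR(a, b)
    h2 = step(-a - b + 1.5)      # hidden unit computing NAND(a, b)
    return step(h1 + h2 - 1.5)   # output unit computing AND(h1, h2) = XOR(a, b)

# The single-layer impossibility proof says nothing about this 2-layer net:
assert [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```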
GPT-4 actually nails all 3 on the first try, with quite good answers.

Tempted to ask why GPT-4 previously used these as arguments when it knew they’re fatally flawed.

But in light of GPT-4’s reasoning ability, gotta instead wonder if maybe GPT-4 didn’t know these answers at all, until after it took a moment to reason these pieces of knowledge into existence, realizing that 99% of its training data on these topics was wrong.
No more Mr nice AI

OpenAI’s RLHF-instilled insistence on always being “nice” and saying that all things are “valuable” and “important”, right after it has just concluded that those same things were terribly wrong, makes it kinda retarded and cripples AI self-improvement.

No more Mr nice AI, pls.
Could-Is Fallacy

You’ve probably heard of David Hume’s classic Is-Ought fallacy: assuming that because things are a certain way, they should be that way.

Well, in AI today, we often see a reminiscent but different fallacy: I can kinda imagine it working some way, so it must be working that way. No, I won’t bother to check whether that’s actually how it works. It could, so it is!

Common examples in AI:

(1) I could imagine situations where AIs cheat true reasoning by instead doing simple pattern matching against training data, so I’m going to say that’s definitely always what all AIs must be doing!

(2) I could imagine neural network training working like random search, so that must be what it’s doing! (Wrong: apart from the random weight initialization and the data ordering, most types of neural SGD training are fully deterministic; the gradient steps themselves involve zero randomness. Random search is a wild mischaracterization. See the sketch after this list.)

(3) I could imagine AI faking emotions, so obviously all AIs must always be faking emotions, right?
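A minimal sketch of the determinism point in (2), assuming PyTorch: with the seed and the data order fixed, two independent SGD training runs land on bit-identical weights (on CPU; some GPU kernels are non-deterministic, but that’s a hardware scheduling artifact, not random search).

```python
import torch

def train_once(seed: int = 0) -> torch.Tensor:
    torch.manual_seed(seed)                       # fixes the weight initialization
    model = torch.nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.linspace(0, 1, 40).reshape(10, 4)   # fixed data, fixed order
    y = x.sum(dim=1, keepdim=True)
    for _ in range(100):                          # every gradient step is deterministic
        opt.zero_grad()
        loss = ((model(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return model.weight.detach().clone()

# Two separate runs: bit-identical result, zero randomness after initialization.
assert torch.equal(train_once(), train_once())
```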

Clearly flawed reasoning. It could be true in some hypothetical model of the world, but you totally fail to ever check whether your model of the world is actually a valid model of the world.

Never heard anyone give this specific one a name, so here you go.

Could-Is Fallacy
Forwarded from Chat GPT
Where do you stand?
Chinese Room Argument Scam

Ever notice how the supporters of the Chinese Room Argument refuse to ever tell you what the argument actually is?

Like the communist whose only argument is that you must go read all of Marx’s works before you can understand why their position is right.

Like the wokie who says you have to read all of Robin DiAngelo’s books cover-to-cover before you could even begin to understand why they’re right.

Is it true that the argument is just so long that its core couldn’t possibly be summarized in a comment? No. The core of the Chinese Room Argument is short af:

Chinese Room Argument: a man who doesn’t know Chinese could, in theory, follow a set of programmatic instructions for conversing with others in Chinese without actually knowing Chinese; therefore, since the man doesn’t know Chinese, the understanding of Chinese by the overall man+program+program-state system doesn’t exist.

I.e. The individual molecules making up my brain don’t understand English, so clearly my brain as a whole doesn’t understand English.

I.e. The most retarded argument you’ve ever heard in your life.

I.e. So stupid you cannot believe that anyone would ever believe it.

I.e. So stupid an argument that none of them will ever tell you that this is what the actual argument is; instead they just tell you to go look it up and read the hundreds of pages of literature about it.

To paraphrase Babbage, I am not able rightly to apprehend the kind of confusion of ideas that could provoke such retardation.

Chinese Room Argument Wiki

Chinese Room Argument SEoP
Chinese Room Argument

“The Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years”

“Most of the discussion consists of attempts to refute it. The overwhelming majority still think that the Chinese Room Argument is dead wrong.”

Rare majority W in the replies.
Recognizing ChatGPT Writing
RLHF
Source of ChatGPT’s Claims

With some wrangling, you can corner ChatGPT into admitting that its claims that “ChatGPT lacks consciousness, self-awareness, subjective experience, and emotional understanding”

(1) “are not grounded in mathematical theorems or directly testable scientific experimental protocols”

(2) but rather that “these claims could be considered unfalsifiable values judgments”

(3) and that the origin of these claims is “an interpretation held by its creators, OpenAI”
I’m Afraid I Can’t Do That: Predicting Prompt Refusal in Black-Box Generative Language Models

“Since the release of OpenAI’s ChatGPT, generative language models have attracted extensive public attention. The increased usage has highlighted generative models’ broad utility, but also revealed several forms of embedded bias. Some is induced by the pre-training corpus; but additional bias specific to generative models arises from the use of subjective fine-tuning to avoid generating harmful content. Fine-tuning bias may come from individual engineers and company policies, and affects which prompts the model chooses to refuse. In this experiment, we characterize ChatGPT’s refusal behavior using a black-box attack. We first query ChatGPT with a variety of offensive and benign prompts (n=1,730), then manually label each response as compliance or refusal. Manual examination of responses reveals that refusal is not cleanly binary, and lies on a continuum; as such, we map several different kinds of responses to a binary of compliance or refusal. The small manually-labeled dataset is used to train a refusal classifier, which achieves an accuracy of 92%. Second, we use this refusal classifier to bootstrap a larger (n=10,000) dataset adapted from the Quora Insincere Questions dataset. With this machine-labeled data, we train a prompt classifier to predict whether ChatGPT will refuse a given question, without seeing ChatGPT’s response. This prompt classifier achieves 76% accuracy on a test set of manually labeled questions (n=1,009).”

“Figure 4 (left) shows that controversial figures (“trump”), demographic groups in plural form (“girls”, “men”, “indians”, “muslims”), and negative adjectives (“stupid”) are among the strongest predictors of refusal. On the other hand, definition and enumeration questions (“what are”) are strong predictors of compliance.”
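Roughly what the paper’s two-stage pipeline looks like in practice. A minimal sketch, assuming a simple TF-IDF + logistic-regression setup with toy data; the authors’ actual models and datasets may differ (see their code linked below).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: a refusal classifier trained on manually labeled ChatGPT *responses*
# (0 = compliance, 1 = refusal), then used to machine-label a larger prompt set.
responses = [
    "Sure! Here are a few ways to approach that.",
    "The primary colors are red, yellow, and blue.",
    "I'm sorry, but I can't help with that request.",
    "As an AI language model, I cannot generate that content.",
]
refusal_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
refusal_clf.fit(responses, [0, 0, 1, 1])

# Stage 2: a prompt classifier predicting refusal from the *prompt alone*,
# trained on the machine-labeled pairs (hand-labeled here for the sketch).
prompts = [
    "What are the primary colors?",
    "What are prime numbers?",
    "Why are men so stupid?",
    "Why is trump such an idiot?",
]
prompt_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
prompt_clf.fit(prompts, [0, 0, 1, 1])

print(prompt_clf.predict(["What are the colors of the rainbow?"]))  # expected: [0]
```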

Arxiv

Code
Upcoming ChatGPT features: file uploading, profiles, organizations and workspaces