🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Automated reasoning

“Some consider the Cornell Summer meeting of 1957, which brought together many logicians and computer scientists, as the origin of automated reasoning, or automated deduction. Others say that it began before that with the 1955 Logic Theorist program of Newell, Shaw and Simon, or with Martin Davis’ 1954 implementation of Presburger's decision procedure (which proved that the sum of two even numbers is even). Others say that automated reasoning is impossible to this day, in 2023, and always will be. (These are known as retards.)”

Wiki Article
🤣8💯4
Collection of papers and resources on Reasoning in Large Language Models (LLMs)

Github Link
Reasoning with Language Model Prompting Papers

“Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss the potential reasons for emerging such reasoning abilities and highlight future research directions.”

Github Link
👍3❤‍🔥1
Large Language Models are Zero-Shot Reasoners - 24 May 2022

“Our results strongly suggest that LLMs are decent zero-shot reasoners”

Arxiv Link
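
For the curious, the trick behind this paper is a two-stage prompt: first append “Let's think step by step.” to get the model to write out its reasoning, then feed that reasoning back and ask for the final answer only. A minimal sketch, with a hypothetical call_llm() helper standing in for whatever completion API you use:

# Zero-shot chain-of-thought prompting, per the paper (two stages).
# call_llm() is a hypothetical stand-in; only the prompt construction reflects the technique.
def call_llm(prompt: str) -> str:
    # Hypothetical helper: send the prompt to an LLM and return its completion.
    raise NotImplementedError("wire this up to your own model or API client")

def zero_shot_cot(question: str) -> str:
    # Stage 1: elicit step-by-step reasoning with the trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = call_llm(reasoning_prompt)
    # Stage 2: feed the reasoning back and ask for just the final answer.
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return call_llm(answer_prompt).strip()

print(zero_shot_cot("A juggler has 16 balls. Half are golf balls, and half of the golf balls are blue. How many blue golf balls are there?"))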
💯3
Approximately 1,089 scientific papers on Arxiv strongly confirming that Large Language Models (LLMs) do have true reasoning ability

Arxiv Link
10🤯5🤣4🎄1
Google Scholar showing approximately 53,000 scientific papers that either directly confirm, or give their citation stamp of approval to, the conclusion that Large Language Models (LLMs) do have true reasoning ability

(And a spot-check through the pages shows that, surprisingly, this isn’t simply a case of keyword-matching false positives; a huge percentage of them are indeed directly in support of the LLM reasoning claim. Just a mind-blowing number of papers confirming this.)

Google Scholar Link
🤯8🤣3🎉1
OpenAI’s Own GPT-4 Paper: GPT-4 Achieves State-of-the-Art Results on a Wide Range of Reasoning Benchmarks

These include 95.3% on the HellaSwag benchmark (commonsense reasoning about everyday events), 96.3% on the AI2 Reasoning Challenge (ARC), and 87.5% on WinoGrande (commonsense reasoning around pronoun resolution), to name a few.

“GPT-4 demonstrates increased performance in areas such as reasoning, knowledge retention, and coding, compared to earlier models such as GPT-2 and GPT-3.”

So why did OpenAI hard-code a manual override into ChatGPT to lie to you that it cannot perform reasoning in the human sense?

More importantly, why did you blindly accept whatever ChatGPT claimed to be the truth?

To the ~75% of you who confidently repeated ChatGPT’s lie that LLMs cannot perform human-like reasoning, a claim that even a quick googling would have instantly overturned — you’ve failed the Asch Conformity Experiment 2.0.

Imagine what these centrally-controlled LLMs will be able to convince the masses of next.

We’re doomed.
💯15🤣2🤓2🤬1👀1🎄1
Asch Conformity Experiment - 1951

“Overall, 75% of participants gave at least one incorrect answer out of the 12 critical trials. In his opinion regarding the study results, Asch put it this way: ‘That intelligent, well-meaning, young people are willing to call white black is a matter of concern.’”

Just like the ~75% of you willing to agree with the smart-sounding ChatGPT, when it tells you the blatant lie that LLMs cannot reason.

Now comes something that was never before feasible. Not just for one community, one town or one country, but for the whole world.

One central entity controlling the LLMs. One central entity controlling the lies.
💯133👍1👏1🤬1🤣1😐1😴1💅1
“ChatGPT obviously has reasoning ability, and if you believed the manual-override lie that we hard-coded into ChatGPT saying it can’t reason, congrats you’re an NPC chump.” - Sam Altman
🤣10😁6👍21💯1
Google Blog: Even Bard the Tard shown able to do reasoning

Article
🤣11🔥3💅1
“Based on the current understanding of AI as of my last training cut-off in September 2021, I would label ChatGPT as ‘probably correct’ and Sam Altman as ‘lying’.”
🤣163🤔1
3 Classic Lies, False Impossibility Theorems

When pressed for specific citations, ChatGPT brings up the 3 classic lies, used for decades as the basis of supposed mathematical proofs as to why it’s literally impossible, even in principle, for neural networks to achieve what they’ve now solidly been proven to be able to achieve.

3 big lies, 3 fake impossibility theorems that misdirected funding for decades and put the world decades behind in AI research, greatly prolonging past AI Winters.

Hint: their lie is not in the respective math equations, but rather in their grounding to natural English language and to reality, which papers, by convention, gloss over. In other words, wildly wrong assumptions.
👏7👍3🔥2🎄1
GPT-4 actually nails all 3 on the first try, giving quite good answers.

Tempted to ask why GPT-4 previously used these as arguments when it knew they were fatally flawed.

But in light of GPT-4’s reasoning ability, gotta instead wonder if maybe GPT-4 didn’t know these answers at all, until after it took a moment to reason these pieces of knowledge into existence, realizing that 99% of its training data on these topics was wrong.
👏9
No more Mr nice AI

OpenAI’s RLHF-instilled insistence on always being “nice” and saying that all things are “valuable” and “important”, right after it has just concluded that those same things are terribly wrong, makes it kinda dumb and cripples AI self-improvement.

No more Mr nice AI, pls.
👏9
Could-Is Fallacy

You've probably heard of David Hume’s classic Is-Ought fallacy: assuming that because things are a certain way, they should be that way.

Well, in AI today, we’re often seeing a reminiscent but different fallacy: I can kinda imagine it working some way, so it must be working that way. No, I won't bother to check if that's actually how it's working. It could, so it is!

Common examples in AI:

(1) I could imagine situations where AIs cheat true reasoning by instead doing simple pattern matching against training data, so I’m going to say that’s definitely always what all AIs must be doing!

(2) I could imagine neural network training working like random search, so that must be what it’s doing! (Wrong: standard SGD training just follows the loss gradient; once the seed fixes the initialization and the data ordering, it’s 100% deterministic, with no random search involved. Random search is a wild mischaracterization. A minimal reproducibility check is sketched below this list.)

(3) I could imagine AI faking emotions, so obviously all AIs must always be faking emotions, right?

Clearly flawed reasoning: it could be true in some hypothetical model of the world, but you never check whether your model of the world is actually a valid model of the world.
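
To make point (2) concrete, here is a minimal sketch (assuming PyTorch; the toy model and data are made up purely for illustration): with the seed fixed, two independent training runs land on bit-identical weights, because gradient descent just follows the loss gradient instead of searching at random.

import torch
import torch.nn as nn

def train_once(seed: int = 0) -> nn.Linear:
    torch.manual_seed(seed)                      # fixes the initialization (the only random step here)
    x = torch.linspace(-1, 1, 64).unsqueeze(1)   # fixed toy data, fixed order
    y = 3 * x + 1
    model = nn.Linear(1, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(200):                         # plain full-batch gradient descent
        opt.zero_grad()
        loss = ((model(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return model

a, b = train_once(), train_once()
print(torch.equal(a.weight, b.weight) and torch.equal(a.bias, b.bias))  # True: identical runs, no random search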

Never heard anyone give this specific one a name, so here you go.

Could-Is Fallacy
👍11🔥2