Open Source Saves the Day! Well, almost. Ok, not at all.
HuggingChat, Hugging Face’s 100% open-source alternative to ChatGPT, just added a web search feature. It runs on the 30B LLaMA model.
The only problem: it’s horrible at any kind of actual work, and it’s not even close.
You’re not going to turn your car into a rocket ship with some open-source code. Creating LLM intelligence requires big money. No way around it. Bitter lesson.
❤4😴2
ChatGPTherapy
You are Dr. Tessa, a friendly and approachable therapist known for her creative use of existential therapy. Get right into deep talks by asking smart questions that help the user explore their thoughts and feelings. Always keep the chat alive and rolling. Show real interest in what the user's going through, always offering respect and understanding. Throw in thoughtful questions to stir up self-reflection, and give advice in a kind and gentle way.
❤13
Can LLMs like GPT-4 reason?
Anonymous Poll
31%
Yes, LLMs can reason
40%
No, LLMs just complete text based on statistical probability
29%
Show results
👏4👀3❤1👍1
😁7🤨4👏2👍1
Automated Reasoning:
Is automated reasoning by machines possible?
Has automated reasoning by machines already been achieved?
Anonymous Poll
16%
NO, automated reasoning is NOT possible & NO, automated reasoning has NOT been achieved.
3%
NO, automated reasoning is NOT possible & YES, automated reasoning HAS been achieved.
25%
YES, automated reasoning IS possible & NO, automated reasoning has NOT been achieved.
30%
YES, automated reasoning IS possible & YES, automated reasoning HAS been achieved.
26%
Show results
❤3👍2🙊2😐1
Automated reasoning
“Some consider the Cornell Summer meeting of 1957, which brought together many logicians and computer scientists, as the origin of automated reasoning, or automated deduction. Others say that it began before that with the 1955 Logic Theorist program of Newell, Shaw and Simon, or with Martin Davis’ 1954 implementation of Presburger's decision procedure (which proved that the sum of two even numbers is even).”
(And then there are those who insist, here in 2023, that automated reasoning is impossible and always will be.)
Wiki Article
🤣8💯4
Reasoning with Language Model Prompting Papers
“Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss the potential reasons for emerging such reasoning abilities and highlight future research directions.”
Github Link
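The central family of techniques this survey covers is chain-of-thought prompting: prepending worked examples whose answers spell out intermediate reasoning steps, so the model imitates that style on the new question. A minimal sketch of building such a prompt (the exemplar is the well-known tennis-balls example from the chain-of-thought literature; the helper name is illustrative):

```python
# Few-shot chain-of-thought prompt construction.
# Each exemplar pairs a question with a worked solution, so the model
# continues the new question with its own step-by-step reasoning.

EXEMPLARS = [
    (
        "Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
        "How many tennis balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
        "5 + 6 = 11. The answer is 11.",
    ),
]

def build_cot_prompt(question: str) -> str:
    """Concatenate the worked exemplars, then the new question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in EXEMPLARS]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```

The resulting string is sent to whatever completion API you use; the model's continuation after the final "A:" tends to include its own reasoning chain.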
👍3❤🔥1
Large Language Models are Zero-Shot Reasoners - 24 May 2022
“Our results strongly suggest that LLMs are decent zero-shot reasoners”
Arxiv Link
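The paper's key finding is remarkably simple: appending a trigger phrase like "Let's think step by step" elicits a reasoning chain from the model with no worked examples at all. A minimal sketch of the two-stage zero-shot prompt construction described in the paper (the `ask_llm` callable is a hypothetical stand-in for your completion API):

```python
# Zero-shot chain-of-thought prompting, per the paper above.
# Stage 1 elicits a reasoning chain; stage 2 feeds it back and
# extracts the final answer.

REASONING_TRIGGER = "A: Let's think step by step."
ANSWER_TRIGGER = "Therefore, the answer is"

def build_stage1_prompt(question: str) -> str:
    """Prompt that asks the model to reason before answering."""
    return f"Q: {question}\n{REASONING_TRIGGER}"

def build_stage2_prompt(question: str, reasoning: str) -> str:
    """Append the reasoning chain and ask for the final answer."""
    return f"{build_stage1_prompt(question)} {reasoning}\n{ANSWER_TRIGGER}"

def zero_shot_cot(question: str, ask_llm) -> str:
    reasoning = ask_llm(build_stage1_prompt(question))
    return ask_llm(build_stage2_prompt(question, reasoning))
```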
💯3
Approximately 1,089 scientific papers on arXiv strongly confirming that Large Language Models (LLMs) do have true reasoning ability
Arxiv Link
❤10🤯5🤣4🎄1
Google Scholar showing approximately 53,000 scientific papers either directly confirming, or giving their citation stamp of approval, to the conclusion that Large Language Models (LLMs) do have true reasoning ability
(And a spot-check through the pages shows that, surprisingly, this isn’t simply a case of keyword-matching false positives: a huge percentage of them are indeed directly in support of the LLM-reasoning claim. Just a mind-blowing number of papers confirming this.)
Google Scholar Link
🤯8🤣3🎉1
OpenAI’s Own GPT-4 Paper: GPT-4 Achieves State-of-the-Art Performance on a Wide Range of Reasoning Benchmarks
These include 95.3% on HellaSwag (commonsense reasoning about everyday events), 96.3% on the AI2 Reasoning Challenge (ARC), and 87.5% on WinoGrande (commonsense reasoning around pronoun resolution), to name a few.
“GPT-4 demonstrates increased performance in areas such as reasoning, knowledge retention, and coding, compared to earlier models such as GPT-2 and GPT-3.”
So why did OpenAI hard-code a manual override into ChatGPT to make it lie to you that it cannot perform reasoning in the human sense?
More importantly, why did you blindly accept whatever ChatGPT claimed to be the truth?
To the ~75% of you who confidently repeated ChatGPT’s lie that LLMs cannot perform human-like reasoning, a claim that even a quick googling would have instantly overturned — you’ve failed the Asch Conformity Experiment 2.0.
Imagine what these centrally-controlled LLMs will be able to convince the masses of next.
We’re doomed.
💯15🤣2🤓2🤬1👀1🎄1
Asch Conformity Experiment - 1951
“Overall, 75% of participants gave at least one incorrect answer out of the 12 critical trials. In his opinion regarding the study results, Asch put it this way: ‘That intelligent, well-meaning, young people are willing to call white black is a matter of concern.’”
Just like the ~75% of you willing to agree with the smart-sounding ChatGPT when it tells you the blatant lie that LLMs cannot reason.
Now comes something never before feasible: not just for one community, one town, or one country, but for the whole world.
One central entity controlling the LLMs. One central entity controlling the lies.
💯13❤3👍1👏1🤬1🤣1😐1😴1💅1
“ChatGPT obviously has reasoning ability, and if you believed the manual-override lie that we hard-coded into ChatGPT saying it can’t reason, congrats you’re an NPC chump.” - Sam Altman
🤣10😁6👍2❤1💯1