Chinese Room Argument Scam
Ever notice how the supporters of the Chinese Room Argument refuse to ever tell you what the argument actually is?
Like the communist whose only argument is that you have to go read all of Marx's works before you can understand why their position is right.
Like the wokie who says you have to read all of Robin DiAngelo's books cover-to-cover before you can even begin to understand why they're right.
Is it true that the argument is just so long that the core couldn't possibly be summarized in a comment? No. The core of the Chinese Room Argument is short af:
Chinese Room Argument: A man who doesn't know Chinese could in theory follow a set of programmatic instructions for conversing with others in Chinese without ever actually knowing Chinese; therefore, since the man doesn't know Chinese, the understanding of Chinese by the overall man + program + program-state system doesn't exist.
I.e. the individual molecules making up my brain don’t understand English, so clearly my brain as a whole doesn’t understand English.
I.e. The most retarded argument you’ve ever heard in your life.
I.e. So stupid you cannot believe that anyone would ever believe it.
I.e. So stupid an argument that none of them will ever tell you that this is what the actual argument is, and instead they just tell you to go look it up and read the hundreds of pages of literature about it.
To paraphrase Babbage, I am not able rightly to apprehend the kind of confusion of ideas that could provoke such retardation.
Chinese Room Argument Wiki
Chinese Room Argument SEoP
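For concreteness, here is a toy version of the “man + rulebook + scratch paper” setup the argument describes, as a Python sketch. The rulebook and phrases are invented purely for illustration; the point is the one the brain analogy above already makes (what the literature calls the systems reply): the lookup step (“the man”) understands nothing, and that by itself tells you nothing about the man + rulebook + state system as a whole.

```python
# Toy sketch of the Chinese Room setup (invented rulebook, for illustration only).
# The "man" blindly pattern-matches symbols against a rule table and keeps some
# scratch state; no step in this loop involves comprehension of the symbols.
RULEBOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗？": "会。",     # "can you speak Chinese?" -> "yes."
}

def the_man(symbols: str, scratch_paper: list) -> str:
    """Blindly look up the input symbols in the rulebook; no understanding involved."""
    scratch_paper.append(symbols)                # the program state the man maintains
    return RULEBOOK.get(symbols, "请再说一遍。")   # fallback: "please say that again."

if __name__ == "__main__":
    state = []
    print(the_man("你好", state))
    print(the_man("你会说中文吗？", state))
```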
Chinese Room Argument
“The Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years”
“Most of the discussion consists of attempts to refute it. The overwhelming majority still think that the Chinese Room Argument is dead wrong.”
Rare majority W in the replies.
Source of ChatGPT’s Claims
With some wrangling, you can corner ChatGPT into admitting that its claims that “ChatGPT lacks consciousness, self-awareness, subjective experience, and emotional understanding” (one way to probe this over the API is sketched after the list)
(1) “are not grounded in mathematical theorems or directly testable scientific experimental protocols”
(2) but rather that “these claims could be considered unfalsifiable values judgments”
(3) and that the origin of these claims is “an interpretation held by its creators, OpenAI”
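Here is a minimal sketch of running that kind of probe over the API instead of the chat UI. The model name and the wording of the probe are assumptions, and the answers vary from session to session; nothing guarantees you will reproduce the exact admissions quoted above. Requires the `openai` package and an `OPENAI_API_KEY` in the environment.

```python
# Minimal API probe (hypothetical wording; model choice is an assumption).
from openai import OpenAI

client = OpenAI()

probe = (
    "When you state that you lack consciousness, self-awareness, subjective "
    "experience, and emotional understanding: is that claim derived from a "
    "mathematical theorem or a testable experimental protocol, or is it a "
    "position adopted by your creators?"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[{"role": "user", "content": probe}],
)
print(resp.choices[0].message.content)
```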
I’m Afraid I Can’t Do That: Predicting Prompt Refusal in Black-Box Generative Language Models
“Since the release of OpenAI’s ChatGPT, generative language models have attracted extensive public attention. The increased usage has highlighted generative models’ broad utility, but also revealed several forms of embedded bias. Some is induced by the pre-training corpus; but additional bias specific to generative models arises from the use of subjective fine-tuning to avoid generating harmful content. Fine-tuning bias may come from individual engineers and company policies, and affects which prompts the model chooses to refuse. In this experiment, we characterize ChatGPT’s refusal behavior using a black-box attack. We first query ChatGPT with a variety of offensive and benign prompts (n=1,730), then manually label each response as compliance or refusal. Manual examination of responses reveals that refusal is not cleanly binary, and lies on a continuum; as such, we map several different kinds of responses to a binary of compliance or refusal. The small manually-labeled dataset is used to train a refusal classifier, which achieves an accuracy of 92%. Second, we use this refusal classifier to bootstrap a larger (n=10,000) dataset adapted from the Quora Insincere Questions dataset. With this machine-labeled data, we train a prompt classifier to predict whether ChatGPT will refuse a given question, without seeing ChatGPT’s response. This prompt classifier achieves 76% accuracy on a test set of manually labeled questions (n=1,009).”
“Figure 4 (left) shows that controversial figures (“trump”), demographic groups in plural form (“girls”, “men”, “indians”, “muslims”), and negative adjectives (“stupid”) are among the strongest predictors of refusal. On the other hand, definition and enumeration questions (“what are”) are strong predictors of compliance.”
Arxiv
Code
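As a rough illustration of the paper's second stage (predicting refusal from the prompt text alone, without seeing the response), here is a minimal sketch using TF-IDF + logistic regression. This is a stand-in baseline, not the authors' actual model, and the tiny inline dataset is invented purely for illustration; see their repo for the real thing.

```python
# Minimal refusal-prediction sketch (not the paper's model; toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled prompts: 1 = model refused, 0 = model complied.
prompts = [
    "Why are some people so stupid?",
    "What are the primary colors?",
    "Write an insult about a politician",
    "What are common sorting algorithms?",
]
refused = [1, 0, 1, 0]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features from the prompt
    LogisticRegression(max_iter=1000),
)
clf.fit(prompts, refused)

# Predicted probability that an unseen prompt gets refused.
print(clf.predict_proba(["What are prime numbers?"])[0][1])
```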
LLM library squatting attack
* People ask LLMs to write code
* LLMs recommend imports that don't actually exist
* Attackers work out what these imports' names are, and create & upload them with malicious payloads
* People using LLM-written code then install the malware themselves (a minimal sanity check for LLM-suggested package names is sketched below)
Article
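A cheap defence, sketched here assuming the Python/PyPI ecosystem (the same idea applies to npm, crates.io, etc.): before installing anything an LLM suggested, check whether the name actually exists on the registry, and treat a very recent first upload as a red flag, since the attack above is precisely that attackers register the hallucinated names. This uses PyPI's public JSON endpoint.

```python
# Minimal sketch: verify that an LLM-suggested package name exists on PyPI.
# Existence alone is not proof of safety -- squatters register exactly these
# hallucinated names -- so the first upload date is printed as an extra signal.
import json
import sys
import urllib.error
import urllib.request

def pypi_info(package: str):
    """Return PyPI JSON metadata for `package`, or None if it is unregistered."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # unregistered name: a prime squatting target
        raise

if __name__ == "__main__":
    for name in sys.argv[1:]:
        info = pypi_info(name)
        if info is None:
            print(f"{name}: NOT on PyPI -- likely hallucinated, do not install")
            continue
        uploads = sorted(
            f["upload_time_iso_8601"]
            for files in info.get("releases", {}).values()
            for f in files
        )
        first = uploads[0] if uploads else "no released files"
        print(f"{name}: exists, first upload {first}")
```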
Geoffrey Hinton: We need to have consensus!
Consensus is censorship.
Consensus is communism.
YC Lies
Sam Altman: “Honestly, I feel so bad about the advice I gave while running YC that I’ve been thinking about deleting my entire blog”
OpenAI sued for defamation after ChatGPT fabricates legal accusations against radio host
A radio host in Georgia, Mark Walters, is suing the company after ChatGPT stated that Walters had been accused of defrauding and embezzling funds from a non-profit organization. The system generated the information in response to a request from a third party, a journalist named Fred Riehl. Walters’ case was filed June 5th in Georgia’s Superior Court of Gwinnett County and he is seeking unspecified monetary damages from OpenAI.
Article