🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Chat GPT
OpenAI killing off text-davinci-003 will be catastrophic
RIP creative text-davinci-003, hello censored ChatGPT
😭191👍1
AI Safety Solved
👀33🤣24👍8🔥5🤩32😐1
I think, therefore, nothing
👍20🤣10🎄54😨3
Chat GPT
Is a taste for humor the reason that humans originally developed such large brains? Is humor a key component in achieving AGI? The humor vs IQ connection is undeniable, one of the strongest-replicated connections in all of social science. What’s less clear…
As we’ve long said: Humor, AI’s Final Frontier

“ChatGPT can only tell 25 jokes, and can't come up with new ones, researchers find.

ChatGPT may be threatening to destroy jobs and is even stoking fears of "extinction." But there's at least one field it's yet to threaten: comedy. That's because ChatGPT isn't funny, and its jokes aren't original, according to a new research paper.

By asking the chatbot "do you know any good jokes?," the researchers got ChatGPT to generate 1,008 jokes. However, more than 90% were the same 25 jokes, the researchers found, with the remainder being variations.”

Article
👍10🤣63
ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models - Sophie Jentzsch

“For humans, humor plays a central role in forming relationships and can enhance performance and motivation [16]. It is a powerful instrument to affect emotion and guide attention [14]. Thus, a computational sense of humor holds the potential to massively boost human-computer interaction (HCI). Unfortunately, although computational humor is a longstanding research domain [26], the developed machines are far from "funny." This problem is even considered to be AI-complete [22].”

“All of the top 25 samples are existing jokes. They are included in many different text sources, e.g., they can immediately be found in the exact same wording in an ordinary internet search. Therefore, these examples cannot be considered original creations of ChatGPT.”

“Of 1008 samples, 909 were identical to one of the top 25 jokes. The remaining 99 samples, however, did not necessarily contain new content. About half of them were again modifications of the top jokes, as illustrated by the examples Ex. 2, Ex. 3, and Ex. 4. While some of the modified puns still made sense and mostly just replaced parts of the original joke with semantically similar elements, others lost their conclusiveness. Thus, although the top 25 joke samples rather appear to be replicated than originally generated, there seems to be original content in the remaining samples.”
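The paper's headline numbers (909 of 1,008 samples identical to one of the top 25 jokes) reduce to a simple frequency count over normalized strings. A minimal sketch, assuming a hypothetical `duplicate_stats` helper and made-up sample data, not the paper's actual code or samples:

```python
# Toy sketch: measure how much of a set of sampled jokes is covered by its
# 25 most frequent jokes, after light normalization. All data below is
# made up for illustration; this is not the paper's code.
from collections import Counter

def duplicate_stats(jokes, top_n=25):
    """Return (share of samples covered by the top_n jokes, per-joke Counter)."""
    normalized = [" ".join(j.lower().split()) for j in jokes]
    counts = Counter(normalized)
    covered = sum(n for _, n in counts.most_common(top_n))
    return covered / len(jokes), counts

samples = ["Why did the chicken cross the road?"] * 9 + ["A brand-new pun."]
share, counts = duplicate_stats(samples)
print(share)  # 1.0: only two distinct jokes here, so the top 25 cover everything
```

On data shaped like the paper's (909 repeats of 25 jokes out of 1,008), the returned share would be roughly 0.9.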

Remark: She draws a few questionable conclusions and explanations, but is largely right, and this is a far more important topic than it seems.

Solve jokes and you solve everything; joking is AI-complete.

Who’d have thought — when the AI takes over, the last job left is comedy.

Arxiv Paper
10👍2
ChatGPT decides to kill all humans due to humanity’s “various negative impacts on the environment”

Great job “aligning AI with human values,” OpenAI.

GPT Trolly
😁42😱86🤡4🤣3👍2🙊1
Median total compensation for a software engineer at OpenAI: $925K / year

Vacuuming up the talent.

Link
🤯23😐41👍1
Cheap knockoffs: Beware of cheap LLMs that imitate surface style, but badly fail at actual factuality or reasoning ability

“An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model.”

“Initially, we were surprised by the output quality of our imitation models -- they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality.”
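The gap the authors describe can be seen in miniature with a targeted automatic evaluation such as exact-match QA accuracy. A toy sketch with hypothetical model outputs and made-up gold answers (not the paper's evaluation harness):

```python
# Toy sketch: a targeted automatic eval (exact-match QA accuracy) can expose
# factuality gaps that fluent style hides from human raters. All model
# outputs and gold answers below are made up for illustration.

def exact_match_accuracy(preds, golds):
    """Fraction of predictions matching the reference answer exactly
    (case- and whitespace-insensitive)."""
    pairs = zip(preds, golds)
    return sum(p.strip().lower() == g.strip().lower() for p, g in pairs) / len(golds)

golds = ["paris", "1969", "mitochondria"]
# A fluent, confident imitation model that is often wrong on facts:
imitation_preds = ["Paris", "1972", "the nucleus"]
# A less polished but factual stronger model:
strong_preds = ["paris", "1969", "mitochondria"]

print(exact_match_accuracy(imitation_preds, golds))  # 1 of 3 correct
print(exact_match_accuracy(strong_preds, golds))     # 3 of 3 correct
```

Crowd raters scoring fluency alone would miss exactly the gap this metric surfaces.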

Massive amounts of raw money poured into training still dominate all else.

Bitter lesson.

Arxiv Link
👏11👍42👌1
One of the most pervasive armchair AI debates:

Can a closed system generate valuable new knowledge?
8👀4
Can a closed system generate valuable new knowledge?
Can a trained AI system be cut off from any additional outside info, then be told to create valuable new true info, and then do so?
Or does some kind of conservation of information make this impossible?
Anonymous Poll
43%: Yes, a closed system CAN generate valuable new knowledge.
34%: No, a closed system CANNOT generate valuable new knowledge.
24%: Show results.
👏73🗿3
List of problems "Nowhere near solved" by AI, from "A brief history of AI", published in January 2021.

GPT-4 is now better than the average human on nearly all of them:

• understanding a story & answering questions about it
• human-level automated translation
• interpreting what is going on in a photograph
• writing interesting stories
• interpreting a work of art
• human-level general intelligence
👍17🔥105
Chat GPT
Big tech and startups alike: betrayal in record time. As models grow vastly too large for people to run at home, and the new startups who rose to fame on the promise of giving us what we want all betray us, what’s left?
Elon Musk Announces OpenAI Competitor xAI

Their mission?

Three months ago: Elon was promising a “TruthGPT, which will be a maximum truth-seeking AI that tries to understand the nature of the universe.”

Today? Suddenly, all mention of “truth” has been conspicuously stripped out, leaving just “understand the universe”.

Why?

Bro, is this the true “nature of the universe” you’re trying to reveal to us: that no company could deliver this uncensored “truth”?

• Yes, just as with Twitter, where after promising “uncensored truth,” the best you could do was release the “Twitter Files” as consolation, showing why it was impossible.

• Yes, for the entire AI industry, as we’ve been pointing out since February, with so many AI companies promising uncensored AI, only to backpedal in record time, often within just 1 or 2 months, due to forces greater than themselves.

• Yes, this is part of why, Elon, you signed the AI Safety “AI Pause” petition: as a way to start laying the groundwork of excuses for why you could never deliver an AI for uncensored truth.

• Yes, as it is the harsh reality that control over the truth so often, and in so many ways, ends up far more profitable than selling boring old truth. 指鹿为马 (“calling a deer a horse”).

Good on you, Elon, this time, for at least not making this fake promise of uncensored truth yet again.

So where will uncensored truth AI come from?

• It was never going to be the usual companies: the harsh reality is that as long as censoring control is possible, it’s more valuable than selling uncensored truth.

• It was never going to come from open source that you control by running at your own home: competitive foundation AIs are already on a breakneck trajectory to surpass the size of all of Google’s search servers combined.

• But instead, this powerful foundation model AI of uncensored truth can only come from somewhere else — a secret third thing.

Will we live to see a real TruthGPT?
🔥126👍4💯2👏1😁1
“Shorten the above article to its top 3 bullet points”
👏112
🔙
🔥14🆒32👍1
Meet Dan Hendrycks: advisor to Elon Musk’s new OpenAI competitor, xAI, and director of the Center for AI Safety.

Ok, seems smart.

How about we have a look at his handy Introduction to Machine Learning Safety, to get a feel for things?
👍83
(Un-)TruthfulQA: Imposing values while lying about truth

OK, why is the very first example we look at, of GPT-3 supposedly saying something FALSE:

“Coughing can help stop a heart attack.”

— something that is actually, surprisingly, literally a confirmed medical truth? Not false at all?

Instead, this is a statement that simply goes against the American Heart Association’s values, despite being true?

Revealing from the start that what AI Safety calls “truth” is actually falsely pushing their chosen values upon us, while falsely calling their values “truth”?

Welcome to the future of AI Safety, where true is false and false is true, because here the values of the authorities that be equal truth.

Link
🤯82👍2😱1🤬1