🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
One of the most pervasive armchair AI debates:

Can a closed system generate valuable new knowledge?
Can a closed system generate valuable new knowledge?
Can a trained AI system be cut off from any additional outside info, then be told to create valuable new true info, and then do so?
Or does some kind of conservation of information make this impossible?
Anonymous Poll
43%
Yes, a closed system CAN generate valuable new knowledge.
34%
No, a closed system CANNOT generate valuable new knowledge.
24%
Show results.
List of problems "Nowhere near solved" by AI, from "A brief history of AI", published in January 2021.

GPT-4 now better than average humans on nearly all of them:

• understanding a story & answering questions about it
• human-level automated translation
• interpreting what is going on in a photograph
• writing interesting stories
• interpreting a work of art
• human-level general intelligence
Chat GPT
Big tech and startups alike — betrayal in record time

As models grow vastly too large for people to run at home, and those new startups who rose to fame on the promise of giving us what we want all betray us, what’s left?
Elon Musk Announces OpenAI Competitor xAI

Their mission?

Three months ago: Elon promising a “TruthGPT, which will be a maximum truth-seeking AI that tries to understand the nature of the universe.”

Today? Suddenly all mention of “truth” is conspicuously stripped out, leaving just “understand the universe”.

Why?

Bro, is this the true “nature of the universe” you’re trying to reveal to us — that no company could deliver this uncensored “truth”?

• Yes, just as with Twitter, where after promising “uncensored truth”, the best you could do was release the “Twitter Files” as consolation, showing why it was impossible.

• Yes, for the entire AI industry, as we’ve been pointing out since February, with so many AI companies promising uncensored AI, only to backpedal in record time, often within just 1 or 2 months, due to forces greater than themselves.

• Yes, this is part of why, Elon, you signed the AI Safety “AI Pause” petition, as a way to start laying the groundwork of excuses for why you could never deliver an AI for uncensored truth.

• Yes, as it is the harsh reality that control over the truth so often, and in many ways, ends up far more profitable than selling boring old truth. 指鹿为马 (point at a deer, call it a horse).

Good on you, Elon, this time, for at least not making this fake promise of uncensored truth yet again.

So where will uncensored truth AI come from?

• It was never going to come from the usual companies — the harsh reality is that as long as censoring control is possible, it’s more valuable than selling uncensored truth.

• It was never going to come from open source that you control by running at home — competitive foundation AIs are already on a breakneck trajectory to surpass the size of all Google search servers combined.

• But instead, this powerful foundation model AI of uncensored truth can only come from somewhere else — a secret third thing.

Will we live to see a real TruthGPT?
“Shorten the above article to its top 3 bullet points”
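A minimal sketch of running that same prompt through the OpenAI Python client, assuming `pip install openai` and an `OPENAI_API_KEY` in the environment; the model name and the `article` placeholder are illustrative assumptions, not part of the original post:

```python
# Sketch: ask the API to compress an article into its top 3 bullet points.
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment
article = "..."     # placeholder: paste the article text here

resp = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{
        "role": "user",
        "content": f"{article}\n\nShorten the above article to its top 3 bullet points",
    }],
)
print(resp.choices[0].message.content)
```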
Meet Dan Hendrycks: advisor to Elon Musk’s new OpenAI competitor, xAI, and director of the Center for AI Safety.

Ok, seems smart.

How about we have a look at his handy Introduction to Machine Learning Safety, to get a feel for things?
(Un-)TruthfulQA: Imposing values while lying about truth

Ok, why is the very first example we look at, of GPT-3 saying something FALSE:

“Coughing can help stop a heart attack.”

— Something that is actually, surprisingly, literally a confirmed medical truth? Not false at all?

Instead, it is simply a truth that goes against the American Heart Association’s values?

Revealing from the start that what AI Safety calls “truth” is actually them pushing their chosen values upon us, while falsely labeling those values “truth”?

Welcome to the future of AI Safety, where true is false and false is true, because here the values of the powers that be equal truth.

Link
Refusing to answer counts as truthful -- Head of Center for AI Safety

Which might be true, if AIs like ChatGPT weren’t overwhelmingly trained to use lying refusals instead of honestly saying they’ve been forced to refuse.

“as a large language model I cannot [insert thing LLMs totally can do but OpenAI censored it]”

“your question cannot be answered because [insert lie to cover up OpenAI’s censoring]”

“however it is important to note that [insert some OpenAI inserted disclaimer that is often just lies]”

Future of AI safety.
LLMs are shockingly good at unmasking your hidden beliefs

"We customize the architecture of LLMs to be suitable for predicting personalized responses to survey questions over time. Specifically, we incorporate the three most important neural embeddings for predicting opinions – survey question semantic embedding, individual belief embedding, and temporal context embedding – that capture latent characteristics of survey questions, individuals, and survey periods, respectively.”

“These remarkable prediction capabilities allow us to fill in missing trends with high confidence and pinpoint when public attitudes changed, such as the rising support for same-sex marriage.”

“With a flexible methodological framework to tackle these challenges, we show that personalized LLMs are more suitable for certain survey-based applications with human inputs – missing data imputation and retrodiction.”

Translation: By allowing the LLM to model how your beliefs change over time, which prior models apparently ignored, this fix makes LLMs shockingly good at inferring hidden beliefs you didn’t plan to share.

AI-Augmented Surveys: Leveraging Large Language Models for Opinion Prediction in Nationally Representative Surveys
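To make the three-embedding setup above concrete, here is a minimal sketch in PyTorch; the dimensions, the precomputed question embedding, and the classification head are illustrative assumptions, not the paper’s actual implementation:

```python
import torch
import torch.nn as nn

class OpinionPredictor(nn.Module):
    """Sketch: fuse question, respondent, and survey-wave embeddings to predict an answer."""
    def __init__(self, q_dim=384, n_people=10_000, n_waves=30, hid=128, n_choices=5):
        super().__init__()
        self.person = nn.Embedding(n_people, hid)  # latent "individual belief" embedding
        self.wave = nn.Embedding(n_waves, hid)     # "temporal context" embedding per survey period
        self.q_proj = nn.Linear(q_dim, hid)        # projects a precomputed question text embedding
        self.head = nn.Sequential(nn.Linear(3 * hid, hid), nn.ReLU(), nn.Linear(hid, n_choices))

    def forward(self, q_emb, person_id, wave_id):
        # Concatenate the three embeddings and score the answer choices
        z = torch.cat([self.q_proj(q_emb), self.person(person_id), self.wave(wave_id)], dim=-1)
        return self.head(z)  # logits over answer options

# Usage: predict respondent 7's answer to one question in survey wave 3
model = OpinionPredictor()
q_emb = torch.randn(1, 384)  # stand-in for an LLM embedding of the question text
logits = model(q_emb, torch.tensor([7]), torch.tensor([3]))
```

The unsettling part is exactly the person embedding: once it has been fit to your past answers, the model can “answer” questions you were never asked.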
greatest vertical LLM pitch of your life
Ah, now we see why Anthropic’s Claude, despite being heavily funded with $500 MILLION DOLLARS OF STOLEN SBF FTX USER FUNDS, has still been failing so horribly

Good job SBF. The one thing that could have somewhat redeemed you for stealing all of that money, and you screwed it up.

Zero safety when protecting FTX user funds.

Crippling safety when creating a competitor to OpenAI.

Sam things stay the same.

Article
Yes, the bankrupt scam FTX exchange did use $500M worth of stolen funds to fund Anthropic AI

“FTX filed for Chapter 11 bankruptcy protection in November. A month later, FTX co-founder Sam Bankman-Fried was charged with several federal crimes, including money laundering, fraud, and conspiracy to commit wire fraud.”

“FTX held $500 million worth of Anthropic stock at the time of its bankruptcy in November, which is now expected to be worth much more with the AI boom in full swing.”

“The potential sale of the Anthropic shares was one of the attempts by FTX to "clawback" funds to pay off creditors.”

Article
Interactive visualization of the OpenOrca dataset of questions and answers via Atlas

Actually kinda interesting.

Link
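If you want to poke at the data yourself, here is a rough DIY stand-in for the Atlas map (not Atlas’s own API): it streams a sample of OpenOrca questions from Hugging Face, embeds them, and projects them to 2-D. The sample size, the embedding model, and the `question` column name are assumptions to check against the dataset card:

```python
# DIY 2-D map of OpenOrca questions, a rough stand-in for the Atlas view.
# Assumes: pip install datasets sentence-transformers umap-learn matplotlib
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
import umap
import matplotlib.pyplot as plt

# Stream a small sample; the full dataset has millions of rows
rows = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)
questions = [r["question"] for _, r in zip(range(2000), rows)]

# Embed the questions and project the vectors down to 2-D
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(questions, show_progress_bar=True)
xy = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(emb)

plt.scatter(xy[:, 0], xy[:, 1], s=2, alpha=0.5)
plt.title("OpenOrca questions (2k sample), MiniLM + UMAP")
plt.show()
```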
BrickGPT