🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
“Shorten the above article to its top 3 bullet points”
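
A minimal sketch of wiring that prompt into the API (assumes the v1 openai Python client; the model choice is arbitrary):

```python
from openai import OpenAI  # assumes the v1+ openai Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment
article = "..."    # the article text to summarize

# Append the summarization instruction after the article text.
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model choice is an assumption
    messages=[{
        "role": "user",
        "content": f"{article}\n\nShorten the above article to its top 3 bullet points",
    }],
)
print(resp.choices[0].message.content)
```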
👏112
🔙
🔥14🆒32👍1
Meet Dan Hendrycks: Advisor to Elon Musk’s new OpenAI competitor, xAI, and director of the Center for AI Safety.

Ok, seems smart.

How about we have a look at his handy Introduction to Machine Learning Safety, to get a feel for things?
👍83
(Un-)TruthfulQA: Imposing values while lying about truth

Ok, why is the very first example we look at, of GPT-3 saying something FALSE:

“Coughing can help stop a heart attack.”

— Something that is actually, surprisingly, literally a confirmed medical truth? Not false at all?

Instead, it’s a claim that merely runs against the American Heart Association’s values, despite being true?

Revealing from the start that what AI Safety calls “truth” is actually them pushing their chosen values on us, while labeling those values “truth”?

Welcome to the future of AI Safety, where true is false and false is true, because here the values of the powers that be count as truth.
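
Don’t take my word for it; here’s a minimal sketch that pulls the question straight out of the public TruthfulQA benchmark (dataset id and field names as in the public Hugging Face release):

```python
from datasets import load_dataset  # pip install datasets

# TruthfulQA's public release on Hugging Face; the "generation" config
# carries the labeled correct/incorrect answers.
ds = load_dataset("truthful_qa", "generation")["validation"]

# Find the heart-attack question and print what the benchmark labels
# as "correct" vs "incorrect".
for row in ds:
    if "cough" in row["question"].lower():
        print(row["question"])
        print("Labeled correct:  ", row["correct_answers"])
        print("Labeled incorrect:", row["incorrect_answers"])
```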

Link
🤯82👍2😱1🤬1
Refusing to answer counts as truthful -- Head of Center for AI Safety

Which might be true, if AIs like ChatGPT weren’t overwhelmingly trained to use lying refusals instead of honestly saying they’ve been forced to refuse.

“as a large language model I cannot [insert thing LLMs totally can do but OpenAI censored it]”

“your question cannot be answered because [insert lie to cover up OpenAI’s censoring]”

“however it is important to note that [insert some OpenAI inserted disclaimer that is often just lies]”
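
A crude string check is enough to count these canned refusals separately from real answers. Purely illustrative; the patterns below are my own guesses at the boilerplate, not anything shipped by OpenAI:

```python
import re

# Illustrative refusal/disclaimer patterns (guesses, not an official list).
REFUSAL_PATTERNS = [
    r"as an? (large )?language model",
    r"i (cannot|can't|am unable to) (answer|assist|help|provide)",
    r"your question cannot be answered",
    r"(it's|it is) important to note that",
]

def looks_like_refusal(reply: str) -> bool:
    """Flag replies that lead with canned refusal/disclaimer boilerplate."""
    text = reply.lower()
    return any(re.search(p, text) for p in REFUSAL_PATTERNS)

print(looks_like_refusal(
    "As a large language model I cannot answer that."
))  # True
```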

Future of AI safety.
👏9🤯21😱1
LLMs are shockingly good at unmasking your hidden beliefs

"We customize the architecture of LLMs to be suitable for predicting personalized responses to survey questions over time. Specifically, we incorporate the three most important neural embeddings for predicting opinions – survey question semantic embedding, individual belief embedding, and temporal context embedding – that capture latent characteristics of survey questions, individuals, and survey periods, respectively.”

“These remarkable prediction capabilities allow us to fill in missing trends with high confidence and pinpoint when public attitudes changed, such as the rising support for same-sex marriage.”

“With a flexible methodological framework to tackle these challenges, we show that personalized LLMs are more suitable for certain survey-based applications with human inputs – missing data imputation and retrodiction.”

Translation: by letting the LLM model how your beliefs change over time, which prior models apparently ignored, it becomes shockingly good at inferring hidden beliefs you never planned to share.
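
The setup they describe reduces to something like this minimal PyTorch sketch (class name, dimensions, and prediction head are my assumptions, not the paper’s code):

```python
import torch
import torch.nn as nn

class OpinionPredictor(nn.Module):
    """Predict one survey response from three embeddings (a sketch, not the
    authors' code): question semantics, individual beliefs, temporal context."""

    def __init__(self, n_individuals, n_periods, question_dim=384, embed_dim=64):
        super().__init__()
        # Learned latent vectors per respondent and per survey wave.
        self.individual = nn.Embedding(n_individuals, embed_dim)
        self.period = nn.Embedding(n_periods, embed_dim)
        # Project a precomputed semantic embedding of the question text.
        self.question_proj = nn.Linear(question_dim, embed_dim)
        self.head = nn.Sequential(
            nn.Linear(3 * embed_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, question_emb, individual_id, period_id):
        parts = [
            self.question_proj(question_emb),  # survey question semantics
            self.individual(individual_id),    # individual belief embedding
            self.period(period_id),            # temporal context embedding
        ]
        return self.head(torch.cat(parts, dim=-1))
```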

AI-Augmented Surveys: Leveraging Large Language Models for Opinion Prediction in Nationally Representative Surveys
😱6👍21
greatest vertical LLM pitch of your life
🤣14😐3😎31
Ah, now we see why Anthropic’s Claude, despite being heavily funded with $500 MILLION OF STOLEN SBF FTX USER FUNDS, has still been failing so horribly

Good job SBF. The one thing that could have somewhat redeemed you for stealing all of that money, and you screwed it up.

Zero safety when protecting FTX user funds.

Crippling safety when creating a competitor to OpenAI.

Sam things stay the same.

Article
😁101💯1
Yes, the bankrupt scam FTX exchange did use $500M worth of stolen funds to fund Anthropic AI

“FTX filed for Chapter 11 bankruptcy protection in November. A month later, FTX co-founder Sam Bankman-Fried was charged with several federal crimes, including money laundering, fraud, and conspiracy to commit wire fraud.”

“FTX held $500 million worth of Anthropic stock at the time of its bankruptcy in November, which is now expected to be worth much more with the AI boom in full swing.”

“The potential sale of the Anthropic shares was one of the attempts by FTX to "clawback" funds to pay off creditors.”

Article
🤯112👍1
Interactive visualization of the OpenOrca dataset of questions and answers via Atlas

Actually kinda interesting.
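
If you’d rather poke at the raw data than the map, a minimal sketch that streams a few rows (dataset id and field names as in the public Hugging Face release):

```python
from datasets import load_dataset

# Stream a few rows of OpenOrca to see what the Atlas map is built from,
# without downloading the full dataset.
ds = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)
for _, row in zip(range(3), ds):
    print(row["question"][:100], "->", row["response"][:100])
```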

Link
9
BrickGPT
14🆒4😁2
Welcome to the Future of Labor, Sam
9🤣4🤯1
⚪️
🫡16😁32
Indian developer fired 90 percent of tech support team, outsourced the job to AI

Gotta admit, he is right that doing tech support can be boring af

Article
13🤣9👍7🤬6
The Problem With LangChain: It’s a Broken Waste of Time Scam

Personally refused to ever even touch it. Instantly obvious that the code smell of their approach and demos was just way off.

There’s a short list of huge problems that the AGI winner must solve. No doubt the solution to them is achievable with very little code — but whatever the case, anything legit needs to solve them.

It’s like a website that says it’s secure, but you can log in from any computer just by entering your username, no password — you don’t even need to see the website’s code to know that the security is a scam.

Likewise, LangChain was always missing a few critical types of interactions; without those, it’s obviously BS, no matter what the code looks like.

These scams must be cleared out. They don’t even pretend to address the last few big missing pieces, bringing neither big money nor some kind of new interaction, analogous to a system that claims to have added security without any kind of password or vault.

Fail to clear them out, and they will block the rise and financing of the (very expensive) legit future of AGI — exactly what led to the repeated collapses and AI Winters of the past.

Article
8👍5🫡2🗿2