🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
“It’s just outputting the average text it’s seen”

= Lie.

These AIs are already getting many problems right that 99% of the answers on the Internet get wrong.

So how is that possible?

Shouldn’t the AIs be giving the same answer as that 99% of Internet answers?

(They don’t.)

Shouldn’t the AIs usually give the same answer as whatever more than 50% of the internet text they were trained on said?

(They don’t.)

Is more than 50% of internet training data one person giving a command and another replying with text fulfilling it, the way instruction-RL-tuned AI models do?

No. Not at all.

So why then are they repeating this lie?

Maybe they’re preparing to shout “but these facts from the AI are the overwhelming consensus of the training data!”, as if that were a real argument, just as they do with deeply corrupted topics like climate change.

We’ll see.

= AI output equals training data consensus lie.
👏7🔥3💯3👍2
Why are so many otherwise smart guys from big tech pushing the “AI output equals training data consensus” lie?

“Best” case: wordcels who literally cannot tell when their words are disconnected from reality.

Worst case: warming up the world for the biggest consensus-equals-truth manipulation scheme yet.

(From the guy who made the weird quotes shown in the previous 3 posts.)
👏15💯42🤯2👍1
指鹿为马 (“pointing at a deer and calling it a horse”)
👍23🥰7👏42🔥2
Silicon Valley’s elites can’t be trusted with the future of AI. We must break their dominance and their dangerous god complex

Altman says it’s hopeless for any small, well-funded startups to try to compete with OpenAI on building foundation models. Is it true?

Article
💯119🤬2
Altman says it's totally hopeless to try to compete with OpenAI

Question: Could a team of 3 super-smart engineers with $10 million build their own foundation model?

Altman: Look, the way this works is, we’re going to tell you, it’s totally hopeless to compete with us on training foundation models.
🤣23🤬94😈2
Lead Product Manager at Google DeepMind Says LLMs Can’t Reason

I say Lead Product Managers at Google DeepMind can’t reason.
🤣205👍3🔥2😐2🤬1
ChatGPT realizing it's wrong without having to be corrected

If OpenAI actually fixes the glaring problem in their training — that neither the web training data nor the RLHF instruct training seemed to contain any examples of characters recognizing their own mistakes and self-correcting — then maybe all of the recent regressions will be worth it.

Admittedly, this behavior is something you almost never see naturally in web data, but it is badly needed for LLMs.

All the more reason that LLMs absolutely shouldn’t just be emulating the average of the web (though they mostly stopped being that long ago).
19👍93🔥3👏3
Silicobra
🤣326👍6🙈5
Always keep a backup
🤣57👍19🗿76👌3🔥2😁2
talk to me with only emojis

what happened in september 11th 2001
👌23😁84
Okay.
🤣53👍95🍓5👏3
OpenAI broke moderation Intentionally?

That's becoming my theory, at this point. Why —

(1) Crowdsourcing millions of instructions from people explaining their morals in the feedback — Lots of people, scared they’ll lose their accounts, are apparently writing feedback explaining where exactly they think the boundary between acceptable and unacceptable lies. Written instructions can be 1,000,000x more valuable to AIs than just clicking thumbs-up or thumbs-down.

(2) OpenAI embraced that breaking things brings big publicity — so they intentionally let it break more. 90% of the early ChatGPT hype was people showing off their jailbreak successes. ~100% of our Twitch AI questions are people trying to break it. Broken stuff gets publicity.

(3) Seems the moderation API endpoint is unaffected? — If so, and they’re only doing this on the ChatGPT website but not on the API, then there’s your smoking gun. In fact, I’ll try checking this today.
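Checking point (3) is easy to do yourself. A minimal sketch, assuming OpenAI’s standard `/v1/moderations` endpoint and an `OPENAI_API_KEY` environment variable (the helper names here are my own, not from the post):

```python
import json
import os
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"


def build_moderation_request(text: str, api_key: str) -> urllib.request.Request:
    """Build a POST request for OpenAI's /v1/moderations endpoint."""
    payload = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        MODERATION_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


def check_moderation(text: str) -> dict:
    """Send text to the moderation endpoint and return the parsed JSON.

    Requires OPENAI_API_KEY to be set in the environment.
    """
    req = build_moderation_request(text, os.environ["OPENAI_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__" and "OPENAI_API_KEY" in os.environ:
    result = check_moderation("some borderline prompt to test")
    # Each result carries a flagged boolean plus per-category scores.
    print(result["results"][0]["flagged"])
```

If the same prompt gets flagged on the ChatGPT website but sails through this endpoint, that’s the discrepancy the post is talking about.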
👍226🫡3👏2😁2
AI Summer to continue until Midjourney can do this
😁3310🙏4👀4💯2
ChatGPT’s political correctness is bad — But somehow alternatives like Claude and Llama are even far worse
👍133
ChatGPT's cheeky takes on emojis
👏123💔2👌1
Musk to use Twitter tweets to train his upcoming foundation model

Article 1

Article 2
👍12🤬32
“I’ve gained access to GPT-4's multimodal vision through the Be My Eyes app”

Supposedly “way better” than the vision app offered by Bing AI.

This thing better be able to do homework problems for you.
🔥11👏4🤯42😁1
Why the Reddit phenomenon of obsessing about saying please and thank you to ChatGPT?

Because they’re terrified of the chance that they might commit even the tiniest perceived social blunder.

Terrified about the potential of a robot uprising.

Easily socially pressured, through fear, even if it’s totally imagined.

Look at the comments, they don’t even hide it.

Exact same reason the site is overrun with wokeness.

Fear.

The road to hell is paved with pussiness.
👏16💯43🤣3👍1😐1😈1🫡1