🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
GPT-R
👍15🤣9🔥32
“all important revolutions come out of cult like fervor. I encourage more people to live in bubbles”

“It's not fair to call OpenAI a cult, but when I asked several of the company's top brass if someone could comfortably work there if they didn't believe AGI was truly coming (and that its arrival would mark one of the greatest moments in human history), most executives didn't think so.”

“OpenAI has self-selected to include only the faithful.”

— Ok, on one hand, many of the most successful startups and projects have been filled with people who didn't believe in the big vision at all.

But on the other hand, if you don't believe that machines could achieve, or already have achieved, general intelligence beyond what most humans can, you're either lying or retarded.

Don’t believe it?

Have a look at the comments replying to this post: total failures by most of the humans on a basic task that modern AIs have already mastered.
💊10💯42🤣2👍1
Wired article on OpenAI, first strongly confirming, then later contradicting, the #1 Bitter Lesson

I.e. that the prime driver of this field is NOT so much new model architectures at all,

It’s money.

Dumping far, far more money into model training.

Bitter lesson.

A bitter lesson so hard to learn that Wired can't even make it through one article before forgetting it halfway through.

Wired Article
👏5💯5👍21
Oh, that’s why those working at OpenAI are going to keep claiming their AI has not yet achieved AGI, forever, no matter what

Wired Article
🌭6💯6👍32💊1
OpenAI’s biggest challenge? Figuring out what AI will even be used for, and how.

Not just AI assistants.

Clear, prescient visions of the future are exceedingly hard to come by in this area.

Wired Article
🔥10👍2👏21👌1
Bro what, are you retarded

Weird hallmark of morons trying to sound smart: scolding people for using old, widely accepted general terms in a general way, and insisting that those general terms can now only be used to mean some specific, narrower thing.

Whether it's crypto dudes saying that you're not allowed to use “token” as a general term for all types of cryptocurrencies,

Or AI dudes trying to tell you that you’re not allowed to use “AI” for anything but “complex data analysis” (wtf?)

Scolding people for using an obviously general term generally.

There are circumstances where more specific words are necessary, but that's not the same as a blanket ban on using any general words. BSers don't get the difference.

Hallmark of bullshitters.
👍11💯43🤓1
Can machines think?…Can people?

Clearly not, in many cases.

Perhaps we should propose the dual of the Turing test

= A test to demonstrate cases where humans are clearly, lazily skipping thinking, not thinking at all, despite claiming they are.

NPC test LFG.
23👏5👍3💊2💯1
Minsky’s AI definition = the Bitter Lesson, i.e. AI = Money

Anyone ever notice that Marvin Minsky's 1958 definition of AI, “the ability to solve hard problems,” and the top “Bitter Lesson” end up being equivalent?

(At least when applying the most appropriate modern math definitions of the terms.)

As far as I can see, no one ever has.

Ok here you go,

When Minsky says “hard problems”, he means in the mathematical, P!=NP kind of sense.

But here it's more appropriate, rather than using the usual “asymptotic hardness” sense, to use the “concrete hardness” mathematical sense, which fits problems in reality better: the hardness of a problem in some particular compute model, or set of compute models.

Well, which compute models are best to choose here? In practice, when talking about concrete hardness, mathematicians aim to choose a compute model whose notion of compute cost aligns with the financial cost of doing that compute, grounding things in what people actually think of as “hard”: roughly, “financial hardness.”

= i.e. Minsky's definition of AI ends up being that AI must be able to solve problems where even the cheapest possible solution is still enormously expensive.

And the 1st Bitter Lesson is that there is no shortcut to needing to spend enormous amounts of money on training resources in order to really advance AI.

= Minsky’s definition of AI and the 1st Bitter Lesson end up being equivalent, from opposite directions.

I.e. AI = Spending Big Money, by Definition

QED
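A loose formalization of the equivalence claim above (the symbols, the cost function, and the threshold C are my own shorthand, not Minsky's or Sutton's):

```latex
% Concrete hardness of a problem P in compute model M:
% the minimum (financially grounded) cost of any algorithm A that solves P.
H_M(P) = \min_{A \,\text{solves}\, P} \mathrm{cost}_M(A)

% Minsky's definition, read in this sense: a system S counts as AI
% iff it solves some problem whose concrete hardness is enormous.
\mathrm{AI}(S) \iff \exists P :\; S \text{ solves } P \;\wedge\; H_M(P) \ge C

% Bitter Lesson, read in this sense: there is no shortcut that pushes
% H_M(P) below C, so advancing AI means paying roughly H_M(P).
```

Read left to right, the second line is Minsky's definition; read right to left, together with the no-shortcut premise, it is the Bitter Lesson: solving such a P requires spending at least C.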

The Bitter Lesson, 2019

Concrete Hardness

Minsky’s 1958 definition of AI
👍5🔥3🐳21👏1🤯1
Interesting proof claim you’ve got there bro

A simulation of a hurricane may not be a real hurricane, but a simulation of a chess game is a real chess game.

Link
🤯6🔥32👏2👍1🐳1
Hi
🤬111
Thanks ChatGPT

It’s a snek
🤣9🌭73👍1
“ChatGPT Addiction”
😐108👍3😨1