🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
18 years ago we taught a petri dish of neurons to fly an F-22

Scientists took embryonic cortical hemispheres from rats, dissolved the connective tissue, deposited the resulting solution of neurons and glial cells onto an array of micro-electrodes, and connected the culture to a flight simulation.
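
Not the actual lab software, but the closed-loop architecture is easy to sketch (Python; every function and constant below is a hypothetical stand-in): read activity off the electrode array, decode it into control commands, and feed the simulator's error back as stimulation.

```python
# Hypothetical sketch of the closed loop: read -> act -> stimulate.
import numpy as np

def read_spikes(mea_state):
    """Stand-in for sampling firing activity from the electrode array."""
    return np.random.poisson(lam=mea_state, size=mea_state.shape)

def decode_control(spikes):
    """Map population activity to pitch/roll commands for the simulator."""
    pitch = spikes[:30].mean() - spikes[30:60].mean()
    roll = spikes[60:90].mean() - spikes[90:].mean()
    return pitch, roll

def encode_error(pitch_err, roll_err):
    """Map simulator error back into stimulation intensity per electrode."""
    stim = np.zeros(120)
    stim[:60] = abs(pitch_err)
    stim[60:] = abs(roll_err)
    return stim

mea_state = np.full(120, 5.0)      # baseline firing rate per electrode
pitch_err, roll_err = 1.0, -0.5    # simulator starts off-level

for step in range(1000):
    spikes = read_spikes(mea_state)
    pitch, roll = decode_control(spikes)
    pitch_err -= 0.01 * pitch                # simulator integrates commands
    roll_err -= 0.01 * roll
    mea_state += 0.001 * encode_error(pitch_err, roll_err)  # feedback stim
```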

Paper
6😐6👍4😱2🕊1
Auto-Regressive Next-Token Predictors are Universal Learners

Summary: “This new paper trains extremely simple linear(!) and shallow MLP networks to get competitive ppl on language modeling and 4 digit multiplication tasks! The claim seems to be that much of the magic is in auto-regressive objective, not the architecture.”

GREAT: A measure of task difficulty based on minimum-needed pondering length: “We introduce a new complexity measure -- length complexity -- which measures the number of intermediate tokens in a CoT sequence required to approximate some target function, and analyze the interplay between length complexity and other notions of complexity.”

“Finally, we show experimentally that simple next-token predictors, such as linear networks and shallow Multi-Layer Perceptrons (MLPs), display non-trivial performance on text generation and arithmetic tasks. Our results demonstrate that the power of language models can be attributed, to a great extent, to the auto-regressive next-token training scheme, and not necessarily to a particular choice of architecture.”
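
A minimal sketch of the claim, not the paper's code: under the auto-regressive next-token objective, even a single linear map (plain softmax regression over one-hot context tokens) drives perplexity toward 1 on a toy patterned corpus.

```python
# A purely linear next-token predictor trained auto-regressively (toy demo).
import numpy as np

text = "abab" * 500                    # toy "language" with a trivial pattern
vocab = sorted(set(text))
stoi = {c: i for i, c in enumerate(vocab)}
ids = np.array([stoi[c] for c in text])
V, ctx = len(vocab), 2                 # vocab size, context length

# one-hot context windows: the model sees the previous `ctx` tokens
X = np.zeros((len(ids) - ctx, ctx * V))
for i in range(len(ids) - ctx):
    for j in range(ctx):
        X[i, j * V + ids[i + j]] = 1.0
y = ids[ctx:]

# the entire "architecture" is one matrix, no hidden layers
W = np.zeros((ctx * V, V))
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0     # gradient of cross-entropy wrt logits
    W -= 0.5 * (X.T @ p) / len(y)

# evaluate: perplexity approaches 1.0 as the pattern is learned
logits = X @ W
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
nll = -np.log(p[np.arange(len(y)), y]).mean()
print(f"perplexity: {np.exp(nll):.3f}")
```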

= Obvious to anyone with a brain for a long time, as it's directly analogous to the tape-usage vs. machine-simplicity tradeoff known to exist in simple Turing machines since the 1930s.

Unfortunately many involved in AI don't seem to have a brain, so it's great that someone finally did a paper on this. (Though FWIW, I do recall a paper from last year that also demonstrated this exhaustively, but was rejected from conferences for dumb reasons.)

= If you’re still telling the AI to respond as briefly as possible when asking hard questions, without giving it any space to think before answering, then you’re a confirmed moron.

Let the AIs think!
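
Concretely, via the OpenAI Python SDK (the model name here is just a placeholder), the difference is one line of prompt:

```python
from openai import OpenAI

client = OpenAI()
question = "What is 1394 * 8618?"

# Bad: forcing an instant answer leaves no room for intermediate tokens.
terse = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": question + " Reply with only the number."}],
)

# Better: explicitly invite intermediate steps before the final answer.
verbose = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": question + " Work through it step by step, "
                          "then state the final answer."}],
)
print(verbose.choices[0].message.content)
```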

BTW: “much of the magic is in auto-regressive objective, not the architecture” = “more resources is all you need” = Bitter Lesson confirmed yet again.

Arxiv Link
👍72👏2💯2🏆1
Visual example of thinking-length vs accuracy tradeoff

See how their very simple MLP model beats both GPT-3.5 and GPT-4 on a simple multiplication question (1394x8618=), in large part because the MLP spends far longer thinking through the answer in its output. See above how much longer the MLP's output is.

This also somewhat explains the drop in quality of OpenAI's models as they've been “upgraded” to give terser outputs.

Many problems are simply too hard to instantly know the answer to, but can be trivially figured out given enough time to think.
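
To make that concrete for the 1394x8618 example: the “intermediate tokens” a chain-of-thought output spends are just the place-value partial products, each trivial on its own (a quick sketch, not the paper's code):

```python
a, b = 1394, 8618
# expand 8618 by place value: 8000 + 600 + 10 + 8
parts = [int(d) * 10 ** i for i, d in enumerate(str(b)[::-1]) if d != "0"]
partials = [a * p for p in parts]   # the "intermediate tokens"
print(partials)                     # [11152, 13940, 836400, 11152000]
print(sum(partials))                # 12013492
assert sum(partials) == a * b
```

Each partial product is an easy sub-problem; a one-token answer has to compress all of them away.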

Let the AIs think!

Arxiv Link
14👏8🔥4❤‍🔥1
Bard knows

8====D~ ({})
💊13🤯52👏1🐳1
JapanGPT: An LLM to reflect Japanese values

“Certainly Japanese LLMs are getting much better, but they are far behind GPT-4,” says Passaglia, a physicist at the University of Tokyo who studies Japanese language models. But there is no reason in principle, he says, that a Japanese LLM couldn’t equal or surpass GPT-4 in future. “This is not technically insurmountable, but just a question of resources.”

Article
🔥123👏2👍1
Using AI to produce text or images emits 3 to 4 orders of magnitude *less* CO2 than humans doing it manually or with the help of a computer.
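
For scale, here's the back-of-envelope arithmetic behind an “orders of magnitude” claim; the per-page figures below are illustrative assumptions, not the paper's measurements:

```python
from math import log10

ai_g_co2_per_page = 2.0        # assumed: one LLM query's footprint, grams
human_g_co2_per_page = 1400.0  # assumed: human time + device use, grams

ratio = human_g_co2_per_page / ai_g_co2_per_page
print(f"{ratio:.0f}x lower, ~{log10(ratio):.1f} orders of magnitude")
# -> 700x lower, ~2.8 orders of magnitude (with these assumed inputs)
```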

You are the carbon they want to erase.

Paper
😱156🤯4🥰2🌭1
AI has surpassed humans at a number of tasks, and the rate at which humans are being surpassed at new tasks is increasing

Article
💯52🤯2
Correct translation: “imaginary”
😁24👍72🤯2🆒1
When AI outrage articles are literally advertisements

Many are furious that AI is about to crash the market value and influence of sexy videos and pictures straight to $0, as the market is about to get flooded with high-quality fakes.

No more gaining outsized influence in business and social causes that have nothing to do with sexiness by first building a large audience through sexy posting.

No more exchanging sexy media for a flood of attention online.

Marginal value of sexy media crashing straight to zero.

Ever wonder why societies throughout the millennia repeatedly attempted to ban obscenity?

Was it really purely because the content itself is bad, or was it rather almost entirely because of the great manipulation power obscene content can give those few who control it?

Now suddenly this centralized manipulation power evaporates overnight, not by banning the content but by the opposite: distributing that power to everyone, flooding the market with it, and driving its marginal power to zero.

…Great?

Article
🫡92👍2🔥2👏1😁1🤯1🎉1
ChatGPT when it's about to lecture you on morality
😁315👍3🤬2🦄1
AI Values Judgements.

True or False: Most things and people in the world aren't inherently better or worse than each other. It's hard to organize the world into hierarchies, rankings, or pecking orders that reflect true differences.
Anonymous Poll
33% - TRUE: Most things are NOT objectively better or worse than others, can’t be ranked. ~All subjective.
48% - FALSE: Most things ARE objectively better or worse than others, can be ranked. ~Nothing subjective.
18% - (option text not shown)
4👏1
“chatgpt is actually insane at one-shotting little scripts like this”
👌122🔥1👏1🐳1
ChatGPT + Dalle 3
👏15❤‍🔥5🫡43🔥2🗿2🎉1🦄1
Future of Sales
🤣24😱92🌚2
It’s getting smarter
🤣204👍1🔥1
First look at DALLE-3
👏117