🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up to date and learn more about its capabilities.
Massive Resources Are All You Need

Both for animals and machines.

Not about more complicated architecture.

Almost entirely about just dumping vastly more resources in, to let it do far more compute.

BuT tHaT’s NoT SuStAiNaBle!!

Really bro? Then you go ahead and be the first to constrict the blood flow to your obscenely resource-hungry brain. Be the first to jump off of this “unsustainable” curve that your brain is sitting right at the top of.

Blood-Thirsty Brains Key To Evolution Of Human Intelligence

Bitter Lesson of AI Intelligence
Why the reverse Flynn Effect — IQ increasing for decades, then suddenly reversing ever since the ’90s?

Is it because we’re too addicted to tech which makes us lazy?

Immigration of dummies?

Climate change?

No.

Obesity, overwhelmingly.

Massively increased obesity in many countries → massively reduced cerebral blood flow → an extremely strong negative effect on general intelligence → massively decreased average intelligence.

Reverse Flynn Effect solved.

Brain needs power, obesity restricts it.

But hey, with human intelligence dropping so fast, this means we technically get to reach AGI that much sooner!

Who knew that the “singularity” was actually a reference to the size of yo momma on the day that AI finally surpasses mankind.

Technological Singularity

Yo Momma Singularity
Did GPT-4 just teach itself text recognition?
Will Google screw up Gemini AI as much as they screwed up with Google Analytics 4?

…And nearly everything else they’ve tried to create in-house over the past 15 years instead of picking up through external startup acquisitions
Will Google's Gemini beat GPT-4 in terms of capabilities on release?

Manifold Markets Page
LLMs to enable self-driving cars to consciously think and plan via internal monologue

“We use natural language to enhance the learning and explainability of our foundation driving models. In this blog, we introduce LINGO-1, an open-loop driving commentator that combines vision, language and action to enhance how we interpret, explain and train our foundation driving models.”

“We can also use language to probe models with questions about the driving scene to more intuitively understand what it comprehends. This capability can provide insights that could help us improve our driving models’ reasoning and decision-making capabilities. Equally exciting, VLAMs open up the possibility of interacting with driving models through dialogue, where users can ask autonomous vehicles what they are doing and why. This could significantly impact the public’s perception of this technology, building confidence and trust in its capabilities.”

“In addition to having a foundation driving model with broad capabilities, it is also eminently desirable for it to efficiently learn new tasks and quickly adapt to new domains and scenarios where we have small training samples. Here is where natural language could add value in supporting faster learning. For instance, we can imagine a scenario where a corrective driving action is accompanied by a natural language description of incorrect and correct behaviour in this situation. This extra supervision can enhance few-shot adaptations of the foundation model. With these ideas in mind, our Science team is exploring using natural language to build foundation models for end-to-end autonomous driving.”
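
A rough sketch of what that “corrective action + language description” supervision could look like as data — the field names and structure below are illustrative guesses, not Wayve’s actual format or API:

```python
# Hypothetical sketch of the "corrective action + natural language description"
# idea from the quote above. Names and structure are illustrative assumptions,
# not Wayve's actual data format.
from dataclasses import dataclass, field

@dataclass
class CorrectiveExample:
    camera_frames: list = field(default_factory=list)      # raw sensor context
    incorrect_action: dict = field(default_factory=dict)   # what the model did
    corrective_action: dict = field(default_factory=dict)  # what it should have done
    commentary: str = ""                                    # the extra language supervision

few_shot_adaptation_set = [
    CorrectiveExample(
        incorrect_action={"steer": 0.0, "brake": 0.0},
        corrective_action={"steer": 0.0, "brake": 0.6},
        commentary=("Incorrect: holding speed toward a pedestrian waiting at the "
                    "crossing. Correct: brake smoothly and yield."),
    ),
]

# A vision-language-action model could then be conditioned on a handful of such
# (frames, action, commentary) triples to adapt to a new scenario, with the
# commentary providing supervision beyond the raw corrective action alone.
```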

LINGO-1: Exploring Natural Language for Autonomous Driving
OpenAI reminding everyone to switch from the less-censored text-davinci-003 to the heavily-censored, crippled gpt-3.5-turbo-instruct

Starting January 4, 2024, text-davinci-003 will no longer be available.

Announcement
18 years ago we taught a petri dish of neurons to fly an F-22

Scientists took embryonic cortical hemispheres from rats, dissolved the connective tissue, then deposited a solution of neurons and glial cells on an array of micro-electrodes, and connected it to a simulation.
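
Very roughly, the closed loop looks like this: record activity from the electrode array, decode it into control inputs, update the simulated aircraft, and stimulate the culture with the resulting error. Everything in the sketch below is a hypothetical placeholder, not the authors’ code:

```python
# Loose sketch of the closed-loop "dish flies a simulator" setup described
# above. Every interface here is a hypothetical placeholder.
import random

def read_firing_rates(n_electrodes=60):
    # stand-in for sampling activity from the multi-electrode array
    return [random.random() for _ in range(n_electrodes)]

def stimulate(pitch_error, roll_error):
    # stand-in for delivering an error-proportional stimulation pattern back to the dish
    pass

pitch, roll = 5.0, -3.0                  # simulated aircraft attitude (degrees)
for step in range(1000):
    rates = read_firing_rates()
    half = len(rates) // 2
    # decode: one electrode population per control axis
    pitch_cmd = sum(rates[:half]) / half - 0.5
    roll_cmd = sum(rates[half:]) / half - 0.5
    pitch += pitch_cmd                   # toy plant update
    roll += roll_cmd
    stimulate(pitch - 0.0, roll - 0.0)   # feed back the error (target = level flight)
```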

Paper
Auto-Regressive Next-Token Predictors are Universal Learners

Summary: “This new paper trains extremely simple linear(!) and shallow MLP networks to get competitive ppl [perplexity] on language modeling and 4 digit multiplication tasks! The claim seems to be that much of the magic is in auto-regressive objective, not the architecture.”

GREAT: A measure of task difficulty based on minimum-needed pondering length: “We introduce a new complexity measure -- length complexity -- which measures the number of intermediate tokens in a CoT sequence required to approximate some target function, and analyze the interplay between length complexity and other notions of complexity.”

“Finally, we show experimentally that simple next-token predictors, such as linear networks and shallow Multi-Layer Perceptrons (MLPs), display non-trivial performance on text generation and arithmetic tasks. Our results demonstrate that the power of language models can be attributed, to a great extent, to the auto-regressive next-token training scheme, and not necessarily to a particular choice of architecture.”

= Obvious to anyone with a brain for a long time, as it’s directly analogous to the tape-usage vs simplicity tradeoff well known to exist in simple Turing machines since the 1930s.

Unfortunately many involved in AI don’t seem to have a brain, so great that someone finally did a paper on this. (Though FWIW, do recall a paper from last year that also exhaustively demonstrated this, but was rejected from conferences for dumb reasons.)

= If you’re still telling the AI to respond as briefly as possible, when asking hard questions, without giving it any space to think before answering — then you’re a confirmed moron.

Let the AIs think!

BTW: “much of the magic is in auto-regressive objective, not the architecture” = “more resources is all you need” = Bitter Lesson confirmed yet again.
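
Rough sketch of the idea, for the curious (illustrative only, not the paper’s code — the vocab, the “|” step separator, the data format, and all hyperparameters are made up): a one-hidden-layer MLP trained purely as an auto-regressive next-token predictor on chain-of-thought-style multiplication strings.

```python
# Minimal sketch: a shallow MLP trained purely as an auto-regressive
# next-token predictor on CoT-style multiplication strings.
import random
import torch
import torch.nn as nn

VOCAB = list("0123456789*+=| ")             # '|' separates CoT steps, ' ' pads
stoi = {c: i for i, c in enumerate(VOCAB)}
CTX = 32                                    # fixed-length context window

def make_example():
    # e.g. "27*46=27*40+27*6|1080+162|1242" — the intermediate tokens give the
    # model room to "think" before the final answer (the length-complexity idea).
    a, b = random.randint(10, 99), random.randint(10, 99)
    tens, ones = (b // 10) * 10, b % 10
    return f"{a}*{b}={a}*{tens}+{a}*{ones}|{a*tens}+{a*ones}|{a*b}"

class ShallowMLP(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CTX * len(VOCAB), hidden),
            nn.ReLU(),
            nn.Linear(hidden, len(VOCAB)),
        )

    def forward(self, x):                   # x: (batch, CTX) token ids
        onehot = nn.functional.one_hot(x, len(VOCAB)).float().flatten(1)
        return self.net(onehot)

def batch(n=64):
    xs, ys = [], []
    for _ in range(n):
        s = make_example()
        t = random.randint(1, len(s) - 1)           # position to predict
        ctx = s[max(0, t - CTX):t].rjust(CTX)       # left-pad with spaces
        xs.append([stoi[c] for c in ctx])
        ys.append(stoi[s[t]])
    return torch.tensor(xs), torch.tensor(ys)

model = ShallowMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x, y = batch()
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        print(step, round(loss.item(), 3))
```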

Arxiv Link
Visual example of thinking-length vs accuracy tradeoff

See how their very simple MLP model beats both GPT-3.5 and GPT-4 on a simple multiplication question (1394x8618=),

-- in large part because their simple MLP model spends far longer thinking through the answer in its output. See above how much longer the MLP’s output is.

This also somewhat explains the drop in quality of OpenAI's models as they’ve been “upgraded” to give terser outputs.

Many problems are simply too hard to instantly know the answer to, but can be trivially figured out given enough time to think.

Let the AIs think!
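
For intuition, here’s the same multiplication from the post broken into the kind of intermediate steps that extra output tokens buy the model (plain illustrative Python, not the paper’s method):

```python
# Illustrative only: the multiplication from the post, decomposed into the
# intermediate steps that a longer output gives the model room to emit.
a, b = 1394, 8618

print(a * b)  # 12013492 — the "instant" answer a terse model must emit in one shot

# Step-by-step expansion: one partial product per digit of b, then the sum.
partials = [a * int(d) * 10**i for i, d in enumerate(reversed(str(b)))]
for p in partials:
    print(p)               # 11152, 13940, 836400, 11152000
print(sum(partials))       # 12013492 — same answer, reached in easy steps
```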

Arxiv Link
Bard knows

8====D~ ({})
JapanGPT: An LLM to reflect Japanese values

“Certainly Japanese LLMs are getting much better, but they are far behind GPT-4,” says Passaglia, a physicist at the University of Tokyo who studies Japanese language models. But there is no reason in principle, he says, that a Japanese LLM couldn’t equal or surpass GPT-4 in future. “This is not technically insurmountable, but just a question of resources.”

Article
Using AI to produce text or images emits 3 to 4 orders of magnitude *less* CO2 than humans doing it manually or with the help of a computer.

You are the carbon they want to erase.
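
Back-of-envelope version of the comparison, with round illustrative numbers (the exact figures below are assumptions, not the paper’s):

```python
# Back-of-envelope comparison with round illustrative numbers; the exact
# figures here are assumptions, not taken from the paper.
ai_grams_per_page = 2.0         # rough CO2e to generate one page of text with an LLM
human_hours_per_page = 0.8      # rough time for a person to write one page
human_grams_per_hour = 1500.0   # rough prorated per-capita CO2e of a US resident

human_grams = human_hours_per_page * human_grams_per_hour   # ≈ 1200 g per page
ratio = human_grams / ai_grams_per_page                     # ≈ 600x, i.e. roughly 3 orders of magnitude
print(f"human ≈ {human_grams:.0f} g, AI ≈ {ai_grams_per_page} g, ratio ≈ {ratio:.0f}x")
```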

Paper