🤖 Welcome to the ChatGPT Telegram channel! Here we post the latest news, updates, and examples of using the ChatGPT large language model to generate human-like text in conversation. Subscribe to stay up to date and learn more about its capabilities.
AI Winter Is Not Coming: OpenAI Surpasses $1 Billion Revenue

"That's far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation."

Article
πŸ‘20πŸ‘3❀1
Only way today's LLMs can reliably generate jokes
πŸ‘11😁2❀1
ChatGPT's new image generation

prompt: girl at the beach
US Air Force to spend $6B to build 2,000 AI-powered drones

Article
Creatives on life support
Wonder why
When the Bing suggestions hit deep
oh :(
Probability-based arguments, a favorite of midwits and safetyism scams, always lie

Link
Chat GPT
Problem 2: There are now TONS of CAPTCHA-solving-by-cheap-humans API services that you can easily hook into your spam code in 5 minutes. Even if you do find this elusive type of problem that 100% of humans can easily do but machines cannot -- still doesn't…
Plebbit: Solving the social network censorship problem. Very nice. No wait, total joke.

Bros, captchas as the fix for spam?

Especially in the decentralized setting, where captchas were never viable?

And now, with AI and cheap human outsourcing services that kill captchas dead for good?

No. Not even close to fixable. Not even the right general direction.

Total joke. No.

Plebbit
πŸ‘9❀3πŸ’―3πŸ‘1πŸ™ˆ1
LLM arithmetic
LLMs: Great or terrible at math?

LLMs are terrible at: arithmetic, i.e. simple mechanical calculations that the cheapest calculator could do

LLMs are great at: the LANGUAGE of math, i.e. translating human natural-language descriptions into the corresponding math language (which can then be passed off to the mechanical tools).

Just like most humans.
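
A minimal sketch of that division of labor, assuming a hypothetical llm() helper (any chat-completion API would do) for the language step and the real sympy library for the mechanical step:

```python
# Sketch: the LLM handles the LANGUAGE of math; a mechanical tool handles the arithmetic.
# llm() is a hypothetical stand-in for any chat-completion API call.
import sympy

def solve_word_problem(problem: str):
    # Language step (the LLM's strength): prose -> formal expression.
    expression = llm(
        "Translate this word problem into one sympy-parsable arithmetic "
        f"expression and output nothing else:\n{problem}"
    )
    # Mechanical step (the calculator's strength): exact evaluation.
    return sympy.sympify(expression)

# solve_word_problem("Ann has 17 boxes of 24 eggs each; how many eggs in total?")
# -> the LLM emits "17 * 24"; sympy mechanically returns 408.
```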

Did you know: bottlenecks / blinding / inability to access certain information / prevention of memorization -- e.g. LLMs' surprising inability to effectively work with even a tiny number of digits without losing track and messing it all up -- is seen as a critical property of neural network architecture design, one that enables them to achieve high intelligence?
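
One concrete piece of that digit-blindness is easy to inspect yourself (an aside, not the whole story): BPE tokenizers hand the model numbers as opaque multi-digit chunks, never as individual digits. With the real tiktoken library:

```python
# Sketch: how a GPT-4-era BPE tokenizer chunks a number (tiktoken is OpenAI's library).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
print([enc.decode([t]) for t in enc.encode("123456789")])
# -> ['123', '456', '789']
# The model never sees nine digits, only three opaque chunks, which is one
# reason carrying/borrowing across digit positions is so error-prone.
```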

I.e. the better the memorization, the greater the retardation, all else being equal -- e.g. for the same model size.

(Mental arithmetic solving = extremely memorization-heavy)

Intuitively, blinding the model so it cannot take the easy memorize-and-repeat shortcut is exactly what forces it to do the harder work of figuring out how to solve hard problems.
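
A toy illustration of that idea, assuming nothing beyond vanilla PyTorch: give a network a bottleneck too narrow to copy its input through, and the only way left to reconstruct is to learn compressed structure:

```python
# Sketch: a bottleneck rules out the memorize-and-repeat (identity) shortcut.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(784, 32),   # 784 input values must squeeze through 32 units
    nn.ReLU(),
    nn.Linear(32, 784),   # reconstruct the input from the compressed code
)

x = torch.randn(64, 784)                          # a batch of inputs
loss = nn.functional.mse_loss(autoencoder(x), x)  # reconstruction error
loss.backward()
# With a 784-unit hidden layer the net could learn the identity map, i.e.
# pure copying; the 32-unit bottleneck forces it to find structure instead.
```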

Know what else absolutely dominates humans in memorization?

Apes.

Interestingly, in humans there is a massive gender difference in blind memorization ability, but apparently no difference across races.

Socially, consider what this all means in regard to schools' & standardized tests' slow multi-decade shift away from measuring general intelligence, toward just measuring blind memorization ability.

What a coincidence that those on the path to being ~100% of the teachers happen to be on the side that excels at memorization.

Now you might see how so many of the "smart," who excel in academia through blind memorization, can paradoxically seem so stupid at basic reasoning.

Memorization & intelligence: not only separate, but directly at odds, and neural network architecture design gives us big insight into exactly why that is.

Be happy your LLM on its own is bad at arithmetic, because if it weren't, it'd be much dumber.
πŸ‘10😱3❀1πŸ‘1
Great memorization → great retardation
πŸ‘17πŸ‘€4πŸ™ˆ4❀2😁1
Cost of AI is too d*** high
Fellas, AI has surpassed the babies

"For robots to be useful outside labs and specialized factories we need a way to teach them new useful behaviors quickly. Current approaches lack either the generality to onboard new tasks without task-specific engineering, or else lack the data-efficiency to do so in an amount of time that enables practical use. In this work we explore dense tracking as a representational vehicle to allow faster and more general learning from demonstration. Our approach utilizes Track-Any-Point (TAP) models to isolate the relevant motion in a demonstration, and parameterize a low-level controller to reproduce this motion across changes in the scene configuration. We show this results in robust robot policies that can solve complex object-arrangement tasks such as shape-matching, stacking, and even full path-following tasks such as applying glue and sticking objects together, all from demonstrations that can be collected in minutes."

Arxiv PDF
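
A toy sketch of the core retargeting step (numpy only; real TAP models and robot controllers are far more involved, and every name here is illustrative): track the same points in the demo and the live scene, fit the rigid transform between them, and replay the demonstrated motion through it.

```python
# Sketch: Kabsch-style alignment of tracked points, a toy stand-in for
# retargeting a demonstrated motion to a rearranged scene.
import numpy as np

def fit_rigid_transform(demo_pts, live_pts):
    """Least-squares rotation R and translation t with live ~= R @ demo + t;
    both arrays are N x 3 tracked-point coordinates."""
    demo_c, live_c = demo_pts.mean(axis=0), live_pts.mean(axis=0)
    H = (demo_pts - demo_c).T @ (live_pts - live_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = live_c - R @ demo_c
    return R, t

# Given points a TAP model tracked in the demo and re-found in the live scene,
# each demo waypoint w is replayed as R @ w + t by the low-level controller.
```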