Axis of Ordinary
3.72K subscribers
4.34K photos
1.22K videos
6 files
5.33K links
Memetic and cognitive hazards.

Substack: https://axisofordinary.substack.com/
I still find it hard to deal with this level of time inconsistency. For example, consider all the people who issued dire warnings about Trump and now work for him. Either their words carry no epistemic content, or these people are consistently and dramatically wrong in their judgment.

ETA: I don't want to discourage people from updating on evidence or making peace with their enemies. Great! That should be encouraged!
🀑3πŸ‘2
😁25πŸ”₯3🀣3πŸ’©2🀑2🀬1πŸ₯±1
What you see here is fully autonomous, 1x speed, run on the exact same model.

GENE-26.5 from Genesis AI can cook in an unsimplified, real-world setting involving more than 20 subtasks. It can also run laboratory experiments with mm-level precision and complex tool use.

Read more: https://www.genesis.ai/blog/gene-26-5-advancing-robotic-manipulation-to-human-level
πŸ‘4
GPT-Realtime-2 in the API: OpenAI's most intelligent voice model yet, bringing GPT-5-class reasoning to voice agents.

More: https://openai.com/index/advancing-voice-intelligence-with-new-models-in-the-api/
πŸ‘2
Getting ready for the big parade.
😁24🀑4πŸ’©1
Germany's Dornier DAR (1980s)
-> Israel's IAI Harpy
-> Iran's Shahed-136
-> Russia's Geran-2
-> American innovation
😁18
Anyone who has been following AI development and the LessWrong sphere for more than 20 years knows that these people are not hyping when they say that AI might pose an existential risk. For everyone else, here is a data point from the OpenAI vs. Musk trial. A private conversation between Google DeepMind co-founder Demis Hassabis and Elon Musk.

Previous anonymous reports (2024):
- https://openai.com/index/openai-elon-musk/
- https://www.lesswrong.com/posts/5jjk4CDnj9tA7ugxr/openai-email-archives-from-musk-v-altman

Identification of the original sender as Demis Hassabis (2026):
- https://www.theverge.com/ai-artificial-intelligence/923518/musk-altman-trial-openai-demis-hassabis-google-deepmind

The Scott Alexander post linked in the email:
- https://slatestarcodex.com/2015/12/17/should-ai-be-open/
🀑6πŸ‘3πŸ’©2
Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations: https://www.lesswrong.com/posts/oeYesesaxjzMAktCM/natural-language-autoencoders-produce-unsupervised
🀣2❀1πŸ‘1🀑1πŸ₯±1πŸ‘€1
Figure taught two robots to make a bed together - fully autonomous: https://www.figure.ai/news/helix-02-bedroom-tidy

Helix-02 running simultaneously on 2 robots, fully onboard, doing a full bedroom reset from pixels-to-actions.

There's no explicit messaging between these robots; they coordinate fully visually, e.g. via head nods.

1x speed, fully autonomous, no teleop.
πŸ†’4πŸ’©1
DeepMind achieves 47.9% on FrontierMath T4, up from GPT-5.5 Pro’s previous SoTA score of 39.6%. Nine months ago, the best system achieved 6%.

T4 consists of research-level math problems above PhD qualifying/Olympiad difficulty. All solutions are private and therefore not in the training data.

How? They orchestrate AI agents around the workflow of real mathematicians. A project coordinator agent talks to the user, clarifies the research question, breaks it into goals, and delegates to parallel workstream coordinators. These can in turn call specialized sub-agents for literature review, coding, proof attempts, computational searches, and review. The system uses a shared workspace, internal messaging, version history, and persistent files, so the project has memory across many steps instead of being a transient chat.

Paper: https://arxiv.org/abs/2605.06651
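The workflow described above (project coordinator, parallel workstream coordinators, specialized sub-agents, shared persistent workspace) can be sketched very loosely in code. Everything here is an illustrative assumption: the class names, the two-goal decomposition, and the stubbed results stand in for real model calls and are not the paper's actual implementation.

```python
# Hypothetical sketch of the coordinator/sub-agent layout described above.
# All names and the stubbed results are assumptions, not DeepMind's code.
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Shared project memory: persistent files plus an internal message log."""
    files: dict = field(default_factory=dict)
    messages: list = field(default_factory=list)

    def post(self, sender, text):
        self.messages.append((sender, text))

class SubAgent:
    """Specialized worker, e.g. literature review, proof attempts, review."""
    def __init__(self, role):
        self.role = role

    def run(self, goal, ws):
        result = f"{self.role} result for: {goal}"  # stand-in for an LLM call
        ws.files[f"{self.role}/{goal}"] = result    # persists across steps
        ws.post(self.role, result)
        return result

class WorkstreamCoordinator:
    """Owns one goal and delegates it to its sub-agents."""
    def __init__(self, roles):
        self.subagents = [SubAgent(r) for r in roles]

    def run(self, goal, ws):
        return [a.run(goal, ws) for a in self.subagents]

class ProjectCoordinator:
    """Clarifies the research question, splits it into goals, delegates."""
    def solve(self, question, ws):
        goals = [f"{question} :: goal {i}" for i in (1, 2)]  # stub decomposition
        for goal in goals:
            WorkstreamCoordinator(["literature", "proof", "review"]).run(goal, ws)
        return ws

ws = ProjectCoordinator().solve("bound the minimal counterexample", Workspace())
print(len(ws.files), len(ws.messages))
```

The point of the shared `Workspace` is the part the post emphasizes: the project keeps memory (files, messages, history) across many steps instead of being a transient chat.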
πŸ‘8🀑4
Fields Medalist Timothy Gowers tries GPT-5.5 Pro:

...if AI mathematics continues to progress at anything like its current rate -- which is what I expect to happen -- then we will face a crisis very soon...

Read his full report: https://gowers.wordpress.com/2026/05/08/a-recent-experience-with-chatgpt-5-5-pro/
πŸ”₯8πŸ₯±2
Imagine telling Mikhail Gorbachev in 1986 that, forty years later, Donald Trump would announce a three-day ceasefire so Moscow could hold its big annual military parade. Except there would be no tanks, no missiles, no hardware at all. Instead, North Korean soldiers would march through Red Square while the announcer praised them for helping β€œliberate” the Kursk region from β€œneo-Nazi invaders.”
😁19πŸ₯±4πŸ‘2😒1🀣1
✍3
With Fields Medalists now saying that the latest AI systems are useful for research-level math, I want to remind everybody that this was predicted by Scott Alexander's famous 2019 post about GPT-2.[1][2]

GPT-2 could not count past five without making mistakes. But the very fact that it could count to five was astonishing. He called GPT-2 a step toward general intelligence.

I invite you to think about AI systems today in a similar way. Don't let their shortcomings make you dismissive. Be amazed by what they can already do and extrapolate from there.

There are two types of people in the world these days. Those who believe in straight lines on log graphs, and those who don't.

-- tautologer

P.S. Remember that we are far past the pure LLM era. Modern AI systems use LLMs as intuition modules, pruning the search space. They are just one part of orchestrated AI agents with memory, grounded in real-world feedback loops by verifiers and equipped with search and evolutionary algorithms.[3][4] And these systems have barely reached the MS-DOS level of what is possible.

[1] https://gowers.wordpress.com/2026/05/08/a-recent-experience-with-chatgpt-5-5-pro/
[2] https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/
[3] https://arxiv.org/abs/2605.06651
[4] https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
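As a toy illustration of the loop described in the P.S. (a model proposing candidates as an "intuition module", a verifier grounding them in feedback, evolutionary selection keeping the best), here is a minimal sketch. `propose()` is a random stand-in for an LLM call, and the target sequence is made up; none of this reflects AlphaEvolve's actual internals.

```python
# Toy verifier-grounded evolutionary search; propose() stands in for an LLM.
import random

random.seed(0)

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]  # ground truth the verifier checks against

def verify(candidate):
    """Grounded feedback loop: score = number of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def propose(parent):
    """Stand-in for the LLM: mutate one position at random.

    A real model would prune the search space with learned intuition;
    this stub does not."""
    child = parent[:]
    child[random.randrange(len(child))] = random.randrange(10)
    return child

def evolve(generations=200, population=8):
    pool = [[0] * len(TARGET) for _ in range(population)]
    for _ in range(generations):
        children = [propose(random.choice(pool)) for _ in range(population)]
        # Keep the highest-scoring candidates (elitist selection).
        pool = sorted(pool + children, key=verify, reverse=True)[:population]
        if verify(pool[0]) == len(TARGET):
            break
    return pool[0]

best = evolve()
print(best, verify(best))
```

Swapping the random `propose()` for a model that makes informed edits is exactly the "LLM as intuition module" move: the verifier and the selection loop stay the same, but the search gets dramatically cheaper.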
πŸ‘6🀑3πŸ€”2πŸ™1
New video model from Google: Gemini Omni

It's real: https://gemini.google.com/share/7d5dc678c80a

via X/chetaslua
πŸ’©5πŸ”₯2
Mythos found one (1) vulnerability in curl - an open-source software product with an installed base of 20 billion instances: https://daniel.haxx.se/blog/2026/05/11/mythos-finds-a-curl-vulnerability/
🀑8πŸ”₯2
Unitree Unveils: GD01, A Manned Transformable Mecha
❀5πŸ₯±2
Thread of deep reasons behind simple facts: https://x.com/anderssandberg/status/2053757849918939364

Many β€œseparate” theorems are the same structure seen in different clothes: Stokes unifies integration theorems; RG explains why Gaussians are universal; diagonal/fixed-point arguments unify major impossibility theorems; Noether turns symmetry into conservation; Legendre duality connects mechanics, thermodynamics, and optimization; exponentials are the Lie-theoretic bridge from infinitesimal addition to finite multiplication.
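The Stokes entry in that list, for instance, is a single identity whose low-dimensional cases are the classical integration theorems (sketched here from memory, not taken from the linked thread):

```latex
% Generalized Stokes theorem: for a smooth (n-1)-form \omega on an
% oriented n-manifold \Omega with boundary \partial\Omega,
\[
  \int_{\partial \Omega} \omega \;=\; \int_{\Omega} d\omega .
\]
% Choosing \Omega and \omega appropriately recovers, as special cases:
% the fundamental theorem of calculus (n = 1),
% Green's theorem (n = 2), and
% the classical Stokes and divergence theorems (n = 3).
```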
πŸ”₯3🀑1