Axis of Ordinary
3.72K subscribers
4.34K photos
1.22K videos
6 files
5.34K links
Memetic and cognitive hazards.

Substack: https://axisofordinary.substack.com/
As I wrote before, the only way to turn the ship around at this point is to immediately pump €1 trillion into a Manhattan Project for a European competitive AI model and another trillion into a next-generation nuclear buildup.

Obviously, it won't happen. It's over. The only question now is whether an American like Trump will shape the future or an American like AOC. But, barring some miracle, it won't be Europe. China still has a chance, though.
πŸ‘12🀣4🀑3
For comparison, Russia has spent roughly $600 billion since 2022 on one of the largest conventional interstate wars since 1945. Morgan Stanley estimates that U.S. big tech hyperscalers' capital expenditures will total $800 billion in 2026.

Now let's look at Europe. Mistral AI, Europe's only serious general-purpose AI lab, will spend about €1–3 billion on capex in 2026 ($1.2–3.5 billion at current exchange rates). All European AI-lab capex combined is roughly in the €10–15 billion ballpark for 2026.
The combined $3.75T valuation of SpaceX, OpenAI, and Anthropic exceeds all US dot-com IPOs from 1995-2000 ($3T across ~2,600 companies).

The AI/Space trio's value equals nearly half of all US IPOs from 1946-1994 (~$7.8T over 48 years and ~9,000 companies).

Chart by Paul Kedrosky.
It looks like the economists were right after all: even in a world ruled by superintelligences, monkeys will still have a job.

Also, note that they censored the monkey's tits.
They are actually underspending. Whoever controls AI controls the universe.

https://www.ft.com/content/ce8a1b9d-1427-472f-9585-294c7af2e0fb?syn-25a6b1a6=1
Anthropic seems fully committed to winning this race: https://www.anthropic.com/news/higher-limits-spacex

The most interesting dynamic here is how competitors in the AI race, Google and Musk, are selling compute to Anthropic.

The logic here is probably that Anthropic would be getting the compute anyway, and they'd rather be the ones selling/controlling it. Another angle is that they're effectively getting a paying customer to bear the cost of debugging their hardware platform.

Alphabet also has equity in Anthropic. So even if Anthropic wins, they might benefit. If Anthropic loses but spends the $200B on TPUs first, Google still wins. Google is positioned to profit from a wider range of outcomes than if it bet purely on Gemini.

Then there is the circular revenue mechanic. The capital Anthropic raises from Google and Nvidia flows directly back to them as compute purchases, which supports valuations, which in turn fund more capex. The cash is largely round-tripping between investors, labs, and clouds.
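A toy model of that round-tripping, with made-up numbers (not real company figures): an investor that is also a compute vendor puts money into a lab, a share of it comes straight back as compute revenue, and that revenue helps underwrite the next round.

```python
# Toy sketch of the circular revenue mechanic described above.
# All figures are illustrative assumptions, not real financial data.

def round_trip(investment: float, compute_share: float, rounds: int) -> float:
    """Total revenue the investor/vendor books back from its own capital."""
    vendor_revenue = 0.0
    for _ in range(rounds):
        spend = investment * compute_share   # lab spends a share on the vendor's compute
        vendor_revenue += spend              # vendor books that spend as revenue
        investment = spend                   # simplification: revenue funds the next round
    return vendor_revenue

# Hypothetical: a $10B round, 60% spent on the investor's cloud, 3 cycles.
print(round(round_trip(10.0, 0.6, 3), 2))  # 6 + 3.6 + 2.16 -> 11.76
```

The point of the toy: if the compute share is high, most of the invested cash shows up again on the investor's own income statement within a few cycles.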
I still find it hard to deal with this level of time inconsistency. For example, consider all the people who issued dire warnings about Trump and now work for him. Either their words carry no epistemic content, or these people are consistently and dramatically wrong in their judgment.

ETA: I don't want to discourage people from updating on evidence or making peace with their enemies. Great! This should be encouraged!
What you see here is fully autonomous, at 1x speed, and running on the exact same model.

GENE-26.5 from Genesis AI can cook in an unsimplified, real-world setting involving more than 20 subtasks. It can also carry out laboratory experiments with mm-level precision and complex tool use.

Read more: https://www.genesis.ai/blog/gene-26-5-advancing-robotic-manipulation-to-human-level
πŸ‘4
GPT-Realtime-2 in the API: OpenAI's most intelligent voice model yet, bringing GPT-5-class reasoning to voice agents.

More: https://openai.com/index/advancing-voice-intelligence-with-new-models-in-the-api/
πŸ‘2
Getting ready for the big parade.
Germany's Dornier DAR (1980s)
-> Israel's IAI Harpy
-> Iran's Shahed-136
-> Russia's Geran-2
-> American innovation
Anyone who has been following AI development and the LessWrong sphere for more than 20 years knows that these people are not hyping when they say that AI might pose an existential risk. For everyone else, here is a data point from the OpenAI v. Musk trial: a private conversation between Google DeepMind co-founder Demis Hassabis and Elon Musk.

Previous anonymous reports (2024):
- https://openai.com/index/openai-elon-musk/
- https://www.lesswrong.com/posts/5jjk4CDnj9tA7ugxr/openai-email-archives-from-musk-v-altman

Identification of the original sender as Demis Hassabis (2026):
- https://www.theverge.com/ai-artificial-intelligence/923518/musk-altman-trial-openai-demis-hassabis-google-deepmind

The Scott Alexander post linked in the email:
- https://slatestarcodex.com/2015/12/17/should-ai-be-open/
Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations https://www.lesswrong.com/posts/oeYesesaxjzMAktCM/natural-language-autoencoders-produce-unsupervised
Figure taught two robots to make a bed together - fully autonomous: https://www.figure.ai/news/helix-02-bedroom-tidy

Helix-02 running simultaneously on 2 robots, fully onboard, doing a full bedroom reset from pixels-to-actions.

There's no explicit messaging between these robots; they coordinate their actions purely visually, e.g. via head nods.

1x speed, fully autonomous, no teleop.
DeepMind achieves 47.9% on FrontierMath T4, up from GPT-5.5 Pro’s previous SoTA score of 39.6%. Nine months ago, the best system achieved 6%.

T4 consists of research-level math problems above PhD qualifying/Olympiad difficulty. All solutions are private and therefore not in the training data.

How? They orchestrate AI agents around the workflow of real mathematicians. A project coordinator agent talks to the user, clarifies the research question, breaks it into goals, and delegates to parallel workstream coordinators. These can in turn call specialized sub-agents for literature review, coding, proof attempts, computational searches, and review. The system uses a shared workspace, internal messaging, version history, and persistent files, so the project has memory across many steps instead of being a transient chat.
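The coordinator/workstream pattern described above can be sketched in a few dozen lines. This is a minimal illustrative skeleton, not DeepMind's actual system: every class and method name here is an assumption, and the sub-agents are stubs where the real system would make model calls.

```python
# Hypothetical sketch of a project-coordinator agent hierarchy with a
# shared persistent workspace. Stubs stand in for actual LLM calls.
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Shared state: files, internal messages, version history."""
    files: dict = field(default_factory=dict)
    messages: list = field(default_factory=list)
    history: list = field(default_factory=list)

    def write(self, path: str, content: str) -> None:
        self.history.append((path, self.files.get(path)))  # keep prior version
        self.files[path] = content

class SubAgent:
    """Specialist: literature review, coding, proof attempts, etc."""
    def __init__(self, role: str):
        self.role = role
    def run(self, goal: str, ws: Workspace) -> str:
        result = f"[{self.role}] result for: {goal}"  # stub for a model call
        ws.messages.append(result)                    # persist to shared memory
        return result

class WorkstreamCoordinator:
    """Owns one goal; fans it out to its specialist sub-agents."""
    def __init__(self, specialists: list):
        self.specialists = specialists
    def run(self, goal: str, ws: Workspace) -> list:
        return [agent.run(goal, ws) for agent in self.specialists]

class ProjectCoordinator:
    """Clarifies the research question, splits it into goals, delegates."""
    def decompose(self, question: str) -> list:
        return [f"goal {i}: {question}" for i in range(2)]  # stub decomposition
    def run(self, question: str, ws: Workspace) -> Workspace:
        for goal in self.decompose(question):
            stream = WorkstreamCoordinator(
                [SubAgent("literature"), SubAgent("proof")])
            stream.run(goal, ws)
        return ws

ws = ProjectCoordinator().run("bound the growth rate of f(n)", Workspace())
print(len(ws.messages))  # 2 goals x 2 specialists -> prints 4
```

The key property the paper's description implies is the last one in the sketch: results land in a persistent workspace rather than a transient chat, so the project keeps memory across many delegation steps.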

Paper: https://arxiv.org/abs/2605.06651
πŸ‘8🀑4
Fields Medalist Timothy Gowers tries GPT-5.5 Pro:

...if AI mathematics continues to progress at anything like its current rate -- which is what I expect to happen -- then we will face a crisis very soon...

Read his full report: https://gowers.wordpress.com/2026/05/08/a-recent-experience-with-chatgpt-5-5-pro/