The Triangle of Everything: https://commons.wikimedia.org/wiki/File:Triangle_of_everything_simplified_2_triangle_of_everything_-_Planck_Units.png
As I wrote before, the only way to turn the ship around at this point is to immediately pump €1 trillion into a Manhattan Project for a competitive European AI model and another trillion into a next-generation nuclear buildup.
Obviously, it won't happen. It's over. The only question is now whether an American like Trump will shape the future or an American like AOC. But, without some miracle, it won't be Europe. China still has a chance, though.
For comparison, Russia has spent roughly $600 billion since 2022 on one of the largest conventional interstate wars since 1945. Morgan Stanley estimates that U.S. big tech hyperscalers' capital expenditures will total $800 billion in 2026.
Now let's look at Europe. Mistral AI, Europe's only serious general-purpose AI lab, will spend about €1–3 billion in capex in 2026 ($1.2–3.5 billion at current exchange rates). All European AI-lab capex combined is roughly in the €10–15 billion ballpark for 2026.
It looks like the economists were right after all: even in a world ruled by superintelligences, monkeys will still have a job.
Also, note that they censored the monkey's tits.
They are actually underspending. Whoever controls AI controls the universe.
https://www.ft.com/content/ce8a1b9d-1427-472f-9585-294c7af2e0fb?syn-25a6b1a6=1
Anthropic seems fully committed to winning this race: https://www.anthropic.com/news/higher-limits-spacex
The most interesting dynamic here is how competitors in the AI race, Google and Musk, are selling compute to Anthropic.
The logic here is probably that Anthropic would be getting the compute anyway, and they'd rather be the ones selling/controlling it. Another angle is that they're effectively getting a paying customer to bear the cost of debugging their hardware platform.
Alphabet also has equity in Anthropic. So even if Anthropic wins, they might benefit. If Anthropic loses but spends the $200B on TPUs first, Google still wins. Google is positioned to profit from a wider range of outcomes than if it bet purely on Gemini.
Then there is the circular revenue mechanic. The capital that Anthropic raises from Google and Nvidia directly flows back to buy compute from them, which helps their valuation, which in turn funds more capex. The cash is largely round-tripping between investors, labs, and clouds.
I still find it hard to deal with this level of time inconsistency. For example, consider all the people who issued dire warnings about Trump and now work for him. Either their words carry no epistemic content, or these people are consistently and dramatically wrong in their judgment.
ETA: I don't want to discourage people from updating on evidence or making peace with their enemies. Great! That should be encouraged!
What you see here is fully autonomous, 1x speed, run on the exact same model.
GENE-26.5 from Genesis AI can cook in an unsimplified, real-world setting involving more than 20 subtasks. It can also perform laboratory experiments with mm-level precision and complex tool use.
Read more: https://www.genesis.ai/blog/gene-26-5-advancing-robotic-manipulation-to-human-level
GPT-Realtime-2 in the API: OpenAI's most intelligent voice model yet, bringing GPT-5-class reasoning to voice agents.
More: https://openai.com/index/advancing-voice-intelligence-with-new-models-in-the-api/
Anyone who has been following AI development and the LessWrong sphere for more than 20 years knows that these people are not hyping when they say that AI might pose an existential risk. For everyone else, here is a data point from the OpenAI vs. Musk trial. A private conversation between Google DeepMind co-founder Demis Hassabis and Elon Musk.
Previous anonymous reports (2024):
- https://openai.com/index/openai-elon-musk/
- https://www.lesswrong.com/posts/5jjk4CDnj9tA7ugxr/openai-email-archives-from-musk-v-altman
Identification of the original sender as Demis Hassabis (2026):
- https://www.theverge.com/ai-artificial-intelligence/923518/musk-altman-trial-openai-demis-hassabis-google-deepmind
The Scott Alexander post linked in the email:
- https://slatestarcodex.com/2015/12/17/should-ai-be-open/
Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations https://www.lesswrong.com/posts/oeYesesaxjzMAktCM/natural-language-autoencoders-produce-unsupervised
Figure taught two robots to make a bed together, fully autonomously: https://www.figure.ai/news/helix-02-bedroom-tidy
Helix-02 running simultaneously on 2 robots, fully onboard, doing a full bedroom reset from pixels-to-actions.
There's no explicit messaging between the robots; they coordinate their actions fully visually, e.g. via head nods.
1x speed, fully autonomous, no teleop.