You can check out the full discussion here, with links to listen on YouTube, Spotify, and your favorite podcast app: https://x.com/MuseumofCrypto/status/1946242841757229158
Launch a B200, send us a screenshot on X, and get 100 free GPU hours. 3x 100 hours left.
https://x.com/comput3ai/status/1949401020477407586
This week in Comput3.ai:
Friday: shipped c3-llamacpp to run and download big models
Saturday: added unirig and rembg
Sunday: added B200s
Monday: added and tested Qwen 3 Coder and Kimi K2
Tuesday: new website
Wednesday: got Claude Code running on our B200s
Thursday: Stream with ElizaOS
Friday: Vibe code stream on our B200s.
Mic drop 🎤 $COM 🚀
https://x.com/comput3ai/status/1950591692681228659
The biggest news by far was this: we got Kimi K2 and Qwen 3 Coder running on our infrastructure. How? We got B200s this week. These models are huge and barely fit. Who else has B200s, which launched just a couple of months ago? xAI, Anthropic, OpenAI. Yes, we're part of that same league now. No other crypto or web3 AI project currently has B200s. It's that big a deal.
Subsequently we got the best agentic coding solution (vibe coding), Claude Code, running against OUR infrastructure instead of Anthropic's. Mark your calendars: July 30, 2025. This was a huge achievement for everyone in crypto, AI, and open-source AGI. This was the Sputnik moment. We went to space.
https://x.com/comput3ai/status/1950577750302953701
Friendly reminder and public service announcement: want access to this tech and these coding models? Easy. The CA is
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom. Buy 1M $COM and stake it to get access to our inference API and 1,000 GPU hours every month.
Happy to see we're not alone: io.net launched Training as a Service. This is a great idea and something we ourselves have been working on internally and are looking to release in the coming months. Did you know users train models on OpenAI that they can only host on OpenAI? How crazy is that? You pay for them to make a model for you, and then you can only go to them to use it? We've been working on all types of model training internally. We've set numerous milestones in terms of how many hundreds of GPUs we connect to each training run. And we're absolutely confident it's going to be a huge part of our portfolio going forward, likely the biggest. We have a couple of tricks up our sleeves that will make us the place to go to. Watch this space.
https://x.com/comput3ai/status/1950812363503886639?t=aX2HKfKiidkqbycPHFJSkw&s=19
Our aim is to be the place to host, train, and validate your models, not just for web3 but anywhere. These will be our three core directions. Right now our users and stakers can already benefit from our model hosting, which is what we went to market with. We just launched, but it's already time for a major upgrade:
1. Claude Code compatible APIs for models.
2. Rendering APIs for images and video. You can do this to some degree already, but you have to connect to your host directly; we will abstract this away.
3. MCP servers to handle everything our API can do.
4. Adding Qwen 3 Coder and Kimi K2; these will be premium models.
5. Simplified tagging and model access, i.e., the premium tier will have the premium models, Hermes etc. will be lower tiers.
6. A simpler "chat" UI for Comput3, which will all run off of this API.
7. Credit card payments and EVM payments.
8. With EVM payments we hope to onboard projects and announce collabs outside of Solana. Staking will remain on Solana for the time being.
9. For fiat payments we'll offer monthly "subscriptions", the fiat version of staking. We'll also sell input/output tokens for the models.
10. The hosting provider for the API and your data will be in Germany (Hetzner). The GPUs will be in secure datacenters across the world, but the way we run things, they do not retain or log your data. Our goal is to be GDPR/EU privacy compliant for those who want to use us in Web1/Web2.
11. If you want to use us at your employer, we'll offer B2B volume discounts for anyone onboarding an entire team, for example. We won't train on your data; you're not the product.
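As a rough sketch of what item 1 could look like from the client side, here is an Anthropic-messages-style request body. The endpoint URL and model name are placeholder assumptions for illustration, not confirmed Comput3 values.

```python
import json

# Hypothetical endpoint; a real deployment would publish its own URL.
API_URL = "https://api.example.com/v1/messages"

def build_chat_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build an Anthropic-messages-style request body for a chat completion."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

# What an HTTP client (curl, requests, an SDK) would POST to API_URL:
body = build_chat_request("kimi-k2", "Write a haiku about GPUs.")
payload = json.dumps(body)
```

Any client that already speaks this wire format (Claude Code included) can then be pointed at the self-hosted endpoint without code changes.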
Open source is the way. Follow the infra. $COM CA:
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
Going live with ElizaOS here in a bit. We'll do a quick ElizaOS demo and then hang out for Q&A. Join us, and ask questions on their Discord! https://www.youtube.com/watch?v=S0lHFzfvgx8
What an epic hack. One of our partners, zeroone, will be giving away $COM to artists so that they have sufficient GPU hours. A real sense of community. Only on @zero____one and @comput3ai. Here's to many more collaborations and many more examples of building together.
https://x.com/zero____one/status/1951026135015538886?t=MGMjjPeg6a7tuo82EbWYBg
Confirmed working: Claude Code running on Kimi K2 on @comput3ai's B200s. In this video we're redirecting Claude to our servers. This is full context length at Q8. The biggest spend for most startups today is coding models, whether that's Cursor or Claude Code. We can now self-host this. Let that sink in: we can host something that engineers worldwide spend hundreds of dollars per month on, and we can offer it to subscribers and $COM stakers at a fraction of the price. All of this is quickly becoming the most critical spend for any startup.
Mark your calendars. We'll live stream tonight at noon Eastern and play around with this.
The ticker is $COM. The CA is J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom.
https://x.com/comput3ai/status/1951207757463388200
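For anyone curious how a redirect like the one in the video can work: Claude Code reads its API base URL from the environment, so pointing it at any Anthropic-compatible endpoint is a few exports. The host, token, and model name below are placeholders for illustration, not real Comput3 values.

```shell
# Hedged sketch: redirect Claude Code to a self-hosted, Anthropic-compatible API.
# The URL, token, and model name are placeholder assumptions.
export ANTHROPIC_BASE_URL="https://your-b200-endpoint.example.com"  # self-hosted gateway
export ANTHROPIC_AUTH_TOKEN="your-api-key"                          # key for that gateway
export ANTHROPIC_MODEL="kimi-k2"                                    # hypothetical model name
claude  # requests now go to the base URL instead of api.anthropic.com
```

Unset the variables (or start a fresh shell) to send Claude Code back to Anthropic's hosted API.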
Still hard to believe we're the only ones in the world hosting the latest open-source SOTA models on B200s. Full stop. Last week it felt like we went to space. We think what's planned for the next couple of weeks will feel like a lunar landing. 🚀
What does our distributed AI network look like? Right now we're serving GPUs on launch.comput3.ai from 🇺🇸🇸🇪🇧🇷🇮🇳🇦🇪. B200s, H200s, H100s, L40S, and 48 GB 4090s. 5 countries. 4 continents. 5 types of GPUs.
We have big things in store. We're just getting started. We think we can dominate in every vertical we touch. We'll be launching access with fiat payments for web2 and "normies" as early as next week.
ABC - Always be cooking.
$COM
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
Yesterday we did our live stream using c3-llamacpp (https://github.com/comput3ai/c3-llamacpp). We've now replaced it with the final boss of high-performance LLM APIs (https://github.com/comput3ai/c3-vllm).
How much of a difference did it make? 200-350 tokens/s yesterday; now we're getting 2,000-3,000 tokens/s. That's roughly a 9x jump. It's insanely fast. We'll be beta testing this next week and properly launching it by the end of next week.
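Sanity-checking that jump from the throughput ranges quoted above:

```python
# Back-of-envelope speedup from the quoted throughput ranges.
old_lo, old_hi = 200, 350      # c3-llamacpp, tokens/s
new_lo, new_hi = 2000, 3000    # c3-vllm, tokens/s

midpoint_speedup = ((new_lo + new_hi) / 2) / ((old_lo + old_hi) / 2)
worst_case = new_lo / old_hi   # slowest new run vs fastest old run
best_case = new_hi / old_lo    # fastest new run vs slowest old run

print(f"~{midpoint_speedup:.1f}x at the midpoints ({worst_case:.1f}x-{best_case:.1f}x range)")
```

Depending on which ends of the ranges you compare, the speedup is anywhere from about 5.7x to 15x, with roughly 9x at the midpoints.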
ABC - Always be cooking.
$COM
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
Every morning when we wake up, we check our clusters and see how our B200s are doing. Then you take a moment to appreciate that no one else is currently running B200s. That's when you know it's going to be a good day!
Here's another true story: a fintech client recently told us they love everything we do, but they can't show our website to their manager. Problem solved: we'll be adding a check to see if you have Phantom or MetaMask installed. If you don't, we'll forward you to a white-label version of the website with stock photography of office buildings. We're now compatible with web2 ✅
$COM
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
People are noticing all the hard work that @nedos et al. are putting in. It's a lot of work, and uncharted waters. Did you know B200s are Blackwell based? Blackwell is a new GPU architecture; that's the B in B200. 5090s are also Blackwell. Blackwell is way faster, but its architectural changes also broke a lot.
https://x.com/reneil1337/status/1952303769020211305
Then we promptly committed these changes as open source and contributed them back to the community. Now anyone with B200s can run this. Admittedly, they're not that easy to get. https://github.com/comput3ai/c3-vllm
Don't have any B200s, or don't feel like running them? That's okay too. We have some.
The token is $COM. The CA is
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom