What an epic hack. One of our partners, zeroone, will be giving away $COM to artists so that they have sufficient GPU hours. A true sense of community. Only on @zero____one and @comput3ai. Here's to many more collaborations and many more examples of building together.
https://x.com/zero____one/status/1951026135015538886?t=MGMjjPeg6a7tuo82EbWYBg
Confirmed working - Claude Code running on Kimi K2 on @comput3ai's B200s. In this video we're redirecting Claude to our servers, at full context length @ Q8. The biggest spend for most startups today is coding models, whether that's Cursor or Claude Code. We can now self-host this. Let that sink in. We can host something that engineers worldwide spend hundreds of dollars per month on, and we can offer it to subscribers and $COM stakers at a fraction of the price. Coding models are quickly becoming the most critical spend for any startup.
Mark your calendars. We'll live stream tonight, noon Eastern. We'll play around with this.
The ticker is $COM. The CA is
https://x.com/comput3ai/status/1951207757463388200
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
Still hard to believe we're the only ones hosting the latest open-source SOTA models on B200s in the world. Full stop. Last week it felt like we went to space. We think the stuff planned for the next couple of weeks will feel like a lunar landing.
What does our distributed AI network look like? Right now we're serving GPUs on launch.comput3.ai from 🇺🇸🇸🇪🇧🇷🇮🇳🇦🇪. B200s, H200s, H100s, L40S, 4090 48GB. 5 countries. 4 continents. 5 types of GPUs.
We have big things in store. We're just getting started. We think we can dominate in every vertical we touch. We'll be launching access with fiat payments for web2 and "normies" as early as next week.
ABC - Always be cooking.
$COM
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
Yesterday we did our live stream using c3-llama-cpp (https://github.com/comput3ai/c3-llamacpp). We've now replaced that with the final boss of high-performance LLM APIs (https://github.com/comput3ai/c3-vllm).
How much of a difference did it make? 200-350 tokens/s yesterday; now we're getting 2000-3000 tokens/s. That's roughly 8x. It's insanely fast. We'll be beta testing this next week, and properly launching it by the end of next week.
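Since vLLM speaks the standard OpenAI-compatible API, pointing any client at it is trivial. A minimal sketch - the host and model name below are placeholders, not real comput3 values:

```javascript
// Hypothetical sketch: build a request for a vLLM server's
// OpenAI-compatible /v1/chat/completions endpoint.
// Base URL and model name are placeholders.
function chatRequest(baseUrl, model, prompt) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    body: {
      model,
      messages: [{ role: "user", content: prompt }],
      stream: true, // stream tokens so you can watch the throughput yourself
    },
  };
}

const req = chatRequest("https://example-vllm-host", "kimi-k2", "Hello!");
// fetch(req.url, { method: "POST", headers: { "Content-Type": "application/json" },
//                  body: JSON.stringify(req.body) });
```

Because the wire format matches OpenAI's, existing tooling works against the self-hosted endpoint without code changes.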
ABC - Always be cooking.
$COM
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
Every morning when we wake up, we check our clusters and see how our B200s are doing. Then we take a moment to appreciate that no one else is currently running B200s. That's when you know it's going to be a good day!
Here's another true story: we recently had a fintech client tell us they love everything we do, but they can't show our website to their manager. Problem solved: we'll be adding a check to see if you have Phantom or MetaMask installed. If you don't, we'll forward you to a white version of the website with stock photography of office buildings. We're now compatible with web2.
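The check itself is only a few lines of browser JS. A minimal sketch, assuming the standard injected globals (Phantom exposes `window.solana.isPhantom`, MetaMask exposes `window.ethereum.isMetaMask`); the fallback URL is a placeholder:

```javascript
// Hypothetical sketch of the wallet check described above.
// Written as a pure function so it can be exercised with mock objects.
function detectWallet(win) {
  if (win.solana && win.solana.isPhantom) return "phantom";
  if (win.ethereum && win.ethereum.isMetaMask) return "metamask";
  return null;
}

// In the browser: redirect visitors with no wallet extension installed.
// if (!detectWallet(window)) location.href = "https://example.com/corporate";
```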
$COM
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
People are noticing all the hard work that @nedos et al are putting in. It's a lot of work, and uncharted waters. Did you know B200s are Blackwell-based? That's a new GPU architecture - that's the B in B200. 5090s are also Blackwell. Blackwell is way faster, but it also broke a lot with its architectural changes.
https://x.com/reneil1337/status/1952303769020211305
Reneil (@reneil1337) on X
Some founders are built different @nedos pushing boundaries with @comput3ai
Then we promptly committed these changes as open source and contributed them back to the community. Now anyone with B200s can run this. Admittedly, they're not that easy to get. https://github.com/comput3ai/c3-vllm
Don't have any B200s, or don't feel like running them? That's okay too. We have some.
The token is $COM. The CA is
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
Some projects are just built differently. 20 min turnaround on this feature request. https://github.com/comput3ai/c3-docker-images/commit/5989a88a2086fc432845cdb7dd8c465ee63f338c
Qwen 3 Coder 30B is already very impressive. Expectations are high for the 480B. Try one right now https://launch.comput3.ai
Forwarded from Aya Hackathon Channel
🇮🇳 Aya AI Hackathon Bangalore 🇮🇳
Going live now!
Aya AI Hackathon x Comput3 Livestream - dive into MCP builds, free GPU hours & more.
https://youtube.com/live/7CbfBiQvhv0?feature=share
Bring your questions, grab some tips, and level up your #AyaAIHackathon project.
Decided to share this with everyone already so no one is surprised when we start posting about it. When we launched COMPUT3, we pronounced it "compute" - the idea was to fair launch on @autodotfun and give a subtle hint at what we're building. We thought there should be a web3-native compute platform that's built completely differently, from the ground up.
Thanks to your incredible support, our platform, capabilities, infrastructure, and especially our community, have grown by leaps and bounds, far beyond our wildest dreams. We didn't know what to expect, but we're truly humbled and grateful for where we are right now! None of this would be possible without you.
We posted this earlier this week, and, sadly, it's not a joke. This really happened: https://x.com/comput3ai/status/1952240139390066721?t=Is16AhDZsog1sZX_aMNagg&s=19
The feedback we get from non-web3 companies is that our branding doesn't appeal to them. We're a token that's all about community, and we do this for you guys at least as much as we do it for ourselves. We are working to build a platform where AI agents can themselves book compute. That has to be in crypto. But these non-Web3 folks just want to use our tech; they don't care how cyberpunk our website looks. In fact, it's a reason for them to look away.
All of that is to say, we're announcing today that we're launching compute3.ai (with an 'e') as the corporate face of comput3.ai, the token. We'll use a unified tech stack, and we'll make sure anything we build for web2 is available for web3. You can also follow us @compute3ai - we appreciate every one of you who joins us there.
We know our roots. We're Web3 all the way. But to facilitate revenue outside of Web3, this is a necessary step in our growth, and we're so thankful for your understanding and enthusiasm as we evolve. It feels like we've grown up, together. And we're just getting started.
All the Web3 content, live streams, and fun will stay here, but things you can buy and pay for in fiat will be over at @compute3ai. Thank you for being part of this journey!
Comput3 AI (@comput3ai) on X
A very serious web2 client told us they love everything we do, but they can't show our website to their manager. Problem solved: we'll now check if you have phantom or metamask installed and then forward you to a white version website with stock photography…
OpenAI is pitching gpt-oss as their edge model. What's edge? It means it can run locally on your machine, not in their cloud. This means more devices can run these models. They're more compact and optimized.
But people forget models have to be trained and what are they trained on? B200s and H200s. Training is always most efficient on the biggest hardware available. Always.
https://developer.nvidia.com/blog/delivering-1-5-m-tps-inference-on-nvidia-gb200-nvl72-nvidia-accelerates-openai-gpt-oss-models-from-cloud-to-edge/
Right now most distributed and decentralized training networks in Web3 are falling back to single consumer 3090s and 4090s. This makes sense while they're on testnet. Side note: Psyche runs on H100s on testnet, which puts them in a different league. But what happens when all these networks go to mainnet? What happens when they want to train real models? Where will they get B200s, H200s, H100s?
WHERE ARE THE GPUS, LEBOWSKI?!
Only on $COM
CA is
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
Delivering 1.5 M TPS Inference on NVIDIA GB200 NVL72, NVIDIA Accelerates OpenAI gpt-oss Models from Cloud to Edge
NVIDIA and OpenAI began pushing the boundaries of AI with the launch of NVIDIA DGX back in 2016. The collaborative AI innovation continues with the OpenAI gpt-oss-20b and gpt-oss-120b launch.
The world's fastest GPUs for AI training.
Large single GPUs for large models with large contexts.
Models you can run locally on your data.
ARE YOU GETTING IT?
https://openai.com/index/gpt-oss-model-card/
$COM CA:
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom
gpt-oss-120b & gpt-oss-20b Model Card
We introduce gpt-oss-120b and gpt-oss-20b, two open-weight reasoning models available under the Apache 2.0 license and our gpt-oss usage policy.
We had to enlist Steve to explain this to you. ARE YOU GETTING IT?
This was built using open-source models and rendered on our GPU network. Try it out right now through our bot: https://t.iss.one/C3PortraitBot
Support us in what we're buidling and get GPU hours for workflows like this every month by staking $COM.
$COM CA:
J3NrhzUeKBSA3tJQjNq77zqpWJNz3FS9TrX7H7SLKcom