Good summary: https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/
TL;DR: AI products sell to enterprises 2x better than traditional software, product-led growth FTW, horizontal solutions FTW, likelier boom than bubble.
👍2🔥1🥰1
Why Isn’t Erasure Coding Used More Broadly for Guaranteed Delivery?
I keep asking myself this question, and I still do not have a good answer.
Consider email or a typical messenger. Your computer — or an intermediary server — connects to a single server, sends the message, gets a confirmation, and considers the job done. Whether the message ultimately goes through is up to that service.
Yes, this works 99.99% of the time. But there is no true guarantee — and no real accountability. If a mail server delays or holds your message, you have no practical recourse. The protocol allows this ambiguity.
Blockchains offer a contrast. Transactions are confirmed by independent validators and visible via independent indexers. This works, but it comes with trade-offs. MEV attacks allow nodes to delay transactions to front-run others — technically allowed, but undesirable.
Here is the idea: use error-correcting codes to send messages or transactions directly at the client level.
Split a message into, say, 40 pieces, and add 10 redundant ones. Anyone with any 40 of the 50 pieces can reconstruct the original message. The message is encrypted, so servers cannot read it — only route it.
At the client level — browser, app, or terminal — the system randomly selects 50 nodes out of thousands.
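Here is a minimal Python sketch of that 40-of-50 scheme, using polynomial interpolation over a prime field, which is the same principle behind Reed-Solomon codes. The field size, the chunk representation, and the node counts are simplifications for illustration; a real network would use GF(2^8) and an optimized library.

import random

P = 2**61 - 1   # prime modulus; each chunk of the message is one field element
K, N = 40, 50   # any K of the N shares reconstruct the message

def interpolate(points, x):
    # Evaluate, at x, the unique degree-(K-1) polynomial through `points`, mod P.
    result = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        result = (result + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return result

# Systematic encoding: shares 1..K carry the message itself, shares K+1..N carry
# redundant evaluations of the unique polynomial through those K points.
message = [random.randrange(P) for _ in range(K)]
data = list(enumerate(message, start=1))
shares = data + [(x, interpolate(data, x)) for x in range(K + 1, N + 1)]

# Lose any 10 shares; whichever 40 survive still decode the full message.
random.shuffle(shares)
survivors = shares[:K]
assert [interpolate(survivors, x) for x in range(1, K + 1)] == message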
This has three effects.
First, it removes the server bottleneck. Instead of sending one large message to one server, you send 1/40th of it — about 2.5% — to each of 50 nodes, across different networks, continents, and routes.
Second, interfering with message routing becomes far harder. Suppressing a message based on content or headers would require controlling at least 11 of the 50 randomly chosen nodes — highly unlikely with thousands of nodes active at any time.
Third, it introduces real accountability. The client receives cryptographic acknowledgements from all 50 nodes confirming receipt. If those nodes continue serving others while failing to route this message, that is unmistakable foul play.
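To put a number on "highly unlikely" from the second point: if an adversary controls some fraction of the network and the client samples 50 nodes uniformly at random, the odds of hitting 11 or more bad nodes follow a hypergeometric tail. A stdlib-only sanity check, with made-up node counts:

from math import comb

def p_suppressed(total, bad, picked=50, need=11):
    # P(adversary holds >= `need` of the `picked` randomly chosen nodes).
    return sum(
        comb(bad, k) * comb(total - bad, picked - k)
        for k in range(need, min(bad, picked) + 1)
    ) / comb(total, picked)

print(p_suppressed(10_000, 500))    # 5% of nodes compromised: ~3e-5 per message
print(p_suppressed(10_000, 1_000))  # 10% of nodes compromised: ~1e-2 per message

Even with a tenth of the entire network captured, suppression succeeds roughly once per hundred messages, and the odds collapse as the adversary's share shrinks.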
Node operators could be required to stake funds as insurance. By signing a receipt, they attest that if they remain online and fail to route the message, the sender is entitled to compensation — automatically and within seconds.
In a Web3 setting, such a network can be economically self-sufficient. Users might deposit $10, maintain a $5 minimum balance, and pay $0.00001 per message; the $5 of spendable balance alone covers half a million messages. As the network grows and its token appreciates, early users may find their messaging effectively free.
Free — and secure.
Messages are encrypted. The system is on-chain, so public keys are easy to distribute, enabling messages that are both signed and encrypted, so that only the intended recipient can read them.
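As a toy illustration of that sign-then-encrypt flow, here is a sketch using PyNaCl; the on-chain key registry is assumed rather than shown, and the keys and message are made up on the spot.

from nacl.public import PrivateKey, SealedBox
from nacl.signing import SigningKey

sender = SigningKey.generate()      # sender's signing key
recipient = PrivateKey.generate()   # recipient's encryption key

signed = sender.sign(b"wire 10 ETH to treasury")          # sign first...
sealed = SealedBox(recipient.public_key).encrypt(signed)  # ...then encrypt

# Only the recipient can open the box; anyone holding the sender's public
# verify key can then check the signature on what came out.
opened = SealedBox(recipient).decrypt(sealed)
assert sender.verify_key.verify(opened) == b"wire 10 ETH to treasury"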
“Only the intended recipient” can already mean complex conditions — multisig approvals, hardware-backed keys, or social consensus. A board decision might require N members to sign, or company funds released only after M treasury holders approve.

This is mathematically sound, physically executable, and beneficial to everyone — except those who profit from the current lack of accountability. We all want our services to behave this way, right?
One last point.
In this design, all Terms of Service are final. You will never receive an email saying they changed “because we care about our users”, or that you "must accept the new terms to continue using the service".
What you agreed to is guaranteed forever — unless the provider shuts down entirely, in which case compensation can be paid from an escrow account on another network. That compensation should exceed what the user paid — reasonable mathematically, since routing messages is really, really not that hard.
And if the provider wants to upgrade, they must ask. You choose whether to migrate. There is no technical way to force you — and that feels like the right default.
Aside from the overall “it’s already good enough” sentiment, and aside from large players aligning ever more closely with increasingly controlling regulators, what exactly prevents us from making the above a reality in, say, ten years?
I’m still trying to wrap my head around the economic and moral aspects of taxation when it comes to paying for online-first products.
These days, this most often means AI agents. Here’s a completely hypothetical example — “asking for a friend.”
A user is working in an IDE with an AI assistant and starts running close to their usage quota. They’re happy with the service and want to pay the provider more money to get additional help from this AI assistant.
Let’s say we’re talking about $10, just to keep things simple.
What the user is ultimately paying for is a SaaS offering. Somewhere, there is hardware hosted in a datacenter performing various tasks — mostly GPU inference, plus orchestration and supporting infrastructure. The service provider will keep a substantial portion of those $10 as profit.
From the user’s perspective, no one really cares where this hardware is physically located. There may be regulatory constraints — especially if the code is private or sensitive — but those concerns fall on the service provider. We’re talking about an individual user.
And let’s assume, for simplicity, that the code is open source, and the author is streaming their work 24/7, making their prompts and development process fully public domain.
At the end of the day, this is just a human being willing to pay $10 for a service that another entity is willing to provide.
I want to calibrate my thinking here on moral grounds.
Is it reasonable to charge this person sales tax based on their physical location?
Is it reasonable to question which entity ultimately receives this money, especially if the $10 is reimbursed by some corporation?
Who should be liable if it turns out that the entity the user paid is hosting its service in a sanctioned region? What if the user didn’t know? To what degree should they be responsible for knowing?
What is morally or ethically wrong if the user is behind a VPN? What if it’s a corporate VPN they are required to use in order to contribute to a particular codebase, and that VPN terminates in a country with no sales tax?
Can the government pursue the user or their umbrella company if the transaction is effectively a barter? For example, suppose the AI assistant provider opens a “token credit line” for $10,000 worth of usage, provided “for free,” as long as the developer allows that company to use the paid version of the very service they are building — also “for free.”
I’m trying to morally map the regulatory landscape as it exists today. Clearly, we don’t want people intentionally paying $10 to exploitative organizations just to save $0.50 in taxes.
But wouldn’t it be more moral to agree that, since the service can be provided from anywhere, it should not be subject to additional taxes at all? As in: use any form of payment you like, support a local vendor if you want, and we’ll do our best to make local support more attractive — rather than making it harder for people to optimize for cost effectiveness.
Something like negative taxation, even. If you’re willing to tolerate an extra ~200 ms of latency by accessing a datacenter farther away — perhaps somewhere sunny, where energy is effectively free — then the operator saves $2 out of those $10. Of those $2, $1 becomes additional operator profit, and the remaining $1 becomes a discount for the user.
I’m genuinely struggling to understand what exactly we are paying for, and how this is justified from a moral perspective. I’m not against taxes per se — I’m just strongly in favor of accountability, and of optimizing for effective resource utilization.
And introducing a sales tax on an online service that can be provided from virtually anywhere on or near Earth does not fit that model — unless I’m missing something important, in which case I’d very much like to be educated.
Yesterday I learned about:
git update-server-info
python3 -m http.server 8888

This makes your git repository clone-able from https://localhost:8888/.git

Comparing this to the rest of the industry, such as FlatBuffers ...

Error.
Vectors of unions are not yet supported in at least one of the specified programming languages.
This is a hard FlatBuffers limitation, not a tooling or version issue.

... I'd say the Unix way and the Linus way do have the potential to go places.

Rant: It's still giving me nightmares that we're not living in the world where a .wasm file is stored in the browser's Local Storage, updated on the go if the version has changed, and then the browser-side JavaScript can just natively import it and call functions from it, with no hundred-lines-long aux code. We indeed have moved away from the true path of software engineering some time 10+ years ago.

👍2😢1
[ Returning the rental BMW X2 ]
— Do you like the car?
— Meh. Literally the worst user experience I’ve seen in years. Getting wireless CarPlay to work is a pain in the butt, and just using the cable does not work.
— I don’t know what you’re talking about, sir, but many customers have this complaint.
Well. BMW, I hope you are listening. Because this literally is the worst UX I’ve seen in years.
😁4😱1
Yes: Scan this QR code to pay this bill with Apple Pay.
But: “Type in the table number or check number to continue.”
Folks, you do know QR stands for Quick Response, right? It’s kind of a crime against humanity not to encode this very check number in the QR code printed on the very check, if you ask me.
😁7❤1
Unpopular opinion: I'm starting to respect Yaml.
First, it's JSON-compatible, as in every JSON is a valid Yaml. Which means anything inside a Yaml doc can just be a JSON, literally copy-pasted inside. And which means everything that accepts a Yaml will by extension accept a JSON.

Second, it supports comments and stuff.

Third, I love jq, and I instinctively typed in yq once — and it did exactly what I expected it to do. Moreover, yq -o json, or just yq -oj, will make it output JSON, nicely formatted, and colored just slightly differently enough to see it's not jq.

Furthermore, yq -P pretty-prints any Yaml, which by extension includes any JSON. It's just more human-readable, with no extra lines for closing } and ], and yet it's 100% machine-readable. Even package.json reads better after | yq -P.

In Python, yaml.safe_load would load the Yaml doc just like json.loads loads the JSON. All the more reasons to keep BaseModel-validated configs Yaml-s, not JSON-s. They are, after all, backwards-compatible.

Finally, there are Yaml Document Streams, which are just better than my now-second-favorite one-JSON-per-line, JSONL, format. I'd definitely prefer it when human-readability is part of the requirements, or at least a nice-to-have.
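A self-contained Python sanity check of the above, using PyYAML; the sample data is made up.

import json
import yaml

raw = '{"name": "demo", "retries": 3}'
assert yaml.safe_load(raw) == json.loads(raw)  # literal JSON is valid Yaml

stream = """\
name: alpha  # comments are fine
retries: 3
---
name: beta
retries: 5
"""
docs = list(yaml.safe_load_all(stream))  # a Yaml document stream, a la JSONL
assert [d["name"] for d in docs] == ["alpha", "beta"]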
👍1
I got curious recently. With developed countries — the UK among them — tightening laws around VPN usage, how does this actually work for employees of overseas corporations who are required to use a corporate VPN to access company resources?
Surprisingly, this is hard to research. Most online answers try to solve a different problem entirely: whether employers can track where employees log in from. That is not my question.
I am not trying to trick employers. Quite the opposite — I want employers to give employees the freedom to use the Internet as it was intended.
Consider a simple scenario. Someone travels to the UK frequently, but works for a company registered in, say, the Cayman Islands. Per their contract, during business hours they are expected to spend several hours connected to a corporate VPN terminating in Cayman.
Now add a policy amendment. The company:
∙ does not keep VPN logs, and
∙ explicitly encourages employees to use the corporate VPN whenever not doing so could put company business at risk.
During orientation — which, naturally, happens in Cayman! — this is explained plainly. There may be content that is legal in Cayman but problematic when accessed while traveling in the UK. The company wants its employees safe, comfortable, and able to do their jobs without unnecessary exposure.
So the guidance is simple: if you are unsure, use the corporate VPN. The cost is negligible. The risk reduction is not. Better that traffic stays private than visible to hotel staff, local ISPs, or anyone else who does not need to see it.
Employees comply. They use corporate hardware. They use the corporate VPN — as required. From the UK ISP’s perspective, they are simply connected to a Cayman endpoint. Work traffic, personal email, private messages during natural breaks in the workday — all indistinguishable.
So where is the catch?
To be clear, I am not endorsing using VPNs to break laws. This is a thought experiment. If someone connects to a VPN specifically to access content they are forbidden to access locally, that is not defensible. But that is not what this scenario is about.
What, then, is the status quo?
Will the UK refuse to allow people to connect to corporate VPNs unless those VPNs provide government backdoors? Will it make it illegal for foreign companies to operate in the UK without traffic inspection capabilities?
I am trying to understand where the line is supposed to be between:
∙ protecting traffic for legitimate reasons — corporate security, privacy, risk management, and
∙ protecting traffic for questionable reasons — accessing things one should not.
These two are technically indistinguishable.
No country is trying to stop visitors from China from reading Wikipedia. China may disagree, and China may want to enforce its own rules later — that is a separate issue. But my hypothetical runs in the opposite direction. The Cayman Islands is a reputable jurisdiction that happens to trust its people to know what not to look for online.
So what is the right moral compass here? And more importantly — where do we expect this to go over the next few years?
Because the Internet does not recognize borders. But laws increasingly pretend that it does.
PS: I do not know whether the Cayman Islands allow online adult content. But my hypothetical argument should hold regardless.
Looks like my most valuable software development & architecture skill of the past ~ten years is indeed only getting more valuable.
I love producing small, clean, self-contained examples. To understand various concepts better, to explain them better, and to ultimately pick which ones to use and which ones to ditch.
And this skill is very, very well aligned with AI-assisted coding!
Because the AI can hack up most simple examples well, and it can tweak them almost perfectly and almost instantly. What it lacks is the sense of beauty.
Both in clarity — is the code aesthetically pleasing to read? And in durability — if we introduce this code to a team of fellow humans, will it proliferate through the codebase in a good way, or will it grow like a bad tumor?
Perhaps in 5+ years my full-time job will be trying out various patterns with and without AI, and labeling them — manually, with experts, with the general public, and with, well, other AIs.
And then maybe people like me will be designing programming languages for the 21st century — because we're long overdue.
🔥4❤2👍2
I’m sincerely wondering: are there high-profile tech companies that explicitly focus on doing more work async?
It’s kind of trivial. “Let’s have a call about this” should be declared a sign of unprofessionalism — if not outright a banned phrase — for lack of, well, empathy and integrity.
There would be a culture of emails, and a culture of not expecting immediate answers. A culture of doing one’s own research, and a culture of asking for help politely, at the right time, with the right granularity when providing context.
There would be mid- to long-form documents, and a culture of keeping them up to date. Documents with collapsible sections that contain non-trivial yet essential details, for those who need to dig deeper.
There would be scheduled meetings, within the team and cross-team. Single-digit hours per week. Agreed upon by everybody.
And introducing a new meeting — unless it’s consensual right away — would require some formal “board approval.” Any and all direct or indirect pressure to make new meetings happen, or to somehow guilt others into joining unnecessary meetings, would be hunted down and promptly eliminated.
What’s not to love about this?
❤2😁1
Just had my first experience with Github Copilot code reviewing my code.
TL;DR: It sucks.
7 comments. One legit, fixed the typo of ${1?: -> ${1:?. The other six are just "this will not work because paths blah blah blah", while in reality I've triple-checked the code myself, and one thing I definitely am sure of is that it works under five different setups.

I stand by my position: it's not impossible that humans will be useful as those meat bags with brains who actually care to understand what is going on behind the scenes. While the value of "making changes" to the code will continue hitting rock bottom.
👍4
Is there a quick way to have Linux support MacOS keyboard shortcuts?
I'm a Mac user now, but I still love Linux. My keyboard is wireless. With one keystroke it goes from one laptop to another and back.
It'd be great to use the Cmd+C / Cmd+V, as well as Cmd+Enter, Cmd+L, etc. on Linux. Ideally, without even having to flip the physical Win/Mac switch on the keyboard.
Realistically, I can use any Linux these days. Everything is in Docker anyway. So if Ubuntu/Debian is not great for this purpose, I'm willing to give something else a shot. It's the New Year's week after all, might as well cheer the nerd in me up.
So just some zero-configuration reliably working way to have Mac shortcuts work on Linux would be great. I've tried manual mapping, but it's more painful and more fragile than I anticipated. Although if there is a tool or a script or a Github repo for Ubuntu that does the trick, I'd give it a try beforehand.
What do folks like me do these days?
🔥3
I stopped myself from writing a long post on Docker, but here's the most interesting part.
First, docker leaks containers.

Consider this inc.sh:

#!/bin/sh
echo $(( $(cat /tmp/n.txt 2>/dev/null) + 1 )) | tee /tmp/m.txt && mv /tmp/m.txt /tmp/n.txt

If you run it locally multiple times it'd print one, two, three, etc.

Now consider this Dockerfile:

FROM alpine
COPY ./inc.sh /inc.sh
ENTRYPOINT ["/inc.sh"]

If you run this command multiple times, it will always print 1:

docker run $(docker build -q .)

It will also always print 1 if you do docker build -t dima . once, followed by docker run dima repeatedly.

Each of these runs yields a new container! It will not show in docker ps or docker container ls, but it will in docker container ls -a.

Alas, the universes of images and containers are easy to confuse.

Behold:

docker run --name dima dima

This runs this new container and calls it dima. Now there's dima the image and dima the container.

You can't do docker run --name dima dima again, because the container called dima already exists, even though it has terminated a long time ago.

You can re-run it though, just docker start dima.

Second, docker leaks volumes.

Now, add VOLUME /tmp to the end of the Dockerfile, and re-do the container:

docker container rm dima; docker run --name dima dima

Now run docker start dima several times. And say docker logs dima. Or just run docker start -i dima. The number will keep increasing.

Because for the very container called dima there now exists a volume!

And if instead of docker start dima you run docker run dima, it will always print 1. And now we know why: because for each of these runs, a new volume is created. And leaked.

The takeaway from this point is that the universe of running-and-stopped containers exists separately from the universe of built-and-possibly-tagged images.

And then it's "trivial" to wrap one's head around. Because docker run takes an image, and docker start takes a container.

Third, docker compose silently re-uses containers.

Consider this docker-compose.yml:

services:
  dima:
    build: .

The third line might as well read image: dima.

Now run docker compose up several times. The number will keep going up!

Because while docker run creates a new container every time, docker compose will create containers once.

The "universe of docker compose container names" also exists. It is the same as the universe of docker containers, but with "tricky" naming. The default is the parent directory of docker-compose.yml, followed by a minus sign, followed by the name of the service, followed by a minus sign, followed by the index, starting from 1. So a compose file in myproj/ with the service dima yields the container myproj-dima-1.

Running docker compose down will actually wipe the volume. But who does docker compose down for one-off pipelines, right?

You could also do docker compose run dima. But you would not if your compose topology consists of several containers. Because up is the way to go.

Fourth, and this is bizarre, volumes are not pruned.

Try this:

docker compose up && docker volume prune -f && docker compose up

The command to prune volumes does not prune them! The reason: prune only touches volumes that no container references, and the stopped compose container from the first up still references this one.

And there exists no simple way to prune all containers tied to a volume. Here's the "shortest" way:

for i in $(docker ps -a -q --filter volume=$VOLUME); do docker container stop $i; docker container rm -f $i; done; docker volume rm $VOLUME

This "one-liner" is literally at the beginning of my scripts that are meant to be fast, self-contained, and reproducible.

PS: docker compose up does not rebuild containers by default. So, unless you truly want to run the older version, docker compose up --build is a safe default.

PS2: Yes, this is why the use of VOLUME is discouraged in Dockerfile-s. But quite a few containers do have VOLUME-s, for instance, the postgres container. So it keeps data between runs; what's worse, it keeps table schemas too. What a wonderful footgun: your app's DB init code is broken but you're blissfully unaware!

If you've learned something today, my half an hour of typing this was not wasted. You're welcome.
👍3🔥1
On a completely unrelated note, Veritasium's IQ Test video features Derek looking through a Russian textbook.
The top section of which appears to be the handwriting of a nine-year-old, likely some exercise homework. Although the hand does not look like Derek's, hinting at a simpler explanation.

What's not to love about this? Because surely, with this follower count, we are 100% destined to notice it.

PS: And, on another Veritasium video today, YouTube was showing me a mental health hotline. Perhaps my browsing history does paint me as a prick when I'm feeling unwell for a day and choose to watch something mildly educational and fun instead of good old Starcraft.
❤2
#Russian
Thanks @apakhmv for having me on to talk about Web3. We recorded it back last year, and now it has been edited beautifully: https://t.iss.one/tfeat/156

In my opinion, it turned out awesome.
While I dislike Python (and prefer Rust, hehe), one thing it teaches you is that the old "enterprise-grade" Java-world "skillz" are long obsolete.
Because stuff should really a) be short, simple, and descriptive, and b) "just work" (c).

Seriously, I sincerely believe good software engineering taste is that short and clean code with fewer dependencies is generally what we need.

Cargo kicks ass in most aspects here. Python, especially with uv — which is a Rust tool by itself! — is surprisingly okay.

Did not expect myself to say this, but having to deal with TeamCity configuration via .teamcity/* gives me shivers. It's been hours, and I can't make "Hello, world!" work. On my own TeamCity instance.

I remember the times when I argued how bad Github's Yaml-based Actions config is. Well, sure, strong typing would be great there — have you considered Cargo and Rust?

But boy, using JVM and gradle to run "workflows" so that I'm getting dozens of unreadable "Kotlin compilation errors" while all I need is to run echo 'Hello, world!'? Call me crazy, but my take is that the crazy side here is the one that accepts how convoluted this whole thing is.

Challenge accepted. I'll do it. But it's painful af so far.
🔥2
This post carries a trivial message, but I learned the hard way that its implications are not at all obvious.
The trivial message is: Fixing LLM hallucinations is fundamentally no different from fixing similar failure modes in the human brain.
Corollary: The human brain has basic, low-level failure modes that trace back to a few misfiring neurons.
Here's my mental model. I do not claim it is correct, only that it maps reality reasonably well.
Humans share a tiny set of deeply hard-coded concepts: “good,” “fair,” “just,” “divine,” “love,” “duty,” “pleasure,” “dignity,” “loyalty,” “sanctity,” “disgust,” and a few more. They fit on two hands.
But modern civilization is far too complex and contradictory. Worse, countless actors today are aggressively “prompt-engineering” every human being for their own agendas. The cost of experimentation is near zero and the payoff enormous, so state and non-state actors have no reason not to try to “[re-]program” us. This mass-scale “civilizational programming” has reached heights unthinkable a decade or two ago. And it works.
Many things follow from this model; I will outline one minor and one major.
Minor: Remember that every person’s political and moral views reflect the nonstop nonsense they ingest. Reasonable people can debate the degree of personal responsibility to resist propaganda. But one thing is clear: most people simply repeat talking points without applying any critical scrutiny.
This is not new; what is new is the scale. Our echo chambers and propaganda engines now produce large populations who appear completely deranged — advocating agendas detached from their own lived reality and even harming themselves and their families. Activism can be noble; self-sacrifice for something worthless is emotional deficiency, not virtue.
Major: This applies to you as well — perhaps less than to most if you are reading this, but the logic stands.
No one is immune to stimuli aimed at the inner neurons of “happiness,” “safety,” “self-actualization,” etc. The only viable strategy, if sanity is a priority, is to consciously pick your echo chambers and aggressively filter emotionally charged content.
You also need resistance mechanisms — real ones, not coping mechanisms.
For example, I often find myself caring too much about the emotional state of the average human. It arguably damages my personal life. My rational brain knows exactly what restores balance: recognizing how unsalvageable many people are. Walking past a row of slot machines in Vegas and seeing hundreds of empty eyes pouring millions of dollars into pure uselessness forces me to internalize a basic truth: I cannot meaningfully extend compassion to everyone.
(Yes, gambling addiction is a real disease, and regulations exist for a reason. But most people at those machines are not addicts — they are just “regular humans,” as a friend succinctly puts it. Acknowledging that fact helps me care less emotionally, which is one of many mental tricks I utilize to stay sane.)
The takeaway is: There is nothing wrong or shameful in maintaining an arsenal of mental tricks. To live one’s own life in our increasingly hostile informational environment, we will need stronger internal tools. Begin building them early on, if only to Live Long and Prosper!
🔥3
The more I think about where the world is going, the more I’m convinced its trajectory is almost exclusively determined by the answer to one question.

Is unconstrained communication a property of the Universe, or is it a social construct?

If it’s a universal property, that would simply mean that any and all at-scale censorship and speech control mechanisms will fail. We can assume they are all ephemeral and temporary, like Prohibition. Humankind may well eventually give up alcohol altogether, but we appear to have collectively agreed that trying to outright ban it deals more harm than good.

If it’s a social construct, we have to declare that the days of the free Internet are gone for good as of some ten years ago. Orwell then just happened to predict the future by generalizing a few observations well.

I know I personally would prefer to live in the world of free communication. Just imagine mesh networks that work at any reasonable distance, below any reasonable signal-to-noise ratio, completely undetectable, except by the very entity to which / to whom this particular piece of communication is directed.

Yes, I get it, such a world presents major challenges — from tearing apart the social fabric, all the way to literal military risks unheard of before. But if we manage to sustain our civilization, we’d be off to a great start, to conquer the Solar System and beyond.
And yes, I also get it that if the goal is purely to create a “safe and flourishing” world, collectively agreeing that free and unconstrained communication was just a fluke may well be the best first step.
Thankfully, we don’t have to decide any time soon. Various experiments, from European regulations to swarms of self-flying drones, are underway as we speak. We may well have time to course-correct at multiple bifurcation points if and as needed.
But I have to confess declaring free communication dead is something I would feel quite bitter about. And in quite a few corners of the world it can and should be pronounced dead today.
👍4❤2
It’s remarkable how many solid language-design choices emerge once you commit to treating types as a zero-overhead runtime abstraction.
🥰3👍1🤔1