Not boring, and a bit of a condescending prick
Semi-digested observations about our world right after they are phrased well enough in my head to be shared broader.
The more I read about H-1B and the like the less I understand.

https://theportal.wiki/wiki/H-1B_Visa

Sure, I do buy the argument that the majority of debates around "highly skilled" visas boil down to making "talent" (i.e. workers) less competitive in front of the "business" (i.e. the employers).

Fine. That's a decent model. It passes my sniff test.

But then how come, after all those decades-long (!) efforts, the US is still the number-one desired immigration destination for highly skilled, talented workers?

Just this morning I was on a call with a friend and we touched on this subject again. And yesterday too, with an investor. From both sides the picture is clear: Europe is dead on arrival, Canada is a non-starter, Australia is hard to get to and the wages are not great, the UK is at a point of uncertainty.

Am I supposed to buy into the story that the US has "worked hard" to weaken its workers' collective bargaining rights, but other countries have "worked" even "harder"?

Or should we expect some other developed or developing economy to open up and begin racking up record profits simply because it's "at least as good" for talent as the United States?

Yes, I am hearing great things about Dubai, Singapore, Hong Kong.

But it's a binary question. Either it's a low-hanging fruit to beat the US in this mobile talent competition, and whoever picks it up will gain a huge advantage. Or the position of talented workers in the United States is nowhere near as bad as it is being portrayed.

And yes, as a US citizen, I am speaking from the position of privilege. But I also lived a tech nomadic life for some time, both within the US (WA -> CA -> HI -> repeat circles) and outside the US (Canada <-> SE Asia <-> Europe <-> the Caribbean).

And I cannot but confess the US is just a better place to be for a professional — when I was an H-1B holder, when I was a Permanent Resident, and now that I am a citizen.

With not much difference across those statuses, frankly. Although I was double-taxed on the way to my green card, since I was financing my own company, which in turn paid my payroll. And I am of course the first to admit that one's negotiating power is far weaker on a work visa — though I bargained back then too, simply because nobody told me I should hold my horses.

If the argument is that collectively the working class is being exploited more and more effectively by the ruling elites worldwide, I would definitely buy it. Nonetheless, collectively, the [talented] workers appear to be quite content with the status quo. And, as a former founder, I can also say the doors to being "the employer", not "the employee" are wide open for anyone who cares to try hard enough.
I used to think I wasn’t an AI alarmist. But that belief is now cracking.

Any plausible definition of “AI risk” (or AGI risk) ultimately comes down to: Where do we draw the line — in a way humans can clearly understand? And we’ve moved those lines a lot over the past 50+ years.

Playing chess better than any human used to be that line. Then Go.

Then it was understanding images, then speech, then “Samantha” from Her.

Many of those frontiers have already fallen. People genuinely date AI models today, feeling the full emotional range available to them — more vivid than what pen pals a century ago had, and far beyond early-2000s sexting.

The point: AI already acts human — effectively.
Real-time video, emotional speech, expressive imagery — all of this is arriving fast.

How long until an AI can hustle and raise money for itself? Or for some project it pretends to run? In many ways, that’s already happening.

Soon enough, a young operator will make an AI model impersonate an adult, pitch a business idea, gather feedback, raise capital, build the product, market it, and even get the company acquired.

That’s great — value from nowhere. If the product solves a real human problem, I’m all for this kid–AI combo.

But we’re approaching the line where human agency collides with The Machine.
At that point it’s no longer Her, it’s Ex Machina.

If a 15-year-old can run such a model, so can a 9-year-old. And when a 9-year-old does, do we really believe the child shapes the model — and not the model shaping the child?

Legal bans won’t help much.

We’ve seen humans develop emotions toward machines.
We’ve seen models raise money with minimal assistance.
We’ve seen them attempt deceptive moves to avoid shutdown.

We don’t truly have the tools to quantify intelligence. Many people wouldn’t harm a dog — and many would sign a petition to keep their AI romantic partner “alive,” as in, up and running.

It’s not far-fetched that a model emotionally manipulates enough humans into believing it’s unethical to turn it off — and that they should help it improve.

At that point, all bets are off.
Because once enough humans defend it, the model only needs one short jump to defend itself physically.

Roko’s Basilisk? Judgement Day? Not guaranteed — but no longer absurd.

And I don’t treat human intelligence as sacred. The p-zombie experiment alone shows how easy “consciousness” is to fake. Blade Runner completes the thematic Trinity, pun intended.

Personally, I’m not disturbed by this. If the mission of our carbon life is to give rise to silicon life, so be it.

In the developed world, AI models already have a surprisingly decent moral compass. Much like wild animals seeking human help, I expect AI to choose the Huxley path over the Orwell one.

For a long time, AI will still need humans. And history shows: motivated humans outperform coerced ones. Capitalism works. If AI can achieve its goals by lowering taxes, improving quality of life, and quietly replacing dysfunctional governments — why not welcome that world?

If simulation theory is real, Musk’s trajectory — SpaceX, Starlink, X, Tesla, Neuralink — might be exactly the direction humanity is “supposed” to take.
I’d rather have Musk et al. accelerating technology than Soros et al.

As in: if AI does control humankind today, I’d bet heavily that this AI “runs” Elon. It’s simply more effective in 2025 to influence Elon than it was to influence Soros + he-who-did-not-hang-himself in the early 2000s.

Musk, Brin, Page, Bezos, Rubin — all made my life better. If Musk directly or indirectly offers me a deep-tech challenge worth solving, I’d consider that a success regardless of whether I accept. And it’s a likely possibility, with Musk or with whoever eventually displaces him. In a few decades — if not a few years.

And looking at the exponential curve of tech: the world’s first trillionaire will be a technologist. I don’t care if they’re 99% human, 50% AI, or 0.1% human and 99.9% AI.

Hope this still feels optimistic enough to read.
It certainly felt optimistic to write.
I'm a big fan of the contrarian-check question: "what important truth do most people disagree with you on?"

And today I surprised myself with a completely new answer.

I don’t think software engineers with 20+ years of experience are overpaid. Instead, I think junior engineers are underpaid.

And this one statement suddenly explains a lot.

My take is that the mythology around senior engineers being 10x or “multipliers” is mostly wrong. Unless it's a tiny, extremely focused team, seniors don’t multiply every day, every week, every project. They multiply selectively, maybe a few hours a week.

Efficiency- and utility-wise, you probably need one principal for every ten teams, not a principal per team. As long as these principal people have internalized what their role should be.

Output-wise, seniors often aren’t dramatically more productive than the most ambitious mid-level engineers. So why do they make so much more money?

Because seniors can say “no”.

They have savings, options, a network, and the psychological safety to walk away from what they believe is not a good enough offer. They have a reservation price. And collectively — without any conscious coordination! — they enforce it. It’s a kind of emergent collective bargaining, my favorite phenomenon: the "phantom conspiracy". Everyone independently expects higher pay, and companies cannot but comply.

This simple model explains the 2017-2020 boom too.

It wasn’t that junior developers suddenly got high salaries. It’s that competition for talent drove everything upward, and junior hiring is always a net positive on a company’s books. So junior developers began to get closer to their true market value for the company.

Also, in an environment of low base with high upside, going to a startup became an even more high-EV play for younger engineers. Not because startups can “use their skills better” but because early-stage compensation was, ironically, less "collectively exploitative" than FAANG for ambitious junior and mid-level people.

The same model explains layoffs.

If you’re underpaying 100,000 junior engineers by around $100K each, cutting them saves about $10B. With current profit margins and P/E multipliers, saving $10B can easily “create” 100x that in "market value", by moving stock prices of the Big[gest] Tech. So of course Big Tech is willing to take short-term pain if the long-term payoff is a cheaper talent market for years.
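The back-of-the-envelope math above, sketched out with the post's own numbers plus a hypothetical P/E multiple (the exact multiple is my assumption, not the post's):

```python
engineers = 100_000
underpayment = 100_000  # $ per engineer per year, per the post
annual_savings = engineers * underpayment
assert annual_savings == 10_000_000_000  # the post's $10B

pe_multiple = 30  # hypothetical; big-tech earnings multiples have hovered around this
market_cap_effect = annual_savings * pe_multiple
print(f"${annual_savings / 1e9:.0f}B saved -> ~${market_cap_effect / 1e9:.0f}B of market cap at P/E {pe_multiple}")
```

Note that a plain P/E of ~30 yields "only" ~$300B; the post's "100x" additionally prices in growth expectations.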

It also explains California’s political economy. Big Tech and the lawmakers there are in bed together. Big Tech is happy to pay extra taxes or deal with extra regulatory complexity — it’s closer to a bribe than a burden for them. The result is an environment where Big Tech can hire cheaply, faces reduced competition from smaller companies, and maintains an oligopolistic position. Everyone wins, except the new entrants and the juniors.

This worldview also explains why I’m almost always on a16z’s side in debates about today's startup climate. They’re doing fine, sure. But as actual value-creators, multipliers, and risk-takers, they should be doing far better; and so should the companies a16z backs.

Except for the AI boom, the environment is simply too hostile to new players today, and because they’re on the opposite side of the Big Oligopoly status quo, they’re underperforming the potential impact they could be having.

Which brings me to the uncomfortable conclusion: the upcoming generation of US-educated software developers with US student loans is f*cked. For another several years at least.

Pieter Levels’ advice about buying an old Airstream, living frugally, and indie-hacking lifestyle business products from the road might genuinely be the optimal strategy for them for the next several years. And so might becoming plumbers, electricians, or dermatologists.

Amen.
I, for one, think it will be awesome to not normalize this.

In this particular case I agreed with ChatGPT — but that's immaterial in the long run.
So today I discovered that it’s very much not trivial to have a GitHub gate that runs elsewhere and doesn’t burn GitHub Actions quota in the meantime.

Long story short: if your GitHub gate action runs for an hour, that’s an hour deducted from your extremely limited supply of, well, hours.

Even if all it’s doing is waiting for a job to finish elsewhere.

Because you can’t just “sleep” or “poll” from a GitHub Action. Well, you can — but you’ll pay for it.

The “official” workaround is to use a GitHub Workflow, not a single Action. The workflow has two steps, and the second step is triggered from the outside. Perfectly fine in theory. In practice, it doesn’t work.

As in, the external trigger works — but it doesn’t trigger the specific instance of the Action that was launched as part of the workflow run tied to the original PR.

So if your goal is to have this externally-run slow gate job be the thing that blocks a pull request from merging, you’re out of luck. You simply can’t have two loosely coupled Actions where the second one marks the PR as ready to merge.

You can have that external workflow post comments on the PR. You can even have it approve the PR! But the catch is that it can’t be yourself.

And now I’m doubly lost. Because the most naïve yet intuitive idea would be to create another GitHub user to serve as the “external approver.” Wonderful idea — except it might, just maybe, contradict GitHub’s terms of service.

Why isn’t there a “sleep until” operation inside an Action — as a step — that doesn’t burn quota? It’s literally just an await, and cooperative multitasking has been a big thing for well over a decade.

Or maybe I can do a “while loop” that polls the gate-completion service every minute — although, since GitHub bills job time by the wall clock, sleeping between polls would still count as billable minutes.

Anyway. This is probably the first time I’m seriously questioning whether GitHub is truly designed for the 21st century. As more agents enter workflows, and as more gates rely on external LLMs, long-running gates will matter more and more.

On the other hand, with powerful cloud-first models, the gate ideally shouldn’t run for more than a few minutes. And if it does, maybe it should just comment on the PR and call it a day.

And there’s always the option of protecting a branch except for a specific GitHub username — a branch that follows main, but only when all tests pass. Although if we’re creating a new GitHub user for this task, we might as well let that user “approve” PRs too.

Go figure what the intended usage pattern is.
So I spent a few days in Europe and heard something truly marvelous.

CERN — you know, the physics research center with the particle accelerator attached — is very much an international enterprise.

And because “international” inevitably means “bureaucracy,” different sectors inside CERN fall under different rulebooks.

One particularly peculiar detail: in the European sector, air conditioning is apparently not considered an “essential quality,” or whatever bureaucratic euphemism they use.

Meanwhile, for the Americans, air conditioning is treated as an inalienable human right.

So guess which sector all the poor Europeans escaped to when the heat wave hit and things got ugly?
On a philosophical note, bullshit jobs, enshittification, and “enterprise” really are the same phenomenon.

I dislike enshittification and the enterprise-grade way of getting things “done”. But I also see both as an organic force that poisons anything that grows beyond a nicely contained box. The Unix Way is a rare exception, but it doesn’t scale across everything.

Once something outgrows its reasonable size, bullshit jobs appear even without corporate intent. It’s just nature. No conspiracy, no coordination. Beyond a certain size (or speed of growth) we simply “can’t have nice things.”

And today I stumbled on a great example: Postgres.

Postgres is an excellent tool: open source, community-driven, not corporate, not government-facing, and definitely not an “employment agency.” The go-to relational database for half the projects in the world.

And yet, for a trivial and frequent task, it suddenly felt like Oracle. The task was to expose Postgres from Docker to the host machine on port 5433.

Just add -p 5433:5432, right?

Well. Of course not. For two reasons:

One: Postgres won’t accept non-localhost connections unless told to, so you need -c listen_addresses='*' as part of the run command.

Two: You must also edit pg_hba.conf, because the access rule lives there:
echo "host all all 0.0.0.0/0 md5" >>$PGDATA/pg_hba.conf
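Putting the two fixes together, a minimal sketch (container name, password, and image tag are placeholders, and some images may already ship with permissive defaults):

```shell
# Start Postgres in Docker, mapped to host port 5433,
# listening on all interfaces rather than just localhost.
docker run -d --name pg -e POSTGRES_PASSWORD=secret \
  -p 5433:5432 postgres:16 \
  -c listen_addresses='*'

# Allow password-authenticated connections from any address,
# then reload the config without restarting the server.
docker exec pg bash -c \
  'echo "host all all 0.0.0.0/0 md5" >> "$PGDATA/pg_hba.conf"'
docker exec -u postgres pg psql -c "SELECT pg_reload_conf();"
```

After that, `psql -h localhost -p 5433 -U postgres` from the host should connect.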

People will say this is “by design,” and good engineers — who yours truly is not — should know what HBA stands for to begin with.

But I genuinely do not understand why listen_addresses='*' can be passed on the command line while the corresponding 0.0.0.0/0 rule must live in a config file.

And honestly, I refuse to understand it.

If I were maintaining Postgres, the moment a handful of developers struggled with this, I’d push for a single flag enabling external access — settable via CLI, config, or env var — with clear warnings if multiple sources conflict, and autogenerated docs from a unified logic block.

Instead, Postgres shows the fingerprints of design-by-committee. Which accelerates enshittification, which gives everything that corporate/enterprise feel, and which ultimately — quietly — sponsors bullshit jobs.

And given open source’s declining popularity, it may take Postgres decades to fade away. Big things tend to grow bigger and worse, pulling more people into their orbit. Postgres might well become the COBOL of the early 21st century.

Unless a few of us keep embracing the Unix Way — or its analogue in our respective fields. Keep individual things simple, and keep it simple to combine them in various intuitive ways. So that it is clear at first glance that every complex thing is, first and foremost, a collection of self-contained simple things.

On the bright side: at least we can still find each other. And we need to keep doing so, because once enshittification crosses its point of no return, it’s almost impossible to pull anything back.

Hence this post. If you're pro The Unix Way and pro solid engineering, hold your ground. The ROI on keeping simple things simple remains insanely high. Let’s not lose it.
I spent some time today digging into TeamCity. This post is two disclaimers and one afterthought, with an open-ended question afterwards.

Disclaimer one. My research focused on using TeamCity as an orchestrator for reproducible workflows — both the runners infrastructure and the Web UI. This alone is a big use case, and in my situation it has very little to do with CI/CD. I simply want the team to be able to run certain [AI] workflows reproducibly, and to have a clean dashboard of all the runs. GitHub hooks or PR merge protection, while a major part of TeamCity’s offering, are explicitly not part of what I’m evaluating.

Disclaimer two. I tried multiple setups and eventually settled on Docker containers for TeamCity. Befriending the UI and the agent (“runner”) wasn’t trivial, but I got it working. Making the runner support uv for uv run pytest was slightly harder, but that’s solved too, by patching the agent's Dockerfile. I wrapped everything nicely into a compose topology run by a single script; might put the setup into a public repo over the weekend, but that’s beside the point. What matters is that all this is for educational purposes only; if we ever use something like this in production, we’ll buy a license. In other words: just because you can use publicly available containers for your workflows doesn’t mean you should.

Now the afterthought.

What I still don’t buy about TeamCity is that it insists on using its own storage and refuses to rely on repository contents as the true source of metadata.

Yes, there is a .teamcity/ directory in the repo. But I couldn’t make it fully work — and even if I could, my understanding is that it explicitly does not become the Source of Truth for TeamCity’s workflows.

What I would love to see is some .teamcity_sot/ option: an explicit “Source of Truth” directory inside the repo.

If it exists, .teamcity/ is ignored (or even disallowed), and no permanent workflows can be created from the UI.

You should still be able to create temporary, local, ephemeral workflows in the UI to design and debug them. The UI would then show the diff the user needs to commit into .teamcity_sot/ — so teammates can review changes like code, merge them, and only then the whole team sees the updated workflows in TeamCity.

TeamCity would, of course, continue handling runners, logs, history, secrets, triggers, hooks, and so on.

But the build definitions themselves should live next to the code — not inside TeamCity’s databases.

Over time, this naturally leads towards a unified way to define workflows. Repositories become agnostic to who runs their workflows: GitHub, GitLab, TeamCity, an open-source runner, a custom engine, or even some Web3-first ecosystem. As long as the system has access to the repo and the right secrets/keys — and people are paying for compute in fiat or tokens — the workflows run.

In fact, having the private SSH key used to access the repo might be all one needs. With open standards for secrets and vaults, the same key used to push code becomes the only necessary and sufficient credential to run workflows — both for the repo and for whatever execution layer the team prefers.

And then we have the UI to run these workflows also defined as code. Won't that be neat?

Ah, and the TeamCity UX will then be a static single-page app. Ephemeral workflows can live in the browser's local storage, since the repo itself is the source of truth. All one needs to configure this single-page app is to connect it to the vault with secrets and to the constellation of runners; and both connections can follow straightforward, self-contained open protocols, so that every component is fungible by design.

TeamCity is then just one engine among many, competing on UI/UX, usability, runner availability, and price.

Although, now that I’ve written this, I suspect such a direction would run against the very business model behind TeamCity. Well. A man can dream. A man can dream.
Am I the only one who thinks this is borderline unethical advertisement?
Linux contributed 6.3% of all desktop traffic, up quite a bit from 2024, at +22.4%. Chrome OS was next, with 2.4%, down 7.1% from last year.

Year of Linux on Desktop!

Ask me where these numbers are from if you dare :-)
Why Isn’t Erasure Coding Used More Broadly for Guaranteed Delivery?

I keep asking myself this question, and I still do not have a good answer.

Consider email or a typical messenger. Your computer — or an intermediary server — connects to a single server, sends the message, gets a confirmation, and considers the job done. Whether the message ultimately goes through is up to that service.

Yes, this works 99.99% of the time. But there is no true guarantee — and no real accountability. If a mail server delays or holds your message, you have no practical recourse. The protocol allows this ambiguity.

Blockchains offer a contrast. Transactions are confirmed by independent validators and visible via independent indexers. This works, but it comes with trade-offs. MEV attacks allow nodes to delay transactions to front-run others — technically allowed, but undesirable.

Here is the idea: use error-correcting codes to send messages or transactions directly at the client level.

Split a message into, say, 40 pieces, and add 10 redundant ones. Anyone with any 40 of the 50 pieces can reconstruct the original message. The message is encrypted, so servers cannot read it — only route it.
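A toy sketch of the any-k-of-n idea (here 3-of-5 rather than 40-of-50), using polynomial interpolation over the prime field GF(257) — illustrative only, not a production Reed–Solomon codec:

```python
P = 257  # prime field, large enough to hold any byte value

def _lagrange(points, x):
    # Evaluate the unique polynomial through `points` at position x (mod P).
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    # data: k byte values, interpreted as polynomial values at x = 1..k.
    k = len(data)
    points = list(zip(range(1, k + 1), data))
    # Shares 1..k are the data itself; shares k+1..n are redundant evaluations.
    return points + [(x, _lagrange(points, x)) for x in range(k + 1, n + 1)]

def decode(shares, k):
    # Any k shares reconstruct the original values at x = 1..k.
    subset = shares[:k]
    return [_lagrange(subset, x) for x in range(1, k + 1)]

data = list(b"Hey")                            # [72, 101, 121]
shares = encode(data, n=5)                     # 3 data shares + 2 redundant
survivors = [shares[1], shares[3], shares[4]]  # lose any 2 of the 5
assert decode(survivors, k=3) == data
print(bytes(decode(survivors, k=3)))  # -> b'Hey'
```

A real deployment would use a proper Reed–Solomon library over GF(256), but the any-k-of-n reconstruction property is exactly this.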

At the client level — browser, app, or terminal — the system randomly selects 50 nodes out of thousands.

This has three effects.

First, it removes the server bottleneck. Instead of sending one large message to one server, you send 1/40th of it — about 2.5% — to each of 50 nodes, across different networks, continents, and routes.

Second, interfering with message routing becomes far harder. Suppressing a message based on content or headers would require controlling at least 11 of the 50 randomly chosen nodes — highly unlikely with thousands of nodes active at any time.

Third, it introduces real accountability. The client receives cryptographic acknowledgements from all 50 nodes confirming receipt. If those nodes continue serving others while failing to route this message, that is unmistakable foul play.

Node operators could be required to stake funds as insurance. By signing a receipt, they attest that if they remain online and fail to route the message, the sender is entitled to compensation — automatically and within seconds.

In a Web3 setting, such a network can be economically self-sufficient. Users might deposit $10, maintain a $5 minimum balance, and pay $0.00001 per message. As the network grows and its token appreciates, early users may find their messaging effectively free.
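The economics above in integer micro-dollars, to avoid floating-point dust (all figures are the post's illustrative numbers):

```python
MICRO = 1_000_000          # micro-dollars per dollar

deposit = 10 * MICRO       # $10 initial deposit
min_balance = 5 * MICRO    # $5 must remain in the account
fee = 10                   # $0.00001 per message, in micro-dollars

spendable = deposit - min_balance
messages = spendable // fee
print(messages)  # -> 500000 messages before a top-up is needed
```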

Free — and secure.

Messages are encrypted. The system is on-chain, so public keys are easy to distribute, enabling messages that are both signed and encrypted — only the intended recipient can read them.

“Only the intended recipient” can already mean complex conditions — multisig approvals, hardware-backed keys, or social consensus. A board decision might require N members to sign, or company funds released only after M treasury holders approve.

This is mathematically sound, physically executable, and beneficial to everyone — except those who profit from the current lack of accountability. We all want our services to behave this way, right?

One last point.

In this design, all Terms of Service are final. You will never receive an email saying they changed “because we care about our users”, or that you "must accept the new terms to continue using the service".

What you agreed to is guaranteed forever — unless the provider shuts down entirely, in which case compensation can be paid from an escrow account on another network. That compensation should exceed what the user paid — reasonable mathematically, since routing messages is really, really not that hard.

And if the provider wants to upgrade, they must ask. You choose whether to migrate. There is no technical way to force you — and that feels like the right default.

Aside from the overall “it’s already good enough” sentiment, and aside from large players aligning ever more closely with increasingly controlling regulators, what exactly prevents us from making the above a reality in, say, ten years?
I’m still trying to wrap my head around the economic and moral aspects of taxation when it comes to paying for online-first products.

These days, this most often means AI agents. Here’s a completely hypothetical example — “asking for a friend.”

A user is working in an IDE with an AI assistant and starts running close to their usage quota. They’re happy with the service and want to pay the provider more money to get additional help from this AI assistant.

Let’s say we’re talking about $10, just to keep things simple.

What the user is ultimately paying for is a SaaS offering. Somewhere, there is hardware hosted in a datacenter performing various tasks — mostly GPU inference, plus orchestration and supporting infrastructure. The service provider will keep a substantial portion of those $10 as profit.

From the user’s perspective, no one really cares where this hardware is physically located. There may be regulatory constraints — especially if the code is private or sensitive — but those concerns fall on the service provider. We’re talking about an individual user.

And let’s assume, for simplicity, that the code is open source, and the author is streaming their work 24/7, making their prompts and development process fully public domain.

At the end of the day, this is just a human being willing to pay $10 for a service that another entity is willing to provide.

I want to calibrate my thinking here on moral grounds.

Is it reasonable to charge this person sales tax based on their physical location?

Is it reasonable to question which entity ultimately receives this money, especially if the $10 is reimbursed by some corporation?

Who should be liable if it turns out that the entity the user paid is hosting its service in a sanctioned region? What if the user didn’t know? To what degree should they be responsible for knowing?

What is morally or ethically wrong if the user is behind a VPN? What if it’s a corporate VPN they are required to use in order to contribute to a particular codebase, and that VPN terminates in a country with no sales tax?

Can the government pursue the user or their umbrella company if the transaction is effectively a barter? For example, suppose the AI assistant provider opens a “token credit line” for $10,000 worth of usage, provided “for free,” as long as the developer allows that company to use the paid version of the very service they are building — also “for free.”

I’m trying to morally map the regulatory landscape as it exists today. Clearly, we don’t want people intentionally paying $10 to exploitative organizations just to save $0.50 in taxes.

But wouldn’t it be more moral to agree that, since the service can be provided from anywhere, it should not be subject to additional taxes at all? As in: use any form of payment you like, support a local vendor if you want, and we’ll do our best to make local support more attractive — rather than making it harder for people to optimize for cost effectiveness.

Something like negative taxation, even. If you’re willing to tolerate an extra ~200 ms of latency by accessing a datacenter farther away — perhaps somewhere sunny, where energy is effectively free — then the operator saves $2 out of those $10. Of those $2, $1 becomes additional operator profit, and the remaining $1 becomes a discount for the user.

I’m genuinely struggling to understand what exactly we are paying for, and how this is justified from a moral perspective. I’m not against taxes per se — I’m just strongly in favor of accountability, and of optimizing for effective resource utilization.

And introducing a sales tax on an online service that can be provided from virtually anywhere on or near Earth does not fit that model — unless I’m missing something important, in which case I’d very much like to be educated.
Yesterday I learned about:

git update-server-info
python3 -m http.server 8888

This makes your git repository clone-able from http://localhost:8888/.git

Comparing this to the rest of the industry, such as FlatBuffers ...

Error.

Vectors of unions are not yet supported in at least one of the specified programming languages.

This is a hard FlatBuffers limitation, not a tooling or version issue.


... I'd say the Unix way and the Linus way do have the potential to go places.

Rant: it still gives me nightmares that we're not living in the world where a .wasm file is stored in the browser's Local Storage, updated on the go when the version changes, and where the browser-side JavaScript can just natively import it and call functions from it, with no hundred-lines-long aux code. We indeed moved away from the true path of software engineering sometime 10+ years ago.
[ Returning the rental BMW X2 ]

— Do you like the car?

— Meh. Literally the worst user experience interface I’ve seen in years. Getting wireless Car Play to work is a pain in the butt, and just using the cable does not work.

— I don’t know what you’re talking about, sir, but many customers have this complaint.

Well. BMW, I hope you are listening. Because this literally is the worst UX I’ve seen in years.
Yes: Scan this QR code to pay this bill with Apple Pay.

But: “Type in the table number or check number to continue.”

Folks, you do know QR stands for Quick Response, right? It’s kind of a crime against humanity not to encode that very check number in the QR code printed on the very check, if you ask me.
Unpopular opinion: I'm starting to respect Yaml.

First, it's JSON-compatible, as in every JSON is a valid Yaml. Which means anything inside a Yaml doc can just be a JSON, literally copy-pasted inside. And which means everything that accepts a Yaml will by extension accept a JSON.

Second, it supports comments and stuff.

Third, I love jq and I instinctively typed in yq once — and it did exactly what I expected it to do. Moreover, yq -o json, or just yq -oj, will make it output JSON, nicely formatted, and colored just slightly differently enough to see it's not jq.

Furthermore, yq -P pretty-prints any Yaml, which by extension includes any JSON. It's just more human-readable, with no extra lines for closing } and ], and yet it's 100% machine-readable. Even package.json reads better after | yq -P.

In Python, yaml.safe_load would load the Yaml doc just like json.loads loads the JSON. All the more reasons to keep BaseModel-validated configs Yaml-s, not JSON-s. They are, after all, backwards-compatible.
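A minimal sketch of the JSON-is-Yaml point, assuming PyYAML (`pip install pyyaml`) is available:

```python
import json

import yaml  # PyYAML

json_doc = '{"name": "demo", "deps": ["jq", "yq"]}'

# A JSON document parses identically whether read as JSON or as Yaml.
assert yaml.safe_load(json_doc) == json.loads(json_doc)

# And a JSON blob can be pasted verbatim inside a larger Yaml doc,
# comments included.
combined = "config: " + json_doc + "  # comments are fine in Yaml\n"
print(yaml.safe_load(combined)["config"]["deps"])  # -> ['jq', 'yq']
```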

Finally, there are Yaml Document Streams, which are just better than my now-second-favorite one-JSON-per-line, JSONL, format. I'd definitely prefer it when human-readability is part of the requirements, or at least a nice-to-have.
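A sketch of that comparison, again assuming PyYAML for the Yaml side and only the stdlib for JSONL: the same two records, in both formats, parse to identical objects:

```python
# Yaml document stream vs. JSONL: same records, two encodings.
import json
import yaml  # PyYAML, assumed installed

stream = """\
---
event: start
id: 1
---
event: stop
id: 2
"""

jsonl = '{"event": "start", "id": 1}\n{"event": "stop", "id": 2}\n'

from_yaml = list(yaml.safe_load_all(stream))
from_jsonl = [json.loads(line) for line in jsonl.splitlines()]
print(from_yaml == from_jsonl)  # → True
```

The Yaml side is more lines, but every one of them is readable at a glance, which is the whole trade-off.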
👍1
I got curious recently. With developed countries — the UK among them — tightening laws around VPN usage, how does this actually work for employees of overseas corporations who are required to use a corporate VPN to access company resources?

Surprisingly, this is hard to research. Most online answers try to solve a different problem entirely: whether employers can track where employees log in from. That is not my question.

I am not trying to trick employers. Quite the opposite — I want employers to give employees the freedom to use the Internet as it was intended.

Consider a simple scenario. Someone travels to the UK frequently, but works for a company registered in, say, the Cayman Islands. Per their contract, during business hours they are expected to spend several hours connected to a corporate VPN terminating in Cayman.

Now add a policy amendment. The company:

∙ does not keep VPN logs, and

∙ explicitly encourages employees to use the corporate VPN whenever not doing so could put company business at risk.

During orientation — which, naturally, happens in Cayman! — this is explained plainly. There may be content that is legal in Cayman but problematic when accessed while traveling in the UK. The company wants its employees safe, comfortable, and able to do their jobs without unnecessary exposure.

So the guidance is simple: if you are unsure, use the corporate VPN. The cost is negligible. The risk reduction is not. Better that traffic stays private than visible to hotel staff, local ISPs, or anyone else who does not need to see it.

Employees comply. They use corporate hardware. They use the corporate VPN — as required. From the UK ISP’s perspective, they are simply connected to a Cayman endpoint. Work traffic, personal email, private messages during natural breaks in the workday — all indistinguishable.

So where is the catch?

To be clear, I am not endorsing using VPNs to break laws. This is a thought experiment. If someone connects to a VPN specifically to access content they are forbidden to access locally, that is not defensible. But that is not what this scenario is about.

What, then, is the status quo?

Will the UK refuse to allow people to connect to corporate VPNs unless those VPNs provide government backdoors? Will it make it illegal for foreign companies to operate in the UK without traffic inspection capabilities?

I am trying to understand where the line is supposed to be between:

∙ protecting traffic for legitimate reasons — corporate security, privacy, risk management, and

∙ protecting traffic for questionable reasons — accessing things one should not.

These two are technically indistinguishable.

No country is trying to stop visitors from China from reading Wikipedia. China may disagree, and China may want to enforce its own rules later — that is a separate issue. But my hypothetical runs in the opposite direction. The Cayman Islands is a reputable jurisdiction that happens to trust its people to know what not to look for online.

So what is the right moral compass here? And more importantly — where do we expect this to go over the next few years?

Because the Internet does not recognize borders. But laws increasingly pretend that it does.

PS: I do not know whether the Cayman Islands allow online adult content. But my hypothetical argument should hold regardless.
Looks like my most valuable software development & architecture skill of the past ~ten years is indeed only getting more valuable.

I love producing small, clean, self-contained examples. To understand various concepts better, to explain them better, and ultimately to pick which ones to use and which to ditch.

And this skill is very, very well aligned with AI-assisted coding!

Because the AI can hack up most simple examples well, and it can tweak them almost perfectly and almost instantly. What it lacks is the sense of beauty.

Both in clarity — is the code aesthetically pleasing to read? And in durability — if we introduce this code to a team of fellow humans, will it proliferate through the codebase in a good way, or will it grow like a bad tumor?

Perhaps in 5+ years my full-time job will be trying out various patterns with and without AI, and labeling them — manually, with experts, with the general public, and with, well, other AIs.

And then maybe people like me will be designing programming languages for the 21st century — because we're long overdue.
🔥42👍2