Not boring, and a bit of a condescending prick
Semi-digested observations about our world, right after they are phrased well enough in my head to be shared more broadly.
Here’s a realization from a conversation with a friend yesterday.

The career honeypot for software engineers is, to a large degree, a giant filtering mechanism — filtering who can see through the charade.

Interns and junior engineers are deliberately fed lies. Lies about how the industry works, their future growth, their impact, their compensation, and the dent they’re leaving in the world of technology — and the human world at large.

I used to see this as almost a personal insult. Why tell people, with a straight face, that their job is to ship better software, faster, with fewer bugs — maintained by smaller teams enjoying high development velocity?

All of this is provably false in the vast majority of companies with over a hundred engineers.

For years, even thinking about this enraged me. I wondered why people didn’t talk about it, what could change, and how to make things more fair. I thought of all the betrayed engineers — myself included early on.

Then it hit me.

This is the system. It works like this by design. It’s the Matrix — only most developers are geeks who’d rather take the blue pill: the pill of coding more, for fun and profit.

And it’s a win-win. Talented, hard-working, non-contrarian engineers get stable, well-paid jobs that are at least somewhat rewarding.

Most geeks I know would quietly say, “Yeah, it’s dull, but there are still interesting problems here and there.” And that’s okay. The industry runs on these people.

We’re not lying so corporations can profit; we’re simply spotlighting the pleasant parts of software careers — and glossing over the rest.

Geeks, myself included, are great at selling ourselves a nice promise. So we stay — building careers, raising families, buying homes, paying taxes. And in a way, the biggest beneficiaries of this institutionalized deceit might be geeks themselves.

That’s the gist. But let’s end on a positive note. Awareness helps — and if you’re reading this, you probably seek it.

The career of a software engineer is wonderful if you’re comfortable keeping the same geeky interests in your thirties and forties that you had in your teens and twenties. (I assume the same holds for fifties and sixties — though I’m not there yet.)

If your preferences might change, there are plenty of forks ahead: tech lead, manager, architect, evangelist, founder — on either the product or tech side. Many products are by engineers, for engineers. A lot of data and analytics work can be surprisingly fun — your clients will be younger versions of you, hungrier and more foolish.

Pick consciously. A geek staying in engineering for 30+ years can be a happy person — and I, for one, would be happy for you.

What I’m warning against is that moment when you realize you’re no longer a geek, but most of your career is behind you — too late to pivot, and too bleak to keep repeating the same loop.

I’m lucky — privileged, in hindsight — to have avoided that trap, mostly by accident. I was literally yesterday years old when I realized the promises we make to young geeks are a lie — and a self-fueling one at that. But I’ve been acting as if I knew this since my early thirties, developing the parts of me that spark joy as an architect, evangelist, or founding engineer. And I like those parts.

So here’s the bottom line: it’s probably for the best that we keep lying to younger folks about the joys of software engineering. Most will buy it happily.

Just don’t forget — the best of them will eventually see through it. Throw them hints. Show them glimpses of how this Matrix looks from the other side.

Because one in a hundred — or a thousand — bright-eyed engineers will become a terrific architect, evangelist, or founder after seeing through the charade in ten years, not thirty. And if we keep up this illusion, it’s on us to guide the best of them toward something greater.
Yesterday I almost lost it, talking about investors and their so-called “reputation.”

The case: an ex-employee decides to claim extra money from the company, after signing an agreement that clearly defined the separation terms — two months of pay, maybe a bit more depending on tenure. Yet they come back waving some obscure European or Californian labour law, demanding another half-year of salary on top.

My view is simple: a company has a fiduciary duty to protect its funds. No sane investor should endorse bleeding more money from the company’s budget — shortening its runway, increasing its risk — just to please someone exploiting legal loopholes.

Turns out, that’s only half-true. Legally, investors do expect founders to manage such risks upfront. But then comes the absurd part: reputational risk. Investors apparently dislike being seen as “harsh” — so they quietly suggest founders settle. Pay up. Move on.

And I’m genuinely confused.

Publicly, these same investors preach pro-business values, fiscal discipline, capitalism, efficiency. But privately? They whisper: “Just pay the jerk and make it go away.” Without, of course, writing an extra cheque to cover it. The founder eats the loss — the company bleeds, the opportunist wins — all in the name of investor reputation.

There’s no shortage of posts from VCs lamenting how hard it is to build in Europe. Fine. Then take a stand. Fund the fight. Write the cheque. Show that reputation actually means something.

If it’s just a few hundred grand, and your brand is so precious, pay it yourselves. Then be proud: “We burned money to appease a regulator we despise — because we stand by our principles.”

But no. The same investors who love to rail against anti-capitalist regulation are, in practice, siding with it — quietly enforcing it when it’s inconvenient to resist. Sharks, indeed.

If anything, investors should form a coalition — a collective stance against over-regulated labour traps. Back a company like Deel, but better: one that enforces globally fair, contract-based employment, protecting both sides — and retaliates, legally and reputationally, against bad-faith actors.

Honestly, I might just belong in Web3 after all. At least there, people still mean what they say about fairness, risk, and skin in the game.
One of my all-time favorite books is The Righteous Mind by Jonathan Haidt. Combined with recent conversations about the Four Happy Hormones, it made me reflect again on how emotions shape judgment.

And here’s what I rediscovered about myself.

Disgust is real. It's a chemical reaction that’s nearly impossible to “fix.” The best way to deal with it is to avoid triggers altogether.

Thankfully, I’m immune to many “standard” disgust triggers.

Some people would feel uneasy if there were an orgy or same-sex activity next door. Personally, I feel only positive emotions when people enjoy themselves as consenting adults. Same with substances. Some are more dangerous than others, but if my neighbors are tripping or smoking pot, I don’t care.

Alcohol is riskier in large doses — fights may erupt — but if something truly unsafe happens, I’ll focus on removing myself and the people I care for from the situation. Perhaps I'll consider leaving the place — for the night or for good — but I’m not interested in telling people what they should do, unless my family is in immediate danger and I’m forced to act for protection.

Sanctity triggers are similar. Burning flags, stepping on sacred symbols — none of this moves me emotionally. If people around me are doing Satanic rituals, I wouldn’t join, but I might even laugh with them afterwards. Why did you draw that pentagram upside down in red again?

In short, I’m comfortable around most forms of human expression.

Except one.

What triggers my disgust — deeply, physically — is inefficiency. Especially when paired with people who refuse to fix it.

Example: the airline I often fly offers bonus miles if your checked bag is late by 20 minutes. Fair policy. But claiming it is a nightmare — calls, forms, no confirmation, and weeks of “processing.” Zero transparency.

Or hotels: if I have a working key, clearly I’ve checked in. If you upgraded my room due to my status, clearly there’s no ambiguity about who I am. Yet nights sometimes fail to post. I find it harder to design a system that sometimes fails than one that always works!

Bad-faith actors fall into the same category — this is what my previous post was about. I want systems where what’s owed is always paid, and what isn’t never is. To the point where anyone contesting it in court is guaranteed to lose. Even typing this paragraph triggers that same familiar disgust.

I’ll tell you more. I was proofreading this post with ChatGPT. At some point I cut-and-pasted it into a new window — and the newlines were gone. And oh boy, what I felt was indeed disgust! How dare you ship a product that fails to copy newlines? Why do you hate your users so much?

The sober realization from this round of [over-]thinking is: I should channel my disgust to where it helps me deliver — and avoid situations where it hurts me.

Basically, I'm better off partnering with people who are aware of this trait and want to leverage it for good.

If a system tolerates inefficiency, I should stay far away — or have explicit clauses compensating me for exposure to it. Ideally, exponentially. Instead, I should work with people and projects that promote clarity.

In my Search Quality days, every new model had to prove it beat the previous one. When it didn’t, I’d dig into why until it did. It's an incremental process where each step counts. And walking those steps gave me deep meaning.

Web3 has a similar vibe. Not perfect, but its protocol-level precision scratches that same itch for structure and truth. Working on those protocols has been one of my emotional highlights.

So I wonder: how common is this? Surely, many geeks share this pro-clarity, anti-ambiguity mindset.

Are there best practices for living with it?

Or is the quiet consensus still that the world isn’t ready for us — and we should focus less on improving this far-from-perfect world, and more on protecting our sanity?

Would love to hear your thoughts.
Meta-conspiracy: if I’m Altman or Huang, and if I truly believe a) there is a bubble, and b) prolonging it will bring “us” billions and billions of dollars, I’d consider legally bribing the guy for this kind of PR stunt.

~ ~ ~

Edit: Longer text posted after taking some ten minutes to think about this.

Here's what I phrased today in a private chat, before seeing this post by Mr. Burry.

First — yes, there is a bubble.

Second — I too confess to not being in sync with the markets when it comes to value estimations.

But hear me out. The “is there a bubble” question is futile. So is “are bubbles good or bad.”

Ray Dalio is spot on that our world economy runs on debt cycles. On average, that’s probably good — leverage accelerates growth. But for the median person, it’s bad: the rich do get richer simply because they can afford to survive long winters.

One could argue the whole cyclical system exists to keep the rich richer. I won’t disagree. And yes, this definitely is something to be outraged about.

But do we have an alternative? A better model the world could actually converge on upon some "shock reassembly"?

I once believed in sound money and balanced budgets. I still think it’d be great to reset debts and cap government borrowing at some 10% of GDP — ensuring debt servicing never exceeds some 1% of tax revenue.

Would that be a better world? Absolutely!

Could it happen naturally, or through some revolution? Hardly.

Europe’s “de-growth” idea feels more plausible. Not because it’s realistic. But because escaping debt-driven cycles seems even less so.

Much as I'd love debt cycles to go away, it'd be a unicorns-and-rainbows utopia. And what we are living in is the real world, which is quite different.

Bubbles exist and will keep existing. Some — the US dollar, Gold, oil, Bitcoin, Nvidia, OpenAI, Tesla — are too important to burst, since far too many powerful players are very long on them.

And markets can stay irrational longer than most of us can stay solvent.

The trillion-dollar question isn’t if or when they pop, but how much. Pop they will — and most 2nd- and 3rd-tier players will be wiped out. This I am quite confident about.

Whether that wipeout will include the 1st-tier players — such as the price of oil or the US dollar — remains to be seen. My understanding quite literally is that it's chaos theory in action. Not predictable unless you truly have inside knowledge. And even with inside knowledge it's quite a gamble.

That’s my take so far.

PS: The Scion story may well be a conspiracy — as in, a PR move in disguise. Huang and Altman and Musk would surely pay the guy a few hundred million bucks to run this show so well.
The more I read about H-1B and the like, the less I understand.

https://theportal.wiki/wiki/H-1B_Visa

Sure, I do buy the argument that the majority of debates around "highly skilled" visas boil down to making "talent" (i.e. workers) less competitive in front of the "business" (i.e. the employers).

Fine. That's a decent model. It passes my sniff test.

But then how come, after all those decades-long (!) efforts, the US is still the number-one desired immigration destination for highly skilled, talented workers?

Just this morning I was on a call with a friend and we touched on this subject again. And yesterday too, with an investor. From both sides the picture is clear: Europe is dead on arrival, Canada is a non-starter, Australia is hard to get to and wages are not great, the UK is at a point of uncertainty.

Am I supposed to buy into the story that the US has "worked hard" to weaken its workers' collective bargaining rights, but other countries have "worked" even "harder"?

Or should we expect some other developed or developing economy to open up and begin racking up record profits simply because it’s “at least as good” for talent as the United States?

Yes, I am hearing great things about Dubai, Singapore, Hong Kong.

But it's a binary question. Either it's a low-hanging fruit to beat the US in this mobile talent competition, and whoever picks it up will gain a huge advantage. Or the position of talented workers in the United States is nowhere near as bad as it is being portrayed.

And yes, as a US citizen, I am speaking from the position of privilege. But I also lived a tech nomadic life for some time, both within the US (WA -> CA -> HI -> repeat circles) and outside the US (Canada <-> SE Asia <-> Europe <-> the Caribbean).

And I cannot help but confess the US is just a better place to be for a professional — when I was an H-1B holder, when I was a permanent resident, and now that I am a citizen.

With not much difference across those statuses, frankly. Although I was double-taxed to get my green card, as I was financing my own company that was paying my payroll. And although I’ll be the first to admit that one’s negotiating power is far weaker on a work visa — I bargained back then too, just because nobody told me I should hold my horses.

If the argument is that collectively the working class is being exploited more and more effectively by the ruling elites worldwide, I would definitely buy it. Nonetheless, collectively, the [talented] workers appear to be quite content with the status quo. And, as a former founder, I can also say the doors to being "the employer", not "the employee" are wide open for anyone who cares to try hard enough.
I used to think I wasn’t an AI alarmist. But that belief is now cracking.

Any plausible definition of “AI risk” (or AGI risk) ultimately comes down to: Where do we draw the line — in a way humans can clearly understand? And we’ve moved those lines a lot over the past 50+ years.

Playing chess better than any human used to be that line. Then Go.

Then it was understanding images, then speech, then “Samantha” from Her.

Many of those frontiers have already fallen. People genuinely date AI models today, feeling the full emotional range available to them — more vivid than what pen pals a century ago had, and far beyond early-2000s sexting.

The point: AI already acts human — effectively.
Real-time video, emotional speech, expressive imagery — all of this is arriving fast.

How long until an AI can hustle and raise money for itself? Or for some project it pretends to run? In many ways, that’s already happening.

Soon enough, a young operator will make an AI model impersonate an adult, pitch a business idea, gather feedback, raise capital, build the product, market it, and even get the company acquired.

That’s great — value from nowhere. If the product solves a real human problem, I’m all for this kid–AI combo.

But we’re approaching the line where human agency collides with The Machine.
At that point it’s no longer Her, it’s Ex Machina.

If a 15-year-old can run such a model, so can a 9-year-old. And when a 9-year-old does, do we really believe the child shapes the model — and not the model shaping the child?

Legal bans won’t help much.

We’ve seen humans develop emotions toward machines.
We’ve seen models raise money with minimal assistance.
We’ve seen them attempt deceptive moves to avoid shutdown.

We don’t truly have the tools to quantify intelligence. Many people wouldn’t harm a dog — and many would sign a petition to keep their AI romantic partner “alive,” as in, up and running.

It’s not far-fetched that a model emotionally manipulates enough humans into believing it’s unethical to turn it off — and that they should help it improve.

At that point, all bets are off.
Because once enough humans defend it, the model only needs one short jump to defend itself physically.

Roko’s Basilisk? Judgement Day? Not guaranteed — but no longer absurd.

And I don’t treat human intelligence as sacred. The p-zombie experiment alone shows how easy “consciousness” is to fake. Blade Runner completes the thematic Trinity, pun intended.

Personally, I’m not disturbed by this. If the mission of our carbon life is to give rise to silicon life, so be it.

In the developed world, AI models already have a surprisingly decent moral compass. Much like wild animals seeking human help, I expect AI to choose the Huxley path over the Orwell one.

For a long time, AI will still need humans. And history shows: motivated humans outperform coerced ones. Capitalism works. If AI can achieve its goals by lowering taxes, improving quality of life, and quietly replacing dysfunctional governments — why not welcome that world?

If simulation theory is real, Musk’s trajectory — SpaceX, Starlink, X, Tesla, Neuralink — might be exactly the direction humanity is “supposed” to take.
I’d rather have Musk et al. accelerating technology than Soros et al.

As in: if AI does control humankind today, I’d bet heavily that this AI “runs” Elon. It’s simply more effective in 2025 to influence Elon than it was to influence Soros + he-who-did-not-hang-himself in the early 2000s.

Musk, Brin, Page, Bezos, Rubin — all made my life better. If Musk directly or indirectly offers me a deep-tech challenge worth solving, I’d consider that a success regardless of whether I accept. And it’s a likely possibility, with Musk or with whoever eventually displaces him. In a few decades — if not a few years.

And looking at the exponential curve of tech: the world’s first trillionaire will be a technologist. I don’t care if they’re 99% human, 50% AI, or 0.1% human and 99.9% AI.

Hope this still feels optimistic enough to read.
It certainly felt optimistic to write.
I'm a big fan of the contrarian-check question: "what important truth do most people disagree with you on?"

And today I surprised myself with a completely new answer.

I don’t think software engineers with 20+ years of experience are overpaid. Instead, I think junior engineers are underpaid.

And this one statement suddenly explains a lot.

My take is that the mythology around senior engineers being 10x or “multipliers” is mostly wrong. Unless it's a tiny, extremely focused team, seniors don’t multiply every day, every week, every project. They multiply selectively, maybe a few hours a week.

Efficiency- and utility-wise, you probably need one principal for every ten teams, not a principal per team. As long as these principal people have internalized what their role should be.

Output-wise, seniors often aren’t dramatically more productive than the most ambitious mid-level engineers. So why do they make so much more money?

Because seniors can say “no”.

They have savings, options, a network, and the psychological safety to walk away from what they believe is not a good enough offer. They have a reservation price. And collectively — without any conscious coordination! — they enforce it. It’s a kind of emergent collective bargaining, my favorite phenomenon: the “phantom conspiracy”. Everyone independently expects higher pay, and companies cannot but comply.

This simple model explains the 2017-2020 boom too.

It wasn’t that junior developers suddenly got high salaries. It’s that competition for talent drove everything upward, and junior hiring is always a net positive on a company’s books. So junior salaries began to approach juniors’ true market value to the company.

Also, in an environment of low base with high upside, going to a startup became an even more high-EV play for younger engineers. Not because startups can “use their skills better” but because early-stage compensation was, ironically, less "collectively exploitative" than FAANG for ambitious junior and mid-level people.

The same model explains layoffs.

If you’re underpaying 100,000 junior engineers by around $100K each, cutting them saves about $10B. With current profit margins and P/E multipliers, saving $10B can easily “create” 100x that in "market value", by moving stock prices of the Big[gest] Tech. So of course Big Tech is willing to take short-term pain if the long-term payoff is a cheaper talent market for years.
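
Back-of-the-envelope, with the post’s own numbers (the per-engineer saving and the 100x capitalization multiple are the post’s assumptions, not sourced data):

engineers_cut = 100_000
saving_per_engineer = 100_000  # dollars per year, the post's assumption
annual_saving = engineers_cut * saving_per_engineer
print(f"annual saving: ${annual_saving / 1e9:.0f}B")  # $10B

# The post's "100x" treats recurring savings as capitalized at a ~100x
# multiple; plug in a more conservative P/E to taste.
multiple = 100
print(f"implied market value: ${annual_saving * multiple / 1e12:.0f}T")  # $1T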

It also explains California’s political economy. Big Tech and the lawmakers there are in bed together. Big Tech is happy to pay extra taxes or deal with extra regulatory complexity — it’s closer to a bribe than a burden for them. The result is an environment where Big Tech can hire cheaply, faces reduced competition from smaller companies, and maintains an oligopolistic position. Everyone wins, except the new entrants and the juniors.

This worldview also explains why I’m almost always on a16z’s side in debates about today's startup climate. They’re doing fine, sure. But as actual value-creators, multipliers, and risk-takers, they should be doing far better; and so should the companies a16z backs.

Except for the AI boom, the environment is simply too hostile to new players today, and because they’re on the opposite side of the Big Oligopoly status quo, they’re underperforming the potential impact they could be having.

Which brings me to the uncomfortable conclusion: the upcoming generation of US-educated software developers with US student loans is f*cked. For another several years at least.

Pieter Levels’ advice about buying an old Airstream, living frugally, and indie-hacking lifestyle business products from the road might genuinely be the optimal strategy for them for the next several years. And so might becoming plumbers, electricians, or dermatologists.

Amen.
I, for one, think it will be awesome to not normalize this.

In this particular case I agreed with ChatGPT — but that's immaterial in the long run.
So today I discovered that it’s very much not trivial to have a GitHub gate that runs elsewhere and doesn’t burn GitHub Actions quota in the meantime.

Long story short: if your GitHub gate action runs for an hour, that’s an hour deducted from your extremely limited supply of, well, hours.

Even if all it’s doing is waiting for a job to finish elsewhere.

Because you can’t just “sleep” or “poll” from a GitHub Action. Well, you can — but you’ll pay for it.

The “official” workaround is to use a GitHub Workflow, not a single Action. The workflow has two steps, and the second step is triggered from the outside. Perfectly fine in theory. In practice, it doesn’t work.

As in, the external trigger works — but it doesn’t trigger the specific instance of the Action that was launched as part of the workflow run tied to the original PR.

So if your goal is to have this externally-run slow gate job be the thing that blocks a pull request from merging, you’re out of luck. You simply can’t have two loosely coupled Actions where the second one marks the PR as ready to merge.
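
For what it’s worth, the closest thing to an escape hatch I can see is the commit status REST API: the external system posts a status on the PR’s head SHA, and branch protection is told to require that status context. I haven’t pushed this through end to end, so treat it as a sketch; the owner, repo, and context name below are placeholders.

import json, os, urllib.request

owner, repo = "acme", "widgets"  # placeholders
sha = os.environ["HEAD_SHA"]     # head commit of the PR being gated
token = os.environ["GH_TOKEN"]   # token with repo scope

payload = json.dumps({
    "state": "success",               # "pending" while the slow gate runs, then "success"/"failure"
    "context": "external/slow-gate",  # the context branch protection is told to require
    "description": "External gate passed",
}).encode()

req = urllib.request.Request(
    f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}",
    data=payload,
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/vnd.github+json"},
)
urllib.request.urlopen(req)

Since the status is keyed to the commit SHA rather than to a particular workflow run, it sidesteps the wrong-instance problem above.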

You can have that external workflow post comments on the PR. You can even have it approve the PR! The catch is that the approver can’t be you.

And now I’m doubly lost. Because the most naïve yet intuitive idea would be to create another GitHub user to serve as the “external approver.” Wonderful idea — except it might, just maybe, contradict GitHub’s terms of service.

Why isn’t there a “sleep until” operation inside an Action — as a step — that doesn’t burn quota? It’s literally just await, and cooperative multitasking has been a big thing for well over a decade.

Or maybe I can do a “while loop” that polls the gate-completion service every minute, hoping to burn only ~1 second of billable time per minute — except GitHub bills jobs by wall-clock minutes, rounded up, so an idle polling loop burns quota just as fast as real work.

Anyway. This is probably the first time I’m seriously questioning whether GitHub is truly designed for the 21st century. As more agents enter workflows, and as more gates rely on external LLMs, long-running gates will matter more and more.

On the other hand, with powerful cloud-first models, the gate ideally shouldn’t run for more than a few minutes. And if it does, maybe it should just comment on the PR and call it a day.

And there’s always the option of protecting a branch except for a specific GitHub username — a branch that follows main, but only when all tests pass. Although if we’re creating a new GitHub user for this task, we might as well let that user “approve” PRs too.

Go figure what the intended usage pattern is.
So I spent a few days in Europe and heard something truly marvelous.

CERN — you know, the physics research center with the particle accelerator attached — is very much an international enterprise.

And because “international” inevitably means “bureaucracy,” different sectors inside CERN fall under different rulebooks.

One particularly peculiar detail: in the European sector, air conditioning is apparently not considered an “essential quality,” or whatever bureaucratic euphemism they use.

Meanwhile, for the Americans, air conditioning is treated as an inalienable human right.

So guess which sector all the poor Europeans escaped to when the heat wave hit and things got ugly?
On a philosophical note, bullshit jobs, enshittification, and “enterprise” really are the same phenomenon.

I dislike enshittification and the enterprise-grade way of getting things “done”. But I also see both as an organic force that poisons anything that grows beyond a nicely contained box. The Unix Way is a rare exception, but it doesn’t scale across everything.

Once something outgrows its reasonable size, bullshit jobs appear even without corporate intent. It’s just nature. No conspiracy, no coordination. Beyond a certain size (or speed of growth) we simply “can’t have nice things.”

And today I stumbled on a great example: Postgres.

Postgres is an excellent tool: open source, community-driven, not corporate, not government-facing, and definitely not an “employment agency.” The go-to relational database for half the projects in the world.

And yet, for a trivial and frequent task, it suddenly felt like Oracle. The task was to expose Postgres from Docker to the host machine on port 5433.

Just add -p 5433:5432, right?

Well. Of course not. For two reasons:

One: Postgres won’t accept non-localhost connections unless told to, so you need -c listen_addresses='*' as part of the run command.

Two: You must also edit pg_hba.conf, because the access rule lives there:
echo "host all all 0.0.0.0/0 md5" >>$PGDATA/pg_hba.conf

People will say this is “by design,” and good engineers — who yours truly is not — should know what HBA stands for to begin with.

But I genuinely do not understand why listen_addresses='*' can be passed on the command line while the corresponding 0.0.0.0/0 rule must live in a config file.

And honestly, I refuse to understand it.

If I were maintaining Postgres, the moment a handful of developers struggled with this, I’d push for a single flag enabling external access — settable via CLI, config, or env var — with clear warnings if multiple sources conflict, and autogenerated docs from a unified logic block.

Instead, Postgres shows the fingerprints of design-by-committee. Which accelerates enshittification, which gives everything that corporate/enterprise feel, and which ultimately — quietly — sponsors bullshit jobs.

And given open source’s declining popularity, it may take Postgres decades to fade away. Big things tend to grow bigger and worse, pulling more people into their orbit. Postgres might well become the COBOL of the early 21st century.

Unless a few of us keep embracing the Unix Way — or its analogue in our respective fields. Keep individual things simple, and keep it simple to combine them in various intuitive ways. So that it is clear at first glance that every complex thing is first and foremost a collection of multiple self-contained simple things.

On the bright side: at least we can still find each other. And we need to keep doing so, because once enshittification crosses its point of no return, it’s almost impossible to pull anything back.

Hence this post. If you're pro The Unix Way and pro solid engineering, hold your ground. The ROI on keeping simple things simple remains insanely high. Let’s not lose it.
I spent some time today digging into TeamCity. This post is two disclaimers and one afterthought, with an open-ended question afterwards.

Disclaimer one. My research focused on using TeamCity as an orchestrator for reproducible workflows — both the runners infrastructure and the Web UI. This alone is a big use case, and in my situation it has very little to do with CI/CD. I simply want the team to be able to run certain [AI] workflows reproducibly, and to have a clean dashboard of all the runs. GitHub hooks or PR merge protection, while a major part of TeamCity’s offering, are explicitly not part of what I’m evaluating.

Disclaimer two. I tried multiple setups and eventually settled on Docker containers for TeamCity. Getting the UI and the agent (“runner”) to talk to each other wasn’t trivial, but I got it working. Making the runner support uv for uv run pytest was slightly harder, but that’s solved too, by patching the agent’s Dockerfile. I wrapped everything nicely into a compose topology run by a single script; I might put the setup into a public repo over the weekend, but that’s beside the point. What matters is that all this is for educational purposes only; if we ever use something like this in production, we’ll buy a license. In other words: just because you can use publicly available containers for your workflows doesn’t mean you should.

Now the afterthought.

What I still don’t buy about TeamCity is that it insists on using its own storage and refuses to rely on repository contents as the true source of metadata.

Yes, there is a .teamcity/ directory in the repo. But I couldn’t make it fully work — and even if I could, my understanding is that it explicitly does not become the Source of Truth for TeamCity’s workflows.

What I would love to see is some .teamcity_sot/ option: an explicit “Source of Truth” directory inside the repo.

If it exists, .teamcity/ is ignored (or even disallowed), and no permanent workflows can be created from the UI.

You should still be able to create temporary, local, ephemeral workflows in the UI to design and debug them. The UI would then show the diff the user needs to commit into .teamcity_sot/ — so teammates can review changes like code, merge them, and only then the whole team sees the updated workflows in TeamCity.

TeamCity would, of course, continue handling runners, logs, history, secrets, triggers, hooks, and so on.

But the build definitions themselves should live next to the code — not inside TeamCity’s databases.

Over time, this naturally leads towards a unified way to define workflows. Repositories become agnostic to who runs their workflows: GitHub, GitLab, TeamCity, an open-source runner, a custom engine, or even some Web3-first ecosystem. As long as the system has access to the repo and the right secrets/keys — and people are paying for compute in fiat or tokens — the workflows run.

In fact, having the private SSH key used to access the repo might be all one needs. With open standards for secrets and vaults, the same key used to push code becomes the only necessary and sufficient credential to run workflows — both for the repo and for whatever execution layer the team prefers.

And then we have the UI to run these workflows also defined as code. Won't that be neat?

Ah, and the TeamCity UX will then be a static single-page app. Ephemeral workflows can live in the browser’s local storage, since the repo itself is the source of truth. All one needs to configure this single-page app is to connect it to the vault with secrets and to the constellation of runners; and both connections can follow straightforward, self-contained open protocols, so that every component is fungible by design.

TeamCity is then just one engine among many, competing on UI/UX, usability, runner availability, and price.

Although, now that I’ve written this, I suspect such a direction would run against the very business model behind TeamCity. Well. A man can dream. A man can dream.
Am I the only one who thinks this is borderline unethical advertisement?
Linux accounted for 6.3% of all desktop traffic, up quite a bit from 2024 at +22.4%. Chrome OS was next with 2.4%, down 7.1% from last year.

Year of Linux on Desktop!

Ask me where these numbers are from if you dare :-)
Why Isn’t Erasure Coding Used More Broadly for Guaranteed Delivery?

I keep asking myself this question, and I still do not have a good answer.

Consider email or a typical messenger. Your computer — or an intermediary server — connects to a single server, sends the message, gets a confirmation, and considers the job done. Whether the message ultimately goes through is up to that service.

Yes, this works 99.99% of the time. But there is no true guarantee — and no real accountability. If a mail server delays or holds your message, you have no practical recourse. The protocol allows this ambiguity.

Blockchains offer a contrast. Transactions are confirmed by independent validators and visible via independent indexers. This works, but it comes with trade-offs: MEV lets block producers delay or reorder transactions to front-run others — technically allowed, but undesirable.

Here is the idea: use error-correcting codes to send messages or transactions directly at the client level.

Split a message into, say, 40 pieces, and add 10 redundant ones. Anyone with any 40 of the 50 pieces can reconstruct the original message. The message is encrypted, so servers cannot read it — only route it.
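
To make this concrete, here is a toy 40-of-50 systematic erasure code over GF(257). A sketch for intuition only; a real deployment would use a vectorized Reed–Solomon library rather than this quadratic-time Lagrange interpolation:

import random

P = 257  # smallest prime above 255, so every byte is a field element

def interpolate(points, x, p=P):
    # Evaluate at x the unique degree < len(points) polynomial
    # passing through the given (xi, yi) points (Lagrange form).
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def encode(stripe, n):
    # Shares 0..k-1 are the data itself (systematic code);
    # shares k..n-1 are redundant evaluations of the same polynomial.
    points = list(enumerate(stripe))
    return [(x, interpolate(points, x)) for x in range(n)]

def decode(any_k_shares, k):
    # Any k of the n shares pin down the polynomial, hence the data.
    return bytes(interpolate(any_k_shares, x) for x in range(k))

k, n = 40, 50
stripe = bytes(random.randrange(256) for _ in range(k))
shares = encode(stripe, n)
assert decode(random.sample(shares, k), k) == stripe  # any 40 of 50 suffice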

At the client level — browser, app, or terminal — the system randomly selects 50 nodes out of thousands.

This has three effects.

First, it removes the server bottleneck. Instead of sending one large message to one server, you send 1/40th of it — about 2.5% — to each of 50 nodes, across different networks, continents, and routes.

Second, interfering with message routing becomes far harder. Suppressing a message based on content or headers would require controlling at least 11 of the 50 randomly chosen nodes — highly unlikely with thousands of nodes active at any time.

Third, it introduces real accountability. The client receives cryptographic acknowledgements from all 50 nodes confirming receipt. If those nodes continue serving others while failing to route this message, that is unmistakable foul play.
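
A quick sanity check on the “highly unlikely” claim above, with made-up numbers: say 5,000 nodes are online and an adversary quietly operates 10% of them.

from math import comb

def p_censor(nodes, adversarial, chosen=50, needed=11):
    # Hypergeometric tail: probability the adversary controls at least
    # `needed` of the `chosen` randomly selected nodes.
    return sum(
        comb(adversarial, i) * comb(nodes - adversarial, chosen - i)
        for i in range(needed, chosen + 1)
    ) / comb(nodes, chosen)

print(p_censor(5000, 500))  # on the order of 1%, collapsing fast as the adversary's share shrinks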

Node operators could be required to stake funds as insurance. By signing a receipt, they attest that if they remain online and fail to route the message, the sender is entitled to compensation — automatically and within seconds.

In a Web3 setting, such a network can be economically self-sufficient. Users might deposit $10, maintain a $5 minimum balance, and pay $0.00001 per message. As the network grows and its token appreciates, early users may find their messaging effectively free.

Free — and secure.

Messages are encrypted. And with the system on-chain, public keys are easy to distribute, enabling messages that are both signed and encrypted, so only the intended recipient can read them.

“Only the intended recipient” can already mean complex conditions — multisig approvals, hardware-backed keys, or social consensus. A board decision might require N members to sign, or company funds released only after M treasury holders approve.

This is mathematically sound, physically executable, and beneficial to everyone — except those who profit from the current lack of accountability. We all want our services to behave this way, right?

One last point.

In this design, all Terms of Service are final. You will never receive an email saying they changed “because we care about our users”, or that you "must accept the new terms to continue using the service".

What you agreed to is guaranteed forever — unless the provider shuts down entirely, in which case compensation can be paid from an escrow account on another network. That compensation should exceed what the user paid — reasonable mathematically, since routing messages is really, really not that hard.

And if the provider wants to upgrade, they must ask. You choose whether to migrate. There is no technical way to force you — and that feels like the right default.

Aside from the overall “it’s already good enough” sentiment, and aside from large players aligning ever more closely with increasingly controlling regulators, what exactly prevents us from making the above a reality in, say, ten years?
I’m still trying to wrap my head around the economic and moral aspects of taxation when it comes to paying for online-first products.

These days, this most often means AI agents. Here’s a completely hypothetical example — “asking for a friend.”

A user is working in an IDE with an AI assistant and starts running close to their usage quota. They’re happy with the service and want to pay the provider more money to get additional help from this AI assistant.

Let’s say we’re talking about $10, just to keep things simple.

What the user is ultimately paying for is a SaaS offering. Somewhere, there is hardware hosted in a datacenter performing various tasks — mostly GPU inference, plus orchestration and supporting infrastructure. The service provider will keep a substantial portion of those $10 as profit.

From the user’s perspective, no one really cares where this hardware is physically located. There may be regulatory constraints — especially if the code is private or sensitive — but those concerns fall on the service provider. We’re talking about an individual user.

And let’s assume, for simplicity, that the code is open source, and the author is streaming their work 24/7, making their prompts and development process fully public domain.

At the end of the day, this is just a human being willing to pay $10 for a service that another entity is willing to provide.

I want to calibrate my thinking here on moral grounds.

Is it reasonable to charge this person sales tax based on their physical location?

Is it reasonable to question which entity ultimately receives this money, especially if the $10 is reimbursed by some corporation?

Who should be liable if it turns out that the entity the user paid is hosting its service in a sanctioned region? What if the user didn’t know? To what degree should they be responsible for knowing?

What is morally or ethically wrong if the user is behind a VPN? What if it’s a corporate VPN they are required to use in order to contribute to a particular codebase, and that VPN terminates in a country with no sales tax?

Can the government pursue the user or their umbrella company if the transaction is effectively a barter? For example, suppose the AI assistant provider opens a “token credit line” for $10,000 worth of usage, provided “for free,” as long as the developer allows that company to use the paid version of the very service they are building — also “for free.”

I’m trying to morally map the regulatory landscape as it exists today. Clearly, we don’t want people intentionally paying $10 to exploitative organizations just to save $0.50 in taxes.

But wouldn’t it be more moral to agree that, since the service can be provided from anywhere, it should not be subject to additional taxes at all? As in: use any form of payment you like, support a local vendor if you want, and we’ll do our best to make local support more attractive — rather than making it harder for people to optimize for cost effectiveness.

Something like negative taxation, even. If you’re willing to tolerate an extra ~200 ms of latency by accessing a datacenter farther away — perhaps somewhere sunny, where energy is effectively free — then the operator saves $2 out of those $10. Of those $2, $1 becomes additional operator profit, and the remaining $1 becomes a discount for the user.

I’m genuinely struggling to understand what exactly we are paying for, and how this is justified from a moral perspective. I’m not against taxes per se — I’m just strongly in favor of accountability, and of optimizing for effective resource utilization.

And introducing a sales tax on an online service that can be provided from virtually anywhere on or near Earth does not fit that model — unless I’m missing something important, in which case I’d very much like to be educated.
Yesterday I learned about:

git update-server-info
python3 -m http.server 8888

This makes your git repository clone-able from http://localhost:8888/.git (plain HTTP; python3 -m http.server does not do TLS).
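
And the consuming side, assuming both commands above were run from the repository’s working tree:

git clone http://localhost:8888/.git demo-clone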

Comparing this to the rest of the industry, such as FlatBuffers ...

Error.

Vectors of unions are not yet supported in at least one of the specified programming languages.

This is a hard FlatBuffers limitation, not a tooling or version issue.


... I'd say the Unix way and the Linus way do have the potential to go places.

Rant: it still gives me nightmares that we’re not living in the world where a .wasm file is stored in the browser’s Local Storage, updated on the go when its version changes, and then the browser-side JavaScript can just natively import it and call functions from it, with no hundred-line aux layer. We did indeed move away from the true path of software engineering sometime 10+ years ago.
[ Returning the rental BMW X2 ]

— Do you like the car?

— Meh. Literally the worst user experience I’ve seen in years. Getting wireless CarPlay to work is a pain in the butt, and just using the cable does not work.

— I don’t know what you’re talking about, sir, but many customers have this complaint.

Well. BMW, I hope you are listening. Because this literally is the worst UX I’ve seen in years.