OAuth 2.0 authorization in PostgreSQL using Keycloak as an example
Support for the OAuth 2.0 Device Authorization Flow has landed in Tantor Postgres 17.5.0 (and is heading for PostgreSQL 18). This means you can finally log in to the database through Keycloak, which gives you a modern, secure access method well suited to cloud environments and microservice architectures.
This guide walks through the entire setup, showing how to get this new feature talking to Keycloak. We'll follow the full path — configuring the identity provider, preparing PostgreSQL, writing an OAuth token validator, and verifying the whole thing works from psql using the Device Flow.
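The article verifies everything from psql, but as a rough companion sketch (mine, not the author's) here is what the same Device Flow login can look like from application code. It assumes a libpq with the new OAuth support (PostgreSQL 18 / Tantor 17.5) built with libcurl, psycopg 3 forwarding the OAuth keywords to libpq, and invented names for the Keycloak realm and client.

    import psycopg  # psycopg 3; extra keywords are passed through to libpq

    # Hypothetical names: replace the issuer with your Keycloak realm URL
    # and the client ID with the public device-flow client you registered.
    conn = psycopg.connect(
        host="db.example.com",
        dbname="appdb",
        user="alice",
        require_auth="oauth",  # refuse to fall back to password auth
        oauth_issuer="https://keycloak.example.com/realms/demo",
        oauth_client_id="psql-device",
    )

    # On connect, libpq runs the Device Authorization Flow: it shows a
    # verification URL and a one-time code; approve it in a browser and
    # the session opens like any other.
    with conn, conn.cursor() as cur:
        cur.execute("SELECT current_user, session_user")
        print(cur.fetchone())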
How we boosted SQL query accuracy by 33% with LLMs
An LLM-based SQL generator seems like an obvious win. Just hook up a powerful model's API, grant it database access, and... fire your human analyst? It isn't that simple: in reality, no company in its right mind will pipe sensitive data into an external API from OpenAI or Anthropic.
So, self-host? Good luck. Open-source models often choke on complex schemas or the quirks of a specific dialect such as PostgreSQL 17's, and training them is a costly nightmare. "Just use an LLM" turns out to be a non-trivial engineering problem. We dove into this mess and found a way to boost accuracy by 33%. Let's explore how to actually tackle it.
Privacy on Mobile: a practitioner’s checklist
Privacy has always been a high-stakes game, but the AI wave and our data-hungry economy have turned our phones into the main playing field. Every digital crumb is an asset. While some users are savvy, relying solely on "user awareness" is a losing strategy. The first line of defense isn't the user; it's the developer.
This isn't just another compliance lecture. It's a practitioner's mental model for how to frame decisions around privacy from the ground up. Let's dive into the concrete checklists and practical examples that help build that defense.
Why "Test Everything" Destroys Your Data Quality (And 4 Tips to Fix It)
That "test everything" principle? It’s not improving data quality—it’s actively destroying it. Teams get buried in hundreds of useless alerts, creating so much noise that the really important signals get lost. Just ask Google and Monzo; they've already abandoned this approach.
The smart move is shifting from blanket testing to precise, targeted checks at nodes with the greatest impact radius. It turns out one well-placed test at the source is worth more than a hundred checks downstream. Let's dig into the four tips for making that shift and building products that actually work.
That "test everything" principle? It’s not improving data quality—it’s actively destroying it. Teams get buried in hundreds of useless alerts, creating so much noise that the really important signals get lost. Just ask Google and Monzo; they've already abandoned this approach.
The smart move is shifting from blanket testing to precise, targeted checks at nodes with the greatest impact radius. It turns out one well-placed test at the source is worth more than a hundred checks downstream. Let's dig into the four tips for making that shift and building products that actually work.
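To make the "impact radius" idea concrete, here is a toy sketch (mine, not from the article): given a lineage graph of your tables, rank each node by how many downstream tables it feeds, and spend the test budget at the top of that list.

    # Hypothetical lineage graph: each table maps to the tables built
    # directly from it (source -> downstream consumers).
    LINEAGE = {
        "raw_orders": ["stg_orders"],
        "stg_orders": ["fct_orders", "fct_refunds"],
        "fct_orders": ["finance_report", "ops_dashboard"],
        "fct_refunds": ["finance_report"],
    }

    def impact_radius(graph: dict[str, list[str]], node: str) -> int:
        """Count every distinct table downstream of `node`."""
        seen, stack = set(), [node]
        while stack:
            for child in graph.get(stack.pop(), []):
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return len(seen)

    # Rank candidate test locations: the biggest radius wins the budget.
    nodes = set(LINEAGE) | {c for kids in LINEAGE.values() for c in kids}
    for name in sorted(nodes, key=lambda n: impact_radius(LINEAGE, n), reverse=True):
        print(f"{name}: affects {impact_radius(LINEAGE, name)} downstream tables")

In a real pipeline the graph would come from your orchestrator or dbt manifest rather than a hand-written dict, but the ranking logic stays the same.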
22 Affordable VPS/VDS Hosting Providers for Personal and Business Use (2025-2026)
Finding a VPS that is cheap, fast, and reliable often feels like an impossible triangle—you usually only get to pick two. When you're launching a pet project or scaling a business, picking the wrong host means future downtime and serious headaches.
We've reviewed 22 trusted providers relevant for 2025-2026, comparing them by the metrics that actually matter: real pricing (no hidden fees), uptime guarantees, features, and whether their support actually responds. Let's see who truly delivers on the balance of price and performance.
LLM as a Resonance-Holographic Field of Meanings
Ask an LLM the same question twice, and you might get a flash of novel insight followed by something completely banal. We constantly argue: is it truly creative, or just a statistical parrot? Some see sparks of a new mind, while others (correctly) point out it's just an archive. The confusing part is that both arguments feel right.
This paradox might exist because we keep trying to analyze the LLM as a standalone object, which could be the wrong approach entirely. The crucial question isn't what the model knows or can do, but what it fundamentally is. Let's explore this "resonance-holographic" perspective.
What is design thinking and how to implement it in UX design
Design thinking is a customer-focused, non-linear iterative approach to finding creative solutions. It’s the process that guides cross-functional teams to deeply study their users, tackle complex problems, and genuinely think outside the box to build an intuitive, human-centered product.
It's less a rigid formula and more a mindset for driving real innovation. Let's dive into the core stages, principles, and goals of this important process and see the positive impact it can have on your design teams.
The LLM's Narrative Engine: A Critique of Prompting
If an LLM is a vast "holographic field" of probabilities, how does it decide what to say? A static landscape is just potential; something must drive the movement from one specific answer to the next. This is where the Narrative Engine hypothesis comes in.
This engine describes the dynamics of the LLM's "mind," not just its static structure. It's the mechanism that forces its probabilistic calculations to follow coherent pathways, essentially binding it to the rules of a story. This perspective changes everything about prompting: we aren't programming a machine, we are initiating a narrative. Let's delve into this critique.