PostgreSQL Pro | Database Mastery
🐘 PostgreSQL Mastery Hub

🎯 What you get:
- Daily optimization tips
- Performance guides
- Real-world solutions
- Query debugging help
- Production best practices

📈 Join 500+ developers improving their PostgreSQL skills
🏢 Every SaaS has the same question: how do you serve 100 customers from one database?

Three approaches. Each with tradeoffs.

1️⃣ Shared tables + Row-Level Security
One set of tables. Every row has a tenant_id.
PostgreSQL enforces who sees what.

2️⃣ Schema-per-tenant
Same database, separate schema for each customer.
tenant_alice.users, tenant_bob.users

3️⃣ Database-per-tenant
Fully isolated. One database per customer.
Maximum separation, maximum overhead.

Most solo devs should start with #1.
It's the simplest, cheapest, and scales surprisingly far.

This week:

📅 Tuesday — Shared tables with RLS (the setup)
📅 Wednesday — 💰 Complete multi-tenant system (3⭐)
📅 Thursday — When to switch approaches
📅 Friday — Check-in

If you're building a SaaS, this is your week.

Which approach are you using right now? 👇

@postgres
🔒 Row-Level Security in 10 minutes. Your tenants can never see each other's data.

The idea: PostgreSQL checks every query automatically. No WHERE tenant_id = ? scattered through your code.

Step 1: Add tenant_id to every table

-- (On a table that already has rows, add the column as nullable first,
--  backfill tenant_id, then SET NOT NULL. On a fresh table this works as-is.)
ALTER TABLE users ADD COLUMN tenant_id UUID NOT NULL;
ALTER TABLE orders ADD COLUMN tenant_id UUID NOT NULL;
ALTER TABLE products ADD COLUMN tenant_id UUID NOT NULL;

CREATE INDEX idx_users_tenant ON users(tenant_id);
CREATE INDEX idx_orders_tenant ON orders(tenant_id);
CREATE INDEX idx_products_tenant ON products(tenant_id);

Step 2: Create an app user (not the superuser)

CREATE USER app_user WITH PASSWORD 'strong_password';
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;

Step 3: Enable RLS

ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE products ENABLE ROW LEVEL SECURITY;
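
Worth knowing: superusers and the table owner bypass RLS by default (one more reason to connect as app_user). If the owner ever runs app queries, force it:

ALTER TABLE users FORCE ROW LEVEL SECURITY;
ALTER TABLE orders FORCE ROW LEVEL SECURITY;
ALTER TABLE products FORCE ROW LEVEL SECURITY;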

Step 4: Create policies

CREATE POLICY tenant_isolation ON users
USING (tenant_id = current_setting('app.tenant_id')::UUID);

CREATE POLICY tenant_isolation ON orders
USING (tenant_id = current_setting('app.tenant_id')::UUID);

CREATE POLICY tenant_isolation ON products
USING (tenant_id = current_setting('app.tenant_id')::UUID);

Step 5: Set tenant in your app (every request)

-- At the start of each API request (inside a transaction; SET LOCAL resets at commit/rollback):
SET LOCAL app.tenant_id = 'uuid-of-current-tenant';

-- Now every query is automatically filtered:
SELECT * FROM users;
-- Only returns rows where tenant_id matches. Automatically.

That's it. No middleware. No query wrappers. PostgreSQL handles it at the database level.
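
In practice each request becomes one small transaction. A minimal sketch (the UUID is a placeholder):

BEGIN;
SET LOCAL app.tenant_id = '11111111-1111-1111-1111-111111111111';
SELECT * FROM users;   -- only this tenant's rows
SELECT * FROM orders;  -- same
COMMIT;  -- the setting disappears with the transaction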

Tomorrow: the complete system with signup, onboarding, admin access, cross-tenant queries, and all the edge cases. 3⭐.

@postgres
PostgreSQL Pro | Database Mastery pinned «🔐 Complete Multi-Tenant System — Copy-Paste Ready What's inside: 📦 COMPLETE SYSTEM (3 ⭐) 1. TENANT MANAGEMENT - Tenant signup & provisioning - Tenant settings table - Plan/tier tracking - Soft delete & data retention 2. ROW-LEVEL SECURITY (FULL…»
🤔 Shared tables vs schema-per-tenant vs database-per-tenant. When to switch.

Here's the honest breakdown.

SHARED TABLES + RLS (start here)

✅ One schema, one migration, one backup
✅ Works with connection pooling out of the box
✅ Scales to thousands of tenants easily
✅ Simplest to build and maintain
❌ Noisy neighbor risk (one tenant's huge query slows everyone)
❌ Tenant data export is a filtered query, not a dump

Best for: Most SaaS apps. 1 to 10,000 tenants.

SCHEMA-PER-TENANT

✅ Stronger isolation (separate tables)
✅ Per-tenant backup/restore possible
✅ Can customize schema per tenant
❌ Migrations run N times (once per tenant)
❌ Connection pooling gets complicated
❌ 500+ schemas start to feel heavy

Best for: Apps where tenants need different configurations. Regulated industries where you need to prove data separation.
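
Provisioning a new schema tenant looks roughly like this (a sketch; the names are illustrative, and in practice your migration tool creates the tables):

CREATE SCHEMA tenant_acme;
CREATE TABLE tenant_acme.users (LIKE public.users INCLUDING ALL);
CREATE TABLE tenant_acme.orders (LIKE public.orders INCLUDING ALL);

-- Point a connection at the tenant's tables:
SET search_path TO tenant_acme, public;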

DATABASE-PER-TENANT

✅ Maximum isolation
✅ Independent backup, restore, scaling
✅ Can run different PostgreSQL versions
❌ Connection management nightmare
❌ $20-50/month per tenant for managed DBs
❌ Migrations across 200 databases = pain

Best for: Enterprise customers who contractually require separate databases. Very few solo devs need this.

THE DECISION TREE:

Building your first SaaS?
→ Shared tables + RLS. Don't overthink it.

Have a paying customer demanding isolation?
→ Schema-per-tenant for that one customer.

Selling to banks/hospitals with compliance requirements?
→ Database-per-tenant, but charge accordingly.

ONE TRICK: You can mix approaches.

Most tenants on shared tables.
Premium/enterprise tenants on dedicated schemas.
Same app code. RLS handles shared. search_path handles schema.

-- For shared tenants:
SET LOCAL app.tenant_id = 'uuid';

-- For premium tenants:
SET search_path TO tenant_acme, public;

Start simple. Upgrade individual tenants when they pay for it.

What's your current setup? Thinking about changing? 👇

@postgres
📊 Week 7 done. Multi-tenancy demystified.

This week:

✅ Monday — Three approaches compared
✅ Tuesday — RLS setup in 10 minutes
✅ Wednesday — 💰 Complete multi-tenant system (3⭐)
✅ Thursday — When to switch approaches

The takeaway: start with shared tables + RLS. Upgrade individual tenants when they pay enough to justify it.

---

RUNNING TALLY

Week 1 | Auth | 3⭐ | 1 ✓
Week 2 | Jobs | 4⭐ | 0
Week 3 | Performance | 3⭐ | ?
Week 4 | Search | 3⭐ | ?
Week 5 | Real-time | 3⭐ | ?
Week 6 | Backups | 3⭐ | ?
Week 7 | Multi-tenancy | 3⭐ | ?

---

NEXT WEEK: YOU DECIDE

We've covered a lot of ground. What do you need most?

🔴 Caching with PostgreSQL (replace Redis for sessions, app cache)
🟡 File storage & uploads (store and serve files without S3)
🟢 Database migrations done right (zero-downtime, rollback-safe)
🔵 Monitoring & observability (know what your database is doing)

Vote below 👇

See you Monday with whatever wins.

@postgres
πŸ‘1
πŸ“ You don't need S3 for file uploads. PostgreSQL handles it.

Controversial? Maybe. But hear me out.

If your app has:
- User avatars
- PDF invoices
- CSV exports
- Document attachments
- Images under 10 MB

You don't need a separate file storage service.

PostgreSQL has a built-in data type called bytea. It stores binary data directly in your database. And a feature called Large Objects for bigger files.

Why this matters for solo devs:

S3 + CloudFront: $5-25/month + complexity
Cloudinary: $0-89/month + API limits
DigitalOcean Spaces: $5/month minimum
PostgreSQL bytea: $0 extra (already in your database)

One less service. One less API key. One less thing that breaks at 2 AM.

This week:

📅 Tuesday — How bytea and Large Objects work
📅 Wednesday — 💰 Complete file system (3⭐)
📅 Thursday — When PostgreSQL storage isn't enough
📅 Friday — Check-in

Let's build a file system that lives next to your data. Where it belongs.

How do you handle file uploads today? 👇

@postgres
πŸ—„οΈ Two ways to store files in PostgreSQL. Here's when to use each.

METHOD 1: bytea (Binary Data)

Store files directly in a column. Simple. Works for files up to ~50 MB.

-- Create a files table
CREATE TABLE files (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
filename TEXT NOT NULL,
mime_type TEXT NOT NULL,
size_bytes BIGINT NOT NULL,
data BYTEA NOT NULL,
uploaded_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX idx_files_tenant ON files(tenant_id);

-- Insert a file (from your app)
INSERT INTO files (tenant_id, filename, mime_type, size_bytes, data)
VALUES (
'tenant-uuid',
'invoice-001.pdf',
'application/pdf',
48230,
'\x255044462d312e34...' -- binary content
);

-- Retrieve it
SELECT filename, mime_type, data
FROM files WHERE id = 'file-uuid';

Pros: Simple. Backed up with your database. RLS works on files too.
Cons: Large files bloat your database. TOAST compression helps but has limits.

METHOD 2: Large Objects

PostgreSQL's built-in file system. Better for files over 50 MB.

-- Store a large file (lo_import reads a path on the database server;
-- from a client machine, use psql's \lo_import instead)
SELECT lo_import('/path/to/bigfile.zip');
-- Returns an OID (object ID)

-- Track it in a table
CREATE TABLE large_files (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
filename TEXT NOT NULL,
mime_type TEXT NOT NULL,
size_bytes BIGINT NOT NULL,
loid OID NOT NULL,
uploaded_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Retrieve (lo_export writes to a path on the database server)
SELECT lo_export(loid, '/tmp/download.zip')
FROM large_files WHERE id = 'file-uuid';

Pros: Handles files up to 4 TB. Streaming support.
Cons: Slightly more complex API. Cleanup needs lo_unlink().
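
That cleanup gotcha in practice: deleting the row doesn't delete the object. A sketch:

-- Free the large object first, then drop the metadata row
SELECT lo_unlink(loid) FROM large_files WHERE id = 'file-uuid';
DELETE FROM large_files WHERE id = 'file-uuid';
-- The contrib tool vacuumlo can sweep up any orphans you miss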

THE SIMPLE ANSWER:

Files < 10 MB? → bytea (avatars, thumbnails, small docs)
Files 10-100 MB? → Large Objects or consider external storage
Files > 100 MB? → External storage (S3, disk) with URL in DB

Most SaaS apps deal with files under 10 MB. bytea covers it.

Tomorrow: the complete file system with upload API, download endpoints, image resizing, and storage quotas. 3⭐.

@postgres
PostgreSQL Pro | Database Mastery pinned «🔐 Complete File Storage System — No S3 Required What's inside: 📦 COMPLETE SYSTEM (3 ⭐) 1. FILE STORAGE SCHEMA - Files table with metadata + binary data - Folder/directory support - File versioning (keep history) - Soft delete with retention…»
⚠️ When NOT to store files in PostgreSQL. Honest take.

PostgreSQL file storage works great for most solo dev apps. But it's not always the right call. Here's where the line is.

KEEP FILES IN POSTGRESQL WHEN:

✅ Files are mostly small (<10 MB each)
Avatars, PDFs, CSVs, small images

✅ Files are tightly coupled to data
Invoice PDF belongs to invoice row
Deleting a user should delete their files

✅ You need transactional consistency
File upload + database insert succeed or fail together
No orphaned files, no missing references

✅ Your total file storage is under 50 GB
Fits comfortably on a standard VPS

✅ You want one backup for everything
Database backup = data backup + file backup

MOVE FILES OUT WHEN:

❌ You're serving files to many users simultaneously
PostgreSQL connections are expensive for streaming
A CDN serves static files much more efficiently

❌ Files are large (video, high-res images, datasets)
100 MB+ files bloat your database and slow backups

❌ Total storage exceeds 100 GB
Database size affects backup time and restore speed

❌ You need edge caching / global CDN
PostgreSQL is one server, not a global network

THE HYBRID APPROACH (best of both):

-- Store metadata in PostgreSQL
CREATE TABLE files (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL,
filename TEXT NOT NULL,
mime_type TEXT NOT NULL,
size_bytes BIGINT NOT NULL,
storage_type TEXT NOT NULL DEFAULT 'db',
-- For DB storage:
data BYTEA,
-- For external storage:
external_url TEXT,
uploaded_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Small files: store in data column
-- Large files: upload to S3, store URL in external_url

-- Your app checks storage_type and serves accordingly
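
A guardrail worth adding so a row can't claim DB storage without bytes, or external storage without a URL (a sketch; the constraint name and the 'external' value are mine):

ALTER TABLE files ADD CONSTRAINT chk_storage_location CHECK (
(storage_type = 'db' AND data IS NOT NULL)
OR (storage_type = 'external' AND external_url IS NOT NULL)
);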

Start with everything in PostgreSQL.
Move to hybrid when you have a reason to.
Not before.

Most solo dev apps never reach the point where PostgreSQL file storage becomes a problem. Don't optimize for scale you don't have.

What's your file storage situation? 👇

@postgres
📊 Week 8 done. Files without S3 — covered.

This week:

✅ Monday — Why you probably don't need S3
✅ Tuesday — bytea vs Large Objects
✅ Wednesday — 💰 Complete file storage system (3⭐)
✅ Thursday — When to move files out (honest limits)

The takeaway: store files in PostgreSQL until you have a specific reason not to. For most solo dev SaaS apps, that reason never comes.

---


8 WEEKS IN

We've built a lot together:

Auth → Jobs → Performance → Search → Real-time → Backups → Multi-tenancy → File storage

That's a complete SaaS backend. All PostgreSQL. No external services.

---

WHAT'S NEXT?

We're entering month 3. A few directions we could go:

🔴 Caching (replace Redis for sessions and app cache)
🟡 Database migrations (zero-downtime, rollback-safe)
🟢 Email & notifications (send from PostgreSQL, template engine)
🔵 Monitoring & observability (know what your database is doing)

Vote below 👇

Or tell me what you're struggling with right now. That's what we'll build.

See you Monday.

@postgres
⚡ You probably don't need Redis.

I know. Every tutorial says “add Redis for caching.” Every boilerplate ships with it. Every “production-ready” checklist includes it.

But if you're a solo dev or small team, Redis means:

- Another service to run and monitor
- Another thing that crashes at 3 AM
- Another connection string to manage
- $10-30/month on managed hosting (Upstash, Redis Cloud)
- Session data in one place, everything else in another

PostgreSQL can do it all:

✅ Session storage
✅ Application cache (API responses, computed values)
✅ Rate limiting counters
✅ Feature flags
✅ Temporary data with auto-expiry

And your data stays in one place. One backup. One connection.
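
A taste of what one of those looks like, a per-user rate-limit counter in plain SQL (a sketch; the table name and the 100-per-minute limit are just for illustration):

CREATE UNLOGGED TABLE rate_limits (
user_id UUID NOT NULL,
window_start TIMESTAMPTZ NOT NULL,
hits INT NOT NULL DEFAULT 1,
PRIMARY KEY (user_id, window_start)
);

-- One upsert per request; allow the request while hits stays under the limit
INSERT INTO rate_limits (user_id, window_start)
VALUES ('00000000-0000-0000-0000-000000000001', date_trunc('minute', now()))
ON CONFLICT (user_id, window_start)
DO UPDATE SET hits = rate_limits.hits + 1
RETURNING hits <= 100 AS allowed;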

This week:

📅 Tuesday — How PostgreSQL caching works (UNLOGGED tables, materialized views)
📅 Wednesday — 💰 Complete caching system (3⭐)
📅 Thursday — When you actually need Redis (honest take)
📅 Friday — Check-in

Let's kill another unnecessary service.

What are you using Redis for right now? 👇

@postgres
🧰 Three PostgreSQL caching tools you already have.

TOOL 1: UNLOGGED TABLES

Regular tables write to WAL (write-ahead log) for crash safety. UNLOGGED tables skip that, so writes are much faster. The trade-off: the table is emptied after a crash and isn't replicated to standbys, which is fine for a cache.

CREATE UNLOGGED TABLE cache (
key TEXT PRIMARY KEY,
value JSONB NOT NULL,
expires_at TIMESTAMPTZ NOT NULL DEFAULT now() + interval '1 hour',
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX idx_cache_expires ON cache(expires_at);

-- Write cache
INSERT INTO cache (key, value, expires_at)
VALUES ('user:123:profile', '{"name":"John"}', now() + interval '15 minutes')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Read cache (only if not expired)
SELECT value FROM cache
WHERE key = 'user:123:profile' AND expires_at > now();

-- That's it. That's Redis GET/SET with TTL.
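
One thing Redis does for free is key expiry. Here you sweep expired rows yourself (a sketch; run it from cron, or pg_cron if you have the extension):

DELETE FROM cache WHERE expires_at <= now();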

TOOL 2: MATERIALIZED VIEWS

Pre-compute expensive queries. Refresh on schedule.

CREATE MATERIALIZED VIEW dashboard_stats AS
SELECT
count(*) as total_users,
count(*) FILTER (WHERE created_at > now() - interval '7 days') as new_this_week,
count(*) FILTER (WHERE last_login > now() - interval '24 hours') as active_today
FROM users;

-- Refresh on a schedule. CONCURRENTLY avoids blocking readers, but it needs a
-- unique index on the view; for a tiny single-row view like this, a plain
-- refresh (milliseconds) is fine:
REFRESH MATERIALIZED VIEW dashboard_stats;

-- Query is instant — reads pre-computed results
SELECT * FROM dashboard_stats;

TOOL 3: GENERATED COLUMNS

Cache computed values directly in the row. Updated automatically.

ALTER TABLE products ADD COLUMN search_text TEXT
GENERATED ALWAYS AS (
name || ' ' || coalesce(description, '') || ' ' || coalesce(category, '')
) STORED;

-- No cache invalidation needed. PostgreSQL updates it on every write.
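
Generated columns need PostgreSQL 12+. The payoff: you can index the cached value and search it instantly (a sketch; the index name is mine):

CREATE INDEX idx_products_search ON products USING GIN (to_tsvector('english', search_text));

SELECT * FROM products
WHERE to_tsvector('english', search_text) @@ plainto_tsquery('english', 'wireless mouse');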

Tomorrow: the complete system — sessions, API cache, rate limiting, feature flags, all in PostgreSQL. 3⭐.

@postgres
PostgreSQL Pro | Database Mastery pinned «🔐 Complete Caching System — Replace Redis With PostgreSQL What's inside: 📦 COMPLETE SYSTEM (3 ⭐) 1. KEY-VALUE CACHE - UNLOGGED cache table with TTL - GET/SET/DELETE functions (same mental model as Redis) - Batch get/set - Auto-cleanup of expired…»
🤝 When you actually need Redis. No dogma.

PostgreSQL caching works great for most solo dev apps. But Redis exists for a reason. Here's where the line is.

POSTGRESQL CACHE IS ENOUGH WHEN:

✅ You cache hundreds to low thousands of keys
✅ Cache reads happen tens of times per second, not thousands
✅ TTLs are minutes to hours (not sub-second)
✅ You want cached data in your backups
✅ You value simplicity over raw speed

CONSIDER REDIS WHEN:

⚠️ You need sub-millisecond reads at massive scale
PostgreSQL cache: ~1-5ms reads
Redis: ~0.1-0.5ms reads
Does your app notice the difference? Probably not.

⚠️ You're doing pub/sub at high volume
PostgreSQL LISTEN/NOTIFY works for moderate use
Redis pub/sub handles millions of messages/second

⚠️ You need sorted sets, HyperLogLog, streams
Redis has specialized data structures
PostgreSQL can approximate most of them but it's more work

⚠️ You're past 100K cache reads per second
At this scale PostgreSQL connections become the bottleneck
(You probably have bigger problems to solve first)

THE HONEST BENCHMARK:

For a typical SaaS with <10K users:

                 PostgreSQL   Redis
Session lookup:  2ms          0.3ms
Cache read:      1-3ms        0.1-0.5ms
Cache write:     2-5ms        0.2-1ms
Feature flag:    1ms          0.2ms

Your API response time is 50-200ms.
The 1-2ms difference in cache reads is noise.

THE DECISION:

Are you a solo dev or small team?
→ PostgreSQL. Every time.

Processing 10K+ requests/second with sub-ms latency needs?
→ Add Redis for the hot path only. Keep PostgreSQL for everything else.

Building the next Twitter?
→ You're not reading Telegram channels. You have a team.

Start without Redis. Add it when you have the metrics proving you need it.
Not when a tutorial told you to.

What's your experience? Anyone actually hit PostgreSQL cache limits? 👇

@postgres
💀 The scariest command in production:

ALTER TABLE users ...

Every solo dev knows the feeling. You need to change your schema. Add a column. Rename a field. Drop a table. And your app is live. Users are active.

One wrong migration and:
- App crashes
- Data disappears
- Users see errors
- You're rolling back at midnight

It doesn't have to be like this.

PostgreSQL has tools that let you change your schema while your app is running. No downtime. No locked tables. No panic.

This week:

📅 Tuesday — Safe vs dangerous operations (what locks what)
📅 Wednesday — 💰 Zero-downtime migration toolkit (3⭐)
📅 Thursday — Rollback strategies (undo without losing data)
📅 Friday — Check-in

If you deploy to production, this is your week.

What's your worst migration horror story? 👇

@postgres
🔒 Some ALTER TABLE commands lock your entire table. Some don't. Know the difference.

SAFE — No lock (or very brief lock):

-- Add column with no default ✅
ALTER TABLE users ADD COLUMN bio TEXT;

-- Add column with DEFAULT (PostgreSQL 11+) ✅
ALTER TABLE users ADD COLUMN is_active BOOLEAN DEFAULT true;
-- Instant. PG stores the default in the catalog, doesn't rewrite rows.

-- Create index without blocking writes ✅
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
-- Takes longer but doesn't lock the table.

-- Add a CHECK constraint without validation ✅
ALTER TABLE users ADD CONSTRAINT chk_age CHECK (age > 0) NOT VALID;
-- Applies to new rows only. Validate later.

-- Validate constraint separately ✅
ALTER TABLE users VALIDATE CONSTRAINT chk_age;
-- Scans existing rows but only takes a lightweight lock.

DANGEROUS — Locks the table (blocks reads AND writes):

-- Add column with volatile default ❌
ALTER TABLE users ADD COLUMN token TEXT DEFAULT gen_random_uuid()::TEXT;
-- Rewrites every row. Table locked until done.

-- Change column type ❌
ALTER TABLE users ALTER COLUMN age TYPE BIGINT;
-- Full table rewrite. Minutes of downtime on large tables.

-- Add NOT NULL to existing column (without default) ❌
ALTER TABLE users ALTER COLUMN name SET NOT NULL;
-- Scans every row to verify. Long lock on big tables.

-- Create index without CONCURRENTLY ❌
CREATE INDEX idx_users_name ON users(name);
-- Locks writes until the index is built.

THE RULE:

If it rewrites data or scans every row → it locks.
If it only changes metadata → it's instant.

When in doubt, test on a copy of your production data with the same row count. If it takes more than a few seconds, find the safe alternative.
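
One example of a safe alternative, for the NOT NULL case above (a sketch; the constraint name is mine, and the shortcut in step 3 needs PostgreSQL 12+):

-- 1. Add the rule without checking existing rows (instant)
ALTER TABLE users ADD CONSTRAINT users_name_not_null CHECK (name IS NOT NULL) NOT VALID;
-- 2. Validate separately (scans rows, but doesn't block writes)
ALTER TABLE users VALIDATE CONSTRAINT users_name_not_null;
-- 3. PostgreSQL 12+ sees the validated constraint and sets NOT NULL without a scan
ALTER TABLE users ALTER COLUMN name SET NOT NULL;
-- 4. Optional cleanup
ALTER TABLE users DROP CONSTRAINT users_name_not_null;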

Tomorrow: the complete migration system. Versioned, rollback-safe, zero-downtime patterns for every dangerous operation. 3⭐.

@postgres
↩️ Your migration broke something. Here's how to undo it.

Three levels of rollback, from easy to nuclear.

LEVEL 1: SCHEMA ROLLBACK

You added a column that breaks things. Just drop it.

-- You ran:
ALTER TABLE users ADD COLUMN middle_name TEXT;

-- Undo:
ALTER TABLE users DROP COLUMN middle_name;

Simple. No data loss (the column was new and empty anyway).

Works for: new columns, new indexes, new constraints, new tables.

LEVEL 2: DATA ROLLBACK

You ran an UPDATE that changed data you shouldn't have.

The trick: always create a backup column before modifying data.

-- Before migration:
ALTER TABLE users ADD COLUMN _email_backup TEXT;
UPDATE users SET _email_backup = email;

-- Run your migration:
UPDATE users SET email = lower(email);

-- Oh no, it broke something. Undo:
UPDATE users SET email = _email_backup;
ALTER TABLE users DROP COLUMN _email_backup;

Clunky? Yes. Saves you at 2 AM? Also yes.

LEVEL 3: POINT-IN-TIME RECOVERY

Everything is broken. The migration corrupted data across multiple tables. You need to go back in time.

If you set up WAL archiving (Week 6):

-- 1. Note the exact time BEFORE you ran the migration
-- 2. If it all goes wrong:

./pitr_restore.sh "2026-03-18 14:30:00"

-- Your entire database is back to that moment.

THE REAL LESSON:

Before running any migration on production:

1. Note the current time
2. Run a fresh backup: ./pg_backup_complete.sh dump
3. Test the migration on a copy first
4. Have the rollback script written BEFORE you run the migration
5. Then run it

5 minutes of prep. Saves hours of panic.
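
One more safety net: most DDL in PostgreSQL is transactional, so you can dry-run a migration and throw it away (a sketch; CREATE INDEX CONCURRENTLY is one of the few commands that can't run inside a transaction):

BEGIN;
ALTER TABLE users ADD COLUMN middle_name TEXT;
-- poke around, run your test queries...
ROLLBACK;  -- as if it never happened (COMMIT when you mean it)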

How do you handle migration rollbacks? 👇

@postgres