PostgreSQL Pro | Database Mastery
๐Ÿ˜ PostgreSQL Mastery Hub

๐ŸŽฏ What you get:
- Daily optimization tips
- Performance guides
- Real-world solutions
- Query debugging help
- Production best practices

๐Ÿ“ˆ Join 500+ developers improving their PostgreSQL skills
๐Ÿค When you actually need Redis. No dogma.

PostgreSQL caching works great for most solo dev apps. But Redis exists for a reason. Here's where the line is.

POSTGRESQL CACHE IS ENOUGH WHEN:

✅ You cache hundreds to low thousands of keys
✅ Cache reads happen tens of times per second, not thousands
✅ TTLs are minutes to hours (not sub-second)
✅ You want cached data in your backups
✅ You value simplicity over raw speed
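
For reference, here's the whole pattern in its simplest form. A minimal sketch; the table and key names are illustrative:

-- UNLOGGED skips WAL: faster writes, contents lost on crash (fine for a cache)
CREATE UNLOGGED TABLE cache (
  key TEXT PRIMARY KEY,
  value JSONB NOT NULL,
  expires_at TIMESTAMPTZ NOT NULL
);

-- Write (upsert):
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42:profile', '{"name": "Ada"}', now() + interval '10 minutes')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Read (skip expired rows; delete them lazily or from a cron job):
SELECT value FROM cache
WHERE key = 'user:42:profile' AND expires_at > now();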

CONSIDER REDIS WHEN:

โš ๏ธ You need sub-millisecond reads at massive scale
PostgreSQL cache: ~1-5ms reads
Redis: ~0.1-0.5ms reads
Does your app notice the difference? Probably not.

โš ๏ธ You're doing pub/sub at high volume
PostgreSQL LISTEN/NOTIFY works for moderate use
Redis pub/sub handles millions of messages/second

โš ๏ธ You need sorted sets, HyperLogLog, streams
Redis has specialized data structures
PostgreSQL can approximate most of them but it's more work

โš ๏ธ You're past 100K cache reads per second
At this scale PostgreSQL connections become the bottleneck
(You probably have bigger problems to solve first)
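
The LISTEN/NOTIFY pattern mentioned above is genuinely tiny. A minimal sketch; the channel name is illustrative:

-- Session A (subscriber):
LISTEN order_events;

-- Session B (publisher):
NOTIFY order_events, 'order 1234 shipped';
-- or from a trigger or app code:
SELECT pg_notify('order_events', 'order 1234 shipped');

-- Caveats: payloads are capped around 8KB, nothing is persisted,
-- and disconnected listeners miss messages. That's the gap Redis
-- (or a plain queue table) fills at high volume.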

THE HONEST BENCHMARK:

For a typical SaaS with <10K users:

                 PostgreSQL   Redis
Session lookup:  2ms          0.3ms
Cache read:      1-3ms        0.1-0.5ms
Cache write:     2-5ms        0.2-1ms
Feature flag:    1ms          0.2ms

Your API response time is 50-200ms.
The 1-2ms difference in cache reads is noise.

THE DECISION:

Are you a solo dev or small team?
→ PostgreSQL. Every time.

Processing 10K+ requests/second with sub-ms latency needs?
→ Add Redis for the hot path only. Keep PostgreSQL for everything else.

Building the next Twitter?
→ You're not reading Telegram channels. You have a team.

Start without Redis. Add it when you have the metrics proving you need it.
Not when a tutorial told you to.

What's your experience? Anyone actually hit PostgreSQL cache limits? 👇

@postgres
💀 The scariest command in production:

ALTER TABLE users ...

Every solo dev knows the feeling. You need to change your schema. Add a column. Rename a field. Drop a table. And your app is live. Users are active.

One wrong migration and:
- App crashes
- Data disappears
- Users see errors
- You're rolling back at midnight

It doesn't have to be like this.

PostgreSQL has tools that let you change your schema while your app is running. No downtime. No locked tables. No panic.

This week:

📅 Tuesday — Safe vs dangerous operations (what locks what)
📅 Wednesday — 💰 Zero-downtime migration toolkit (3⭐)
📅 Thursday — Rollback strategies (undo without losing data)
📅 Friday — Check-in

If you deploy to production, this is your week.

What's your worst migration horror story? 👇

@postgres
🔒 Some ALTER TABLE commands lock your entire table. Some don't. Know the difference.

SAFE — No lock (or very brief lock):

-- Add column with no default ✅
ALTER TABLE users ADD COLUMN bio TEXT;

-- Add column with DEFAULT (PostgreSQL 11+) ✅
ALTER TABLE users ADD COLUMN is_active BOOLEAN DEFAULT true;
-- Instant. PG stores the default in the catalog, doesn't rewrite rows.

-- Create index without blocking writes ✅
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
-- Takes longer but doesn't lock the table.

-- Add a CHECK constraint without validation ✅
ALTER TABLE users ADD CONSTRAINT chk_age CHECK (age > 0) NOT VALID;
-- Applies to new rows only. Validate later.

-- Validate constraint separately ✅
ALTER TABLE users VALIDATE CONSTRAINT chk_age;
-- Scans existing rows but only takes a lightweight lock.

DANGEROUS — Locks the table (blocks reads AND writes):

-- Add column with volatile default ❌
ALTER TABLE users ADD COLUMN token TEXT DEFAULT gen_random_uuid()::TEXT;
-- Rewrites every row. Table locked until done.

-- Change column type ❌
ALTER TABLE users ALTER COLUMN age TYPE BIGINT;
-- Full table rewrite. Minutes of downtime on large tables.

-- Add NOT NULL to an existing column ❌
ALTER TABLE users ALTER COLUMN name SET NOT NULL;
-- Scans every row to verify. Long lock on big tables.

-- Create index without CONCURRENTLY ❌
CREATE INDEX idx_users_name ON users(name);
-- Locks writes until the index is built.

THE RULE:

If it rewrites data or scans every row → it locks.
If it only changes metadata → it's instant.

When in doubt, test on a copy of your production data with the same row count. If it takes more than a few seconds, find the safe alternative.
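
One example of a safe alternative, for the SET NOT NULL case above: on PostgreSQL 12+ you can prove the column is non-null with a constraint first, so the final step skips the scan. A minimal sketch; the constraint name is illustrative:

-- 1. Add the check without validating (instant, applies to new rows only):
ALTER TABLE users ADD CONSTRAINT users_name_not_null
  CHECK (name IS NOT NULL) NOT VALID;

-- 2. Validate separately (scans rows, but only a lightweight lock):
ALTER TABLE users VALIDATE CONSTRAINT users_name_not_null;

-- 3. SET NOT NULL is now instant; the validated constraint proves it:
ALTER TABLE users ALTER COLUMN name SET NOT NULL;

-- 4. The check is redundant; drop it:
ALTER TABLE users DROP CONSTRAINT users_name_not_null;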

Tomorrow: the complete migration system. Versioned, rollback-safe, zero-downtime patterns for every dangerous operation. 3⭐.

@postgres
โ†ฉ๏ธ Your migration broke something. Here's how to undo it.

Three levels of rollback, from easy to nuclear.

LEVEL 1: SCHEMA ROLLBACK

You added a column that breaks things. Just drop it.

-- You ran:
ALTER TABLE users ADD COLUMN middle_name TEXT;

-- Undo:
ALTER TABLE users DROP COLUMN middle_name;

Simple. No data loss (the column was new and empty anyway).

Works for: new columns, new indexes, new constraints, new tables.
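
Bonus: most DDL in PostgreSQL is transactional, so you can rehearse a Level 1 change and its rollback in one shot. A minimal sketch:

BEGIN;
ALTER TABLE users ADD COLUMN middle_name TEXT;
-- poke at the schema, run a quick smoke test...
ROLLBACK;  -- and the column never existed
-- (Exception: CREATE INDEX CONCURRENTLY can't run inside a transaction.)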

LEVEL 2: DATA ROLLBACK

You ran an UPDATE that changed data you shouldn't have.

The trick: always create a backup column before modifying data.

-- Before migration:
ALTER TABLE users ADD COLUMN _email_backup TEXT;
UPDATE users SET _email_backup = email;

-- Run your migration:
UPDATE users SET email = lower(email);

-- Oh no, it broke something. Undo:
UPDATE users SET email = _email_backup;
ALTER TABLE users DROP COLUMN _email_backup;

Clunky? Yes. Saves you at 2 AM? Also yes.

LEVEL 3: POINT-IN-TIME RECOVERY

Everything is broken. The migration corrupted data across multiple tables. You need to go back in time.

If you set up WAL archiving (Week 6):

-- 1. Note the exact time BEFORE you ran the migration
-- 2. If it all goes wrong:

./pitr_restore.sh "2026-03-18 14:30:00"

-- Your entire database is back to that moment.
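
If you're building that script yourself, the core of a modern PITR restore (PostgreSQL 12+) is a restored base backup plus two settings and a marker file. A sketch of the moving parts; the archive path is illustrative:

-- After restoring the base backup into the data directory,
-- set in postgresql.conf:
--   restore_command = 'cp /var/lib/pg_archive/%f %p'
--   recovery_target_time = '2026-03-18 14:30:00'
-- Then create an empty recovery.signal file in the data directory
-- and start PostgreSQL. It replays WAL up to that timestamp and stops.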

THE REAL LESSON:

Before running any migration on production:

1. Note the current time
2. Run a fresh backup: ./pg_backup_complete.sh dump
3. Test the migration on a copy first (see the sketch after this list)
4. Have the rollback script written BEFORE you run the up migration
5. Then run it
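
For step 3, the fastest way to get a copy on the same server. A minimal sketch, assuming your database is named app:

-- Clone for a dry run (fails if anyone is connected to app):
CREATE DATABASE app_migration_test TEMPLATE app;

-- Run the migration against the clone, time it, then clean up:
DROP DATABASE app_migration_test;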

5 minutes of prep. Saves hours of panic.

How do you handle migration rollbacks? 👇

@postgres
๐Ÿ” Your database is talking to you. You're just not listening.

PostgreSQL collects stats on everything:
- Which queries are slow
- Which tables are bloated
- Which indexes are never used
- How much cache you're hitting
- Where connections are going

Most solo devs never look at any of it. Then wonder why things are slow.

Paid monitoring tools want $50-500/month to show you this data. But PostgreSQL already has it. You just need to know where to look.

This week:

📅 Tuesday — The 5 views every dev should check weekly
📅 Wednesday — 💰 Complete monitoring dashboard (3⭐)
📅 Thursday — Finding and fixing slow queries
📅 Friday — Check-in

No Datadog. No pganalyze. No external agent. Just SQL.

When was the last time you checked your database stats? 👇

@postgres
📊 5 queries. Run them once a week. Know exactly what's happening.

QUERY 1: TABLE BLOAT AND SIZE

SELECT
  relname as table_name,
  pg_size_pretty(pg_total_relation_size(relid)) as total_size,
  n_live_tup as live_rows,
  n_dead_tup as dead_rows,
  CASE WHEN n_live_tup > 0
       THEN round(100.0 * n_dead_tup / n_live_tup, 1)
       ELSE 0 END as dead_pct
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;

-- dead_pct > 20%? Run VACUUM ANALYZE on that table.

QUERY 2: UNUSED INDEXES (wasting disk and slowing writes)

SELECT
  indexrelname as index_name,
  relname as table_name,
  pg_size_pretty(pg_relation_size(indexrelid)) as index_size,
  idx_scan as times_used
FROM pg_stat_user_indexes
WHERE idx_scan = 0
  AND pg_relation_size(indexrelid) > 1048576
ORDER BY pg_relation_size(indexrelid) DESC;

-- times_used = 0 and it's >1MB? Probably safe to drop.
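
When you do drop one, use the non-blocking form. The index name here is illustrative:

-- Doesn't block writes while the index is removed
-- (can't run inside a transaction):
DROP INDEX CONCURRENTLY idx_users_legacy_lookup;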

QUERY 3: CACHE HIT RATIO

SELECT
  sum(heap_blks_hit) as cache_hits,
  sum(heap_blks_read) as disk_reads,
  round(100.0 * sum(heap_blks_hit) /
        nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2) as hit_ratio
FROM pg_statio_user_tables;

-- Below 95%? You need more shared_buffers or RAM.
-- Above 99%? Your database fits in memory. Nice.
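
If the ratio is low, the usual first lever is shared_buffers, commonly set around 25% of RAM. The value below is an assumption for a machine with ~8GB, not a universal setting:

ALTER SYSTEM SET shared_buffers = '2GB';
-- Requires a restart to take effect. Then re-check the hit ratio.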

QUERY 4: CONNECTION STATUS

SELECT
  state,
  count(*) as connections,
  round(100.0 * count(*) /
        current_setting('max_connections')::INT, 1) as pct_of_max
FROM pg_stat_activity
GROUP BY state
ORDER BY connections DESC;

-- idle connections > 50%? You need connection pooling.
-- total > 80% of max? Danger zone.
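
As a stopgap while you set up pooling, you can clear sessions that have sat idle for a while. A minimal sketch; the 10-minute threshold is an arbitrary choice:

-- Terminate connections idle for more than 10 minutes (not our own):
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - interval '10 minutes'
  AND pid <> pg_backend_pid();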

QUERY 5: SLOWEST QUERIES (requires pg_stat_statements)

SELECT
  round(mean_exec_time::numeric, 2) as avg_ms,
  calls,
  round(total_exec_time::numeric, 0) as total_ms,
  left(query, 80) as query_preview
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- If pg_stat_statements isn't enabled:
-- ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- Restart PostgreSQL, then: CREATE EXTENSION pg_stat_statements;
-- Worth it.

5 queries. 30 seconds. You now know more about your database than most teams with paid monitoring.

Tomorrow: the complete dashboard with alerting, health scores, and weekly reports. 3⭐.

@postgres
โค2
This media is not supported in the widget
VIEW IN TELEGRAM
PostgreSQL Pro | Database Mastery pinned ยซ๐Ÿ” Complete Monitoring Dashboard โ€” See Everything, Pay Nothing What's inside: ๐Ÿ“ฆ COMPLETE SYSTEM (3 โญ) 1. HEALTH CHECK VIEW - Single query returns overall database health score (0-100) - Cache hit ratio, connection usage, bloat, replication lag โ€ฆยป
๐ŸŒ Finding and fixing slow queries. The 80/20 approach.

Step 1: Find the worst offenders.

-- Enable if not already:
-- ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- Restart PostgreSQL, then: CREATE EXTENSION pg_stat_statements;

-- Top 5 by total time (these hurt your server the most)
SELECT
  round(total_exec_time::numeric, 0) as total_ms,
  calls,
  round(mean_exec_time::numeric, 2) as avg_ms,
  left(query, 100) as query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;

Step 2: Understand WHY it's slow.

-- Copy the slow query and run:
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT ... your slow query here ...;

-- What to look for:
-- Seq Scan on large table → needs an index
-- Nested Loop with high row count → join strategy problem
-- Sort with external merge → needs more work_mem
-- Buffers: shared read (high) → data not in cache
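
For the external-merge case, you can test a bigger work_mem in your session alone before touching global config. 64MB is an assumption; size it to the sort EXPLAIN reports:

SET work_mem = '64MB';  -- session-only, resets on disconnect
-- Re-run EXPLAIN (ANALYZE, BUFFERS): the Sort node should switch
-- from "external merge" on disk to an in-memory "quicksort".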

Step 3: Common fixes.

FIX 1 — MISSING INDEX
-- Seq Scan on users WHERE email = ?
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
-- Seq Scan → Index Scan. 500ms → 2ms.

FIX 2 — MISSING COMPOSITE INDEX
-- Seq Scan on orders WHERE user_id = ? AND status = ?
CREATE INDEX CONCURRENTLY idx_orders_user_status
  ON orders(user_id, status);
-- Put the equality column first.

FIX 3 — N+1 QUERY PATTERN
-- Your app runs SELECT * FROM orders WHERE user_id = ?
-- once per user in a loop. 100 users = 100 queries.
-- Fix in your app:
SELECT * FROM orders WHERE user_id = ANY($1::UUID[]);
-- One query. Pass an array of user IDs.

FIX 4 — COUNT(*) ON LARGE TABLE
-- SELECT count(*) FROM orders; scans the entire table.
-- Approximate count (instant):
SELECT n_live_tup FROM pg_stat_user_tables
WHERE relname = 'orders';
-- Good enough for dashboards.

FIX 5 — OVER-SELECTING COLUMNS
-- SELECT * FROM users; fetches every column, including blobs.
-- Be specific:
SELECT id, email, name FROM users;
-- Less data transferred, faster query.

Most performance problems are a missing index or an N+1 pattern. Fix those two and you've solved 80% of slow queries.

What's your slowest query? 👇

@postgres
โค3
📊 Week 10 done. Migrations without fear.

This week:

✅ Monday — Why ALTER TABLE is terrifying (and doesn't have to be)
✅ Tuesday — Safe vs dangerous operations (know before you run)
✅ Wednesday — 💰 Complete migration system (3⭐)
✅ Thursday — Three levels of rollback

The takeaway: every migration should have a written rollback plan before you run it. Takes 5 minutes. Saves you from the worst night of your career.

---


10 WEEKS. THE FULL STACK.

Auth → Jobs → Performance → Search → Real-time → Backups → Multi-tenancy → File storage → Caching → Migrations

That's everything you need to build, run, and evolve a SaaS. All PostgreSQL.

If you joined recently: every week's free content is still here. Scroll back and catch up.

---

NEXT WEEK

Taking suggestions. What's missing from your PostgreSQL toolkit?

🔴 Email & notifications (triggers, templates, sending from PG)
🟡 Monitoring & observability (pg_stat, slow queries, dashboards)
🟢 Data import/export (CSV, JSON, bulk operations)
🔵 Testing & CI (test your database layer properly)

Or something else entirely. Tell me 👇

@postgres
โค1๐Ÿ‘1๐Ÿ”ฅ1