PostgreSQL Pro | Database Mastery pinned «📌 Complete File Storage System — No S3 Required. What's inside: 📦 COMPLETE SYSTEM (3 ⭐) 1. FILE STORAGE SCHEMA - Files table with metadata + binary data - Folder/directory support - File versioning (keep history) - Soft delete with retention…»
⚠️ When NOT to store files in PostgreSQL. Honest take.
PostgreSQL file storage works great for most solo dev apps. But it's not always the right call. Here's where the line is.
KEEP FILES IN POSTGRESQL WHEN:
✅ Files are mostly small (<10 MB each)
Avatars, PDFs, CSVs, small images
✅ Files are tightly coupled to data
Invoice PDF belongs to invoice row
Deleting a user should delete their files
✅ You need transactional consistency
File upload + database insert succeed or fail together
No orphaned files, no missing references
✅ Your total file storage is under 50 GB
Fits comfortably on a standard VPS
✅ You want one backup for everything
Database backup = data backup + file backup
MOVE FILES OUT WHEN:
❌ You're serving files to many users simultaneously
PostgreSQL connections are expensive for streaming
A CDN serves static files much more efficiently
❌ Files are large (video, high-res images, datasets)
100 MB+ files bloat your database and slow backups
❌ Total storage exceeds 100 GB
Database size affects backup time and restore speed
❌ You need edge caching / global CDN
PostgreSQL is one server, not a global network
THE HYBRID APPROACH (best of both):
-- Store metadata in PostgreSQL
CREATE TABLE files (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL,
    filename TEXT NOT NULL,
    mime_type TEXT NOT NULL,
    size_bytes BIGINT NOT NULL,
    storage_type TEXT NOT NULL DEFAULT 'db',
    -- For DB storage:
    data BYTEA,
    -- For external storage:
    external_url TEXT,
    uploaded_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Small files: store in data column
-- Large files: upload to S3, store URL in external_url
-- Your app checks storage_type and serves accordingly
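The storage_type dispatch is plain app-side logic. A minimal Python sketch (the row shape follows the schema above; the function names and the 10 MB threshold are illustrative assumptions, not part of the schema):

```python
def serve_file(row):
    """Decide how to serve a row from the hybrid files table.

    Returns a (mode, payload) pair:
      - ('inline', bytes)  -> stream the bytea data from the database
      - ('redirect', url)  -> send the client to external storage
    """
    if row["storage_type"] == "db":
        return ("inline", row["data"])
    return ("redirect", row["external_url"])


def choose_storage(size_bytes, threshold=10 * 1024 * 1024):
    """Route uploads: small files into the data column, large ones external."""
    return "db" if size_bytes <= threshold else "external"
```

The same dispatch works in any language; the point is that the database row carries everything the app needs to pick a path.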
Start with everything in PostgreSQL.
Move to hybrid when you have a reason to.
Not before.
Most solo dev apps never reach the point where PostgreSQL file storage becomes a problem. Don't optimize for scale you don't have.
What's your file storage situation? 👇
@postgres
🎉 Week 8 done. Files without S3 — covered.
This week:
✅ Monday — Why you probably don't need S3
✅ Tuesday — bytea vs Large Objects
✅ Wednesday — 💰 Complete file storage system (3⭐)
✅ Thursday — When to move files out (honest limits)
The takeaway: store files in PostgreSQL until you have a specific reason not to. For most solo dev SaaS apps, that reason never comes.
---
8 WEEKS IN
We've built a lot together:
Auth → Jobs → Performance → Search → Real-time → Backups → Multi-tenancy → File storage
That's a complete SaaS backend. All PostgreSQL. No external services.
---
WHAT'S NEXT?
We're entering month 3. A few directions we could go:
🔴 Caching (replace Redis for sessions and app cache)
🟡 Database migrations (zero-downtime, rollback-safe)
🟢 Email & notifications (send from PostgreSQL, template engine)
🔵 Monitoring & observability (know what your database is doing)
Vote below 👇
Or tell me what you're struggling with right now. That's what we'll build.
See you Monday.
@postgres
⚡ You probably don't need Redis.
I know. Every tutorial says "add Redis for caching." Every boilerplate ships with it. Every "production-ready" checklist includes it.
But if you're a solo dev or small team, Redis means:
- Another service to run and monitor
- Another thing that crashes at 3 AM
- Another connection string to manage
- $10-30/month on managed hosting (Upstash, Redis Cloud)
- Session data in one place, everything else in another
PostgreSQL can do it all:
✅ Session storage
✅ Application cache (API responses, computed values)
✅ Rate limiting counters
✅ Feature flags
✅ Temporary data with auto-expiry
And your data stays in one place. One backup. One connection.
This week:
📅 Tuesday — How PostgreSQL caching works (UNLOGGED tables, materialized views)
📅 Wednesday — 💰 Complete caching system (3⭐)
📅 Thursday — When you actually need Redis (honest take)
📅 Friday — Check-in
Let's kill another unnecessary service.
What are you using Redis for right now? 👇
@postgres
🧰 Three PostgreSQL caching tools you already have.
TOOL 1: UNLOGGED TABLES
Regular tables write to WAL (write-ahead log) for crash safety. UNLOGGED tables skip it, so writes are much faster. The trade-off: the table is truncated after a crash — which is fine for a cache.
CREATE UNLOGGED TABLE cache (
    key TEXT PRIMARY KEY,
    value JSONB NOT NULL,
    expires_at TIMESTAMPTZ NOT NULL DEFAULT now() + interval '1 hour',
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_cache_expires ON cache(expires_at);
-- Write cache
INSERT INTO cache (key, value, expires_at)
VALUES ('user:123:profile', '{"name":"John"}', now() + interval '15 minutes')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;
-- Read cache (only if not expired)
SELECT value FROM cache
WHERE key = 'user:123:profile' AND expires_at > now();
-- That's it. That's Redis GET/SET with TTL.
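The same GET/SET-with-TTL mental model, sketched as an in-memory Python analogue (illustration only — in the pattern above, this dict is the UNLOGGED table and the methods are the two queries shown):

```python
import time


class TTLCache:
    """Dict-based analogue of the UNLOGGED cache table above."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds=900):
        # Mirrors INSERT ... ON CONFLICT (key) DO UPDATE
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        # Mirrors SELECT ... WHERE expires_at > now()
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at <= time.monotonic():
            del self._store[key]  # lazy expiry, like a periodic cleanup job
            return None
        return value
```

Expired rows in the SQL version pile up until deleted; a small scheduled `DELETE FROM cache WHERE expires_at < now()` plays the role of the lazy delete here.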
TOOL 2: MATERIALIZED VIEWS
Pre-compute expensive queries. Refresh on schedule.
CREATE MATERIALIZED VIEW dashboard_stats AS
SELECT
count(*) as total_users,
count(*) FILTER (WHERE created_at > now() - interval '7 days') as new_this_week,
count(*) FILTER (WHERE last_login > now() - interval '24 hours') as active_today
FROM users;
-- Refresh (takes milliseconds for reasonable tables)
REFRESH MATERIALIZED VIEW CONCURRENTLY dashboard_stats;
-- Query is instant — reads pre-computed results
SELECT * FROM dashboard_stats;
TOOL 3: GENERATED COLUMNS
Cache computed values directly in the row. Updated automatically.
ALTER TABLE products ADD COLUMN search_text TEXT
    GENERATED ALWAYS AS (
        name || ' ' || coalesce(description, '') || ' ' || coalesce(category, '')
    ) STORED;
-- No cache invalidation needed. PostgreSQL updates it on every write.
Tomorrow: the complete system — sessions, API cache, rate limiting, feature flags, all in PostgreSQL. 3⭐.
@postgres
PostgreSQL Pro | Database Mastery pinned «📌 Complete Caching System — Replace Redis With PostgreSQL. What's inside: 📦 COMPLETE SYSTEM (3 ⭐) 1. KEY-VALUE CACHE - UNLOGGED cache table with TTL - GET/SET/DELETE functions (same mental model as Redis) - Batch get/set - Auto-cleanup of expired…»
🤔 When you actually need Redis. No dogma.
PostgreSQL caching works great for most solo dev apps. But Redis exists for a reason. Here's where the line is.
POSTGRESQL CACHE IS ENOUGH WHEN:
✅ You cache hundreds to low thousands of keys
✅ Cache reads happen tens of times per second, not thousands
✅ TTLs are minutes to hours (not sub-second)
✅ You want cached data in your backups
✅ You value simplicity over raw speed
CONSIDER REDIS WHEN:
⚠️ You need sub-millisecond reads at massive scale
PostgreSQL cache: ~1-5ms reads
Redis: ~0.1-0.5ms reads
Does your app notice the difference? Probably not.
⚠️ You're doing pub/sub at high volume
PostgreSQL LISTEN/NOTIFY works for moderate use
Redis pub/sub handles millions of messages/second
⚠️ You need sorted sets, HyperLogLog, streams
Redis has specialized data structures
PostgreSQL can approximate most of them, but it's more work
⚠️ You're past 100K cache reads per second
At this scale PostgreSQL connections become the bottleneck
(You probably have bigger problems to solve first)
THE HONEST BENCHMARK:
For a typical SaaS with <10K users:
                 PostgreSQL   Redis
Session lookup:  2ms          0.3ms
Cache read:      1-3ms        0.1-0.5ms
Cache write:     2-5ms        0.2-1ms
Feature flag:    1ms          0.2ms
Your API response time is 50-200ms.
The 1-2ms difference in cache reads is noise.
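To put a number on "noise", a quick back-of-envelope (the latency figures are the rough estimates from the table above, not measurements; the three-reads-per-request figure is an assumption):

```python
def cache_overhead_pct(response_ms, pg_cache_ms, redis_cache_ms, reads_per_request=3):
    """Extra response time from PG cache vs Redis, as a % of the whole response."""
    extra_ms = (pg_cache_ms - redis_cache_ms) * reads_per_request
    return 100.0 * extra_ms / response_ms

# 3 cache reads per request, PG at 2ms vs Redis at 0.3ms,
# on a 100ms API response: ~5% of response time.
overhead = cache_overhead_pct(100, 2.0, 0.3)
```

A few percent of a response users already wait 100ms for is rarely worth running a second service.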
THE DECISION:
Are you a solo dev or small team?
→ PostgreSQL. Every time.
Processing 10K+ requests/second with sub-ms latency needs?
→ Add Redis for the hot path only. Keep PostgreSQL for everything else.
Building the next Twitter?
→ You're not reading Telegram channels. You have a team.
Start without Redis. Add it when you have the metrics proving you need it.
Not when a tutorial told you to.
What's your experience? Anyone actually hit PostgreSQL cache limits? 👇
@postgres
😱 The scariest command in production:
ALTER TABLE users ...
Every solo dev knows the feeling. You need to change your schema. Add a column. Rename a field. Drop a table. And your app is live. Users are active.
One wrong migration and:
- App crashes
- Data disappears
- Users see errors
- You're rolling back at midnight
It doesn't have to be like this.
PostgreSQL has tools that let you change your schema while your app is running. No downtime. No locked tables. No panic.
This week:
📅 Tuesday — Safe vs dangerous operations (what locks what)
📅 Wednesday — 💰 Zero-downtime migration toolkit (3⭐)
📅 Thursday — Rollback strategies (undo without losing data)
📅 Friday — Check-in
If you deploy to production, this is your week.
What's your worst migration horror story? 👇
@postgres
🔒 Some ALTER TABLE commands lock your entire table. Some don't. Know the difference.
SAFE — No lock (or very brief lock):
-- Add column with no default ✅
ALTER TABLE users ADD COLUMN bio TEXT;
-- Add column with a constant DEFAULT (PostgreSQL 11+) ✅
ALTER TABLE users ADD COLUMN is_active BOOLEAN DEFAULT true;
-- Instant. PG stores the default in the catalog, doesn't rewrite rows.
-- Create index without blocking writes ✅
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
-- Takes longer but doesn't lock the table.
-- Add a CHECK constraint without validation ✅
ALTER TABLE users ADD CONSTRAINT chk_age CHECK (age > 0) NOT VALID;
-- Applies to new rows only. Validate later.
-- Validate constraint separately ✅
ALTER TABLE users VALIDATE CONSTRAINT chk_age;
-- Scans existing rows but only takes a lightweight lock.
DANGEROUS — Locks the table (blocks reads AND writes):
-- Add column with volatile default ❌
ALTER TABLE users ADD COLUMN token TEXT DEFAULT gen_random_uuid()::TEXT;
-- Rewrites every row. Table locked until done.
-- Change column type ❌
ALTER TABLE users ALTER COLUMN age TYPE BIGINT;
-- Full table rewrite. Minutes of downtime on large tables.
-- Add NOT NULL to an existing column ❌
ALTER TABLE users ALTER COLUMN name SET NOT NULL;
-- Scans every row to verify. Long lock on big tables.
-- Create index without CONCURRENTLY ❌
CREATE INDEX idx_users_name ON users(name);
-- Locks writes until the index is built.
THE RULE:
If it rewrites data or scans every row → it locks.
If it only changes metadata → it's instant.
When in doubt, test on a copy of your production data with the same row count. If it takes more than a few seconds, find the safe alternative.
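A pre-flight check in your migration tooling can flag the dangerous patterns before they run. A rough sketch (regex heuristics, not a SQL parser — the patterns simply encode this post's rules, and the function name is made up):

```python
import re

# (pattern, verdict) pairs derived from the safe/dangerous lists above.
# Order matters: more specific patterns come first.
RULES = [
    (r"CREATE\s+INDEX\s+CONCURRENTLY", "safe"),
    (r"CREATE\s+INDEX", "dangerous"),            # without CONCURRENTLY
    (r"ALTER\s+COLUMN\s+\w+\s+TYPE", "dangerous"),
    (r"SET\s+NOT\s+NULL", "dangerous"),
    (r"ADD\s+CONSTRAINT.*NOT\s+VALID", "safe"),
    (r"VALIDATE\s+CONSTRAINT", "safe"),
    (r"ADD\s+COLUMN.*DEFAULT\s+\w+\s*\(", "dangerous"),  # volatile default => rewrite
    (r"ADD\s+COLUMN", "safe"),                   # constant defaults are metadata-only on PG 11+
]


def classify(ddl):
    """Classify a DDL statement as safe/dangerous per the rules above."""
    for pattern, verdict in RULES:
        if re.search(pattern, ddl, re.IGNORECASE):
            return verdict
    return "unknown"  # when in doubt, test on a copy first
```

Heuristics like this catch the common mistakes; they don't replace testing on production-sized data.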
Tomorrow: the complete migration system. Versioned, rollback-safe, zero-downtime patterns for every dangerous operation. 3⭐.
@postgres
↩️ Your migration broke something. Here's how to undo it.
Three levels of rollback, from easy to nuclear.
LEVEL 1: SCHEMA ROLLBACK
You added a column that breaks things. Just drop it.
-- You ran:
ALTER TABLE users ADD COLUMN middle_name TEXT;
-- Undo:
ALTER TABLE users DROP COLUMN middle_name;
Simple. No data loss (the column was new and empty anyway).
Works for: new columns, new indexes, new constraints, new tables.
LEVEL 2: DATA ROLLBACK
You ran an UPDATE that changed data you shouldn't have.
The trick: always create a backup column before modifying data.
-- Before migration:
ALTER TABLE users ADD COLUMN _email_backup TEXT;
UPDATE users SET _email_backup = email;
-- Run your migration:
UPDATE users SET email = lower(email);
-- Oh no, it broke something. Undo:
UPDATE users SET email = _email_backup;
ALTER TABLE users DROP COLUMN _email_backup;
Clunky? Yes. Saves you at 2 AM? Also yes.
LEVEL 3: POINT-IN-TIME RECOVERY
Everything is broken. The migration corrupted data across multiple tables. You need to go back in time.
If you set up WAL archiving (Week 6):
-- 1. Note the exact time BEFORE you ran the migration
-- 2. If it all goes wrong:
./pitr_restore.sh "2026-03-18 14:30:00"
-- Your entire database is back to that moment.
THE REAL LESSON:
Before running any migration on production:
1. Note the current time
2. Run a fresh backup: ./pg_backup_complete.sh dump
3. Test the migration on a copy first
4. Have the rollback script written BEFORE you run the migration
5. Then run it
5 minutes of prep. Saves hours of panic.
How do you handle migration rollbacks? 👇
@postgres
📊 Your database is talking to you. You're just not listening.
PostgreSQL collects stats on everything:
- Which queries are slow
- Which tables are bloated
- Which indexes are never used
- How much cache you're hitting
- Where connections are going
Most solo devs never look at any of it. Then wonder why things are slow.
Paid monitoring tools want $50-500/month to show you this data. But PostgreSQL already has it. You just need to know where to look.
This week:
📅 Tuesday — The 5 views every dev should check weekly
📅 Wednesday — 💰 Complete monitoring dashboard (3⭐)
📅 Thursday — Finding and fixing slow queries
📅 Friday — Check-in
No Datadog. No pganalyze. No external agent. Just SQL.
When was the last time you checked your database stats? 👇
@postgres
📊 5 queries. Run them once a week. Know exactly what's happening.
QUERY 1: TABLE BLOAT AND SIZE
SELECT
relname as table_name,
pg_size_pretty(pg_total_relation_size(oid)) as total_size,
n_live_tup as live_rows,
n_dead_tup as dead_rows,
CASE WHEN n_live_tup > 0
THEN round(100.0 * n_dead_tup / n_live_tup, 1)
ELSE 0 END as dead_pct
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
-- dead_pct > 20%? Run VACUUM ANALYZE on that table.
QUERY 2: UNUSED INDEXES (wasting disk and slowing writes)
SELECT
indexrelname as index_name,
relname as table_name,
pg_size_pretty(pg_relation_size(indexrelid)) as index_size,
idx_scan as times_used
FROM pg_stat_user_indexes
WHERE idx_scan = 0
AND pg_relation_size(indexrelid) > 1048576
ORDER BY pg_relation_size(indexrelid) DESC;
-- times_used = 0 and it's >1MB? Probably safe to drop.
QUERY 3: CACHE HIT RATIO
SELECT
sum(heap_blks_hit) as cache_hits,
sum(heap_blks_read) as disk_reads,
round(100.0 * sum(heap_blks_hit) /
nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2) as hit_ratio
FROM pg_statio_user_tables;
-- Below 95%? You need more shared_buffers or RAM.
-- Above 99%? Your database fits in memory. Nice.
QUERY 4: CONNECTION STATUS
SELECT
state,
count(*) as connections,
round(100.0 * count(*) /
current_setting('max_connections')::INT, 1) as pct_of_max
FROM pg_stat_activity
GROUP BY state
ORDER BY connections DESC;
-- idle connections > 50%? You need connection pooling.
-- total > 80% of max? Danger zone.
QUERY 5: SLOWEST QUERIES (requires pg_stat_statements)
SELECT
round(mean_exec_time::numeric, 2) as avg_ms,
calls,
round(total_exec_time::numeric, 0) as total_ms,
left(query, 80) as query_preview
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
-- If pg_stat_statements isn't enabled:
-- ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- Restart PostgreSQL. Worth it.
5 queries. 30 seconds. You now know more about your database than most teams with paid monitoring.
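The threshold comments above fold naturally into one app-side health check you can run from cron. A sketch (the metric names and thresholds mirror the comments in the queries; how you collect the numbers — psql, a script, your app — is up to you):

```python
def health_report(metrics):
    """Evaluate the weekly-check thresholds from the five queries above.

    Expected keys (assumed collected from the queries shown):
      dead_pct, cache_hit_ratio, conn_pct_of_max, idle_conn_pct
    Returns a list of warning strings; empty list means all clear.
    """
    warnings = []
    if metrics["dead_pct"] > 20:
        warnings.append("bloat: run VACUUM ANALYZE")
    if metrics["cache_hit_ratio"] < 95:
        warnings.append("cache hit ratio low: more shared_buffers or RAM")
    if metrics["conn_pct_of_max"] > 80:
        warnings.append("connections near max_connections")
    if metrics["idle_conn_pct"] > 50:
        warnings.append("many idle connections: add pooling")
    return warnings
```

Pipe the output into whatever already pings you — email, a Telegram bot, a log line — and you have alerting without an agent.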
Tomorrow: the complete dashboard with alerting, health scores, and weekly reports. 3⭐.
@postgres
PostgreSQL Pro | Database Mastery pinned «📊 Complete Monitoring Dashboard — See Everything, Pay Nothing. What's inside: 📦 COMPLETE SYSTEM (3 ⭐) 1. HEALTH CHECK VIEW - Single query returns overall database health score (0-100) - Cache hit ratio, connection usage, bloat, replication lag…»
🔍 Finding and fixing slow queries. The 80/20 approach.
Step 1: Find the worst offenders.
-- Enable if not already:
-- ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- Restart PostgreSQL.
-- Top 5 by total time (these hurt your server the most)
SELECT
round(total_exec_time::numeric, 0) as total_ms,
calls,
round(mean_exec_time::numeric, 2) as avg_ms,
left(query, 100) as query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
Step 2: Understand WHY it's slow.
-- Copy the slow query and run:
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT ... your slow query here ...;
-- What to look for:
-- Seq Scan on large table β needs an index
-- Nested Loop with high row count β join strategy problem
-- Sort with external merge β needs more work_mem
-- Buffers: shared read (high) β data not in cache
Step 3: Common fixes.
FIX 1 β MISSING INDEX
-- Seq Scan on users WHERE email = ?
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
-- Seq Scan → Index Scan. 500ms → 2ms.
FIX 2 β MISSING COMPOSITE INDEX
-- Seq Scan on orders WHERE user_id = ? AND status = ?
CREATE INDEX CONCURRENTLY idx_orders_user_status
ON orders(user_id, status);
-- Put the equality column first.
FIX 3 β N+1 QUERY PATTERN
-- Your app runs SELECT * FROM orders WHERE user_id = ?
-- once per user in a loop. 100 users = 100 queries.
-- Fix in your app:
SELECT * FROM orders WHERE user_id = ANY($1::UUID[]);
-- One query. Pass an array of user IDs.
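On the app side, the fix is one round trip plus a regroup. A Python sketch (the commented-out `cur.execute` lines show psycopg-style parameter passing as an assumption; the grouping helper is the part that actually runs here):

```python
from collections import defaultdict

# Anti-pattern: one query per user.
#   for uid in user_ids:
#       cur.execute("SELECT * FROM orders WHERE user_id = %s", (uid,))
#
# Fix: one query for all users, then regroup in the app.
#   cur.execute("SELECT * FROM orders WHERE user_id = ANY(%s)", (user_ids,))


def group_orders_by_user(rows):
    """Regroup the single batched result set into per-user lists."""
    by_user = defaultdict(list)
    for row in rows:
        by_user[row["user_id"]].append(row)
    return by_user
```

100 users go from 100 queries to 1 query plus an O(n) loop over the results.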
FIX 4 β COUNT(*) ON LARGE TABLE
-- SELECT count(*) FROM orders; scans entire table.
-- Approximate count (instant):
SELECT n_live_tup FROM pg_stat_user_tables
WHERE relname = 'orders';
-- Good enough for dashboards.
FIX 5 β OVER-SELECTING COLUMNS
-- SELECT * FROM users; fetches every column including blobs.
-- Be specific:
SELECT id, email, name FROM users;
-- Less data transferred, faster query.
Most performance problems are a missing index or an N+1 pattern. Fix those two and you've solved 80% of slow queries.
What's your slowest query? 👇
@postgres
🎉 Week 10 done. Migrations without fear.
This week:
✅ Monday — Why ALTER TABLE is terrifying (and doesn't have to be)
✅ Tuesday — Safe vs dangerous operations (know before you run)
✅ Wednesday — 💰 Complete migration system (3⭐)
✅ Thursday — Three levels of rollback
The takeaway: every migration should have a written rollback plan before you run it. Takes 5 minutes. Saves you from the worst night of your career.
---
10 WEEKS. THE FULL STACK.
Auth → Jobs → Performance → Search → Real-time → Backups → Multi-tenancy → File storage → Caching → Migrations
That's everything you need to build, run, and evolve a SaaS. All PostgreSQL.
If you joined recently: every week's free content is still here. Scroll back and catch up.
---
NEXT WEEK
Taking suggestions. What's missing from your PostgreSQL toolkit?
🔴 Email & notifications (triggers, templates, sending from PG)
🟡 Monitoring & observability (pg_stat, slow queries, dashboards)
🟢 Data import/export (CSV, JSON, bulk operations)
🔵 Testing & CI (test your database layer properly)
Or something else entirely. Tell me 👇
@postgres