๐Ÿ˜ PostgreSQL Mastery Hub

๐ŸŽฏ What you get:
- Daily optimization tips
- Performance guides
- Real-world solutions
- Query debugging help
- Production best practices

๐Ÿ“ˆ Join 500+ developers improving their PostgreSQL skills
🎯 FUZZY MATCHING WITH PG_TRGM

Full-text search is great.
But what about typos?

"postgre" should find "postgresql"
"recieved" should find "received"

Enter: pg_trgm

---

ENABLE IT
CREATE EXTENSION pg_trgm;


That's it. Now you have fuzzy matching.

---

HOW IT WORKS

Trigrams = 3-character chunks

"postgresql" โ†’ {" po", "pos", "ost", "stg", "tgr", "gre", "res", "esq", "sql", "ql "}

Similar words share trigrams.
More shared = more similar.
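
You can check this yourself. A quick sketch, assuming the extension is installed:

-- Inspect the trigrams PostgreSQL extracts
SELECT show_trgm('postgre');

-- similarity() = shared trigrams / total distinct trigrams
SELECT similarity('postgre', 'postgresql');
-- ≈ 0.58: the two words share 7 of their 12 distinct trigrams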

---

SIMILARITY SEARCH
-- Find similar to "postgre"
SELECT name, similarity(name, 'postgre') as sim
FROM products
WHERE similarity(name, 'postgre') > 0.3
ORDER BY sim DESC;


---

MAKE IT FAST
-- GIN index for similarity
CREATE INDEX idx_products_name_trgm
ON products USING GIN(name gin_trgm_ops);

-- Now this is fast:
SELECT * FROM products
WHERE name % 'postgre' -- % means "similar to"
ORDER BY name <-> 'postgre'; -- <-> is distance
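
The % operator means "similar enough". What counts as enough is a threshold (default 0.3) you can tune per session:

-- Stricter fuzzy matching for %
SET pg_trgm.similarity_threshold = 0.4;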


---

AN ILIKE THAT'S FAST

Remember how LIKE '%search%' is slow?
-- With pg_trgm, this uses the index:
SELECT * FROM products
WHERE name ILIKE '%data%';

-- Because the GIN index supports it!
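
On a table with real data, you can confirm the index is doing the work (tiny tables may still get a seq scan):

-- Look for a Bitmap Index Scan on idx_products_name_trgm
EXPLAIN SELECT * FROM products WHERE name ILIKE '%data%';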


---

COMBINE WITH FULL-TEXT
-- Best of both worlds
SELECT title,
       ts_rank(search_vector, query) AS relevance,
       similarity(title, $1) AS fuzzy_score
FROM posts, to_tsquery('english', $1) query
WHERE search_vector @@ query
   OR title % $1
-- Output aliases can't be used inside ORDER BY expressions,
-- so the functions are repeated here:
ORDER BY ts_rank(search_vector, query) + similarity(title, $1) DESC;


Finds exact matches AND fuzzy matches.

---

THAT'S THE WHOLE SEARCH STACK

- tsvector + tsquery = full-text
- pg_trgm = fuzzy/typo tolerance
- GIN indexes = fast

No external services.

#January2026
50โค1
📊 JANUARY RETROSPECTIVE

First month of 2026: done.

---

WHAT WE COVERED

Week 1: Auth queries
Week 2: Background jobs
Week 3: Performance & indexes
Week 4: Full-text search

Four fundamental patterns.
Zero external dependencies.
Pure PostgreSQL.

---

THE THEME

"You don't need that service."

Auth → No Auth0
Jobs → No Redis
Search → No Algolia
Everything → PostgreSQL

Not because services are bad.
Because simple is good.

---

WHAT WORKED

✓ Shorter, focused content
✓ Copy-paste ready code
✓ Lower prices (3 ⭐)
✓ One topic per week

---

FEBRUARY PREVIEW

Thinking about:
- Real-time with LISTEN/NOTIFY
- Multi-tenancy patterns
- Backup & recovery
- API rate limiting

What do you want to see?

---

HONEST QUESTION

Is this content useful?

- 🔥 Yes, using it in real projects
- 📚 Yes, learning for future
- 😐 Meh, not really
- 👎 No, want something different

Reply or react.
Helps me know what to make.

---

THANK YOU

1,000+ of you here now.
That's not nothing.

See you in February.

@postgres

#January2026
📡 FEBRUARY KICKOFF: REAL-TIME WEEK

New month. New topic.

January covered:
- Auth ✓
- Background jobs ✓
- Performance ✓
- Full-text search ✓

February starts with: Real-time.

---

THE PROBLEM

Your app needs live updates:
- New message notifications
- Activity feeds
- Live dashboards
- Collaborative features

The "normal" solution:
- Pusher: $49/month
- Ably: $29/month
- Firebase: Usage-based $$
- Socket.io + Redis: Complexity

---

THE POSTGRESQL WAY

LISTEN/NOTIFY.

Built into PostgreSQL since forever.
Zero extra services.
Zero extra cost.
-- That's it. That's real-time.
NOTIFY my_channel, 'Hello from the database';


---

THIS WEEK

Tue: How LISTEN/NOTIFY works
Wed: 💰 Copy-paste real-time system (3 ⭐)
Thu: WebSocket patterns
Fri: Week 5 check-in

---

JANUARY RESULTS

4 weeks. 4 topics. 4 utilities.

Small wins > big masterclasses.

Let's keep going.

#February2026
50โค2
📡 HOW LISTEN/NOTIFY WORKS

PostgreSQL has built-in pub/sub.
Most people don't know this exists.

---

THE BASICS
-- Connection 1: Subscribe
LISTEN order_updates;

-- Connection 2: Publish
NOTIFY order_updates, '{"order_id": 123, "status": "shipped"}';

-- Connection 1 receives it instantly


That's the whole API.
LISTEN. NOTIFY. Done.

---

FROM YOUR APPLICATION
-- Trigger on table changes
CREATE OR REPLACE FUNCTION notify_order_change()
RETURNS TRIGGER AS $$
BEGIN
  PERFORM pg_notify('order_updates', json_build_object(
    'action', TG_OP,
    'order_id', NEW.id,
    'status', NEW.status
  )::TEXT);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER order_changed
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW EXECUTE FUNCTION notify_order_change();


Now every order change broadcasts automatically.
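
Quick way to see it fire, in two psql sessions:

-- Session 1: subscribe
LISTEN order_updates;

-- Session 2: change a row
UPDATE orders SET status = 'shipped' WHERE id = 123;

-- Session 1 prints the trigger's JSON payload
-- (psql shows it with its next prompt)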

---

IN NODE.JS
const { Client } = require('pg');
const client = new Client();

await client.connect();
await client.query('LISTEN order_updates');

client.on('notification', (msg) => {
  const data = JSON.parse(msg.payload);
  console.log('Order updated:', data);
  // Send to WebSocket clients
});


---

LIMITATIONS

- Payload max: 8,000 bytes (enough for IDs and small data)
- No persistence (miss it if not listening)
- No delivery guarantee

For most apps? Totally fine.
For critical data? Store first, notify second.
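
Bonus: NOTIFY is transactional, so "store first, notify second" is one atomic block. A minimal sketch (the notifications table is hypothetical):

BEGIN;
INSERT INTO notifications (user_id, body)
VALUES (42, 'Order shipped');
-- Queued with the transaction; listeners only see it on COMMIT,
-- and never for rolled-back work
SELECT pg_notify('user_42', '{"event": "order_shipped"}');
COMMIT;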

---

Tomorrow: The complete system.

Notifications, activity feeds, live dashboards.
All copy-paste ready.

3 ⭐

#February2026
PostgreSQL Pro | Database Mastery pinned «📡 COPY-PASTE REAL-TIME SYSTEM Live updates without Pusher. No monthly bill. No external service. --- WHAT'S INSIDE 📋 Notification System - User notifications table - Real-time delivery via NOTIFY - Unread count tracking - Mark as read 📋 Activity Feed…»
🔌 WEBSOCKET PATTERNS

LISTEN/NOTIFY gets data out of PostgreSQL.
WebSockets get it to the browser.

Here's how to connect them.

---

THE SIMPLE PATTERN

PostgreSQL → Node.js listener → WebSocket server → Browser

// server.js
const WebSocket = require('ws');
const { Client } = require('pg');

const wss = new WebSocket.Server({ port: 8080 });
const pg = new Client();

// Listen to PostgreSQL
await pg.connect();
await pg.query('LISTEN updates');

pg.on('notification', (msg) => {
  // Broadcast to all connected clients
  wss.clients.forEach(client => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(msg.payload);
    }
  });
});


---

USER-SPECIFIC CHANNELS
// Track which users are connected
const userConnections = new Map();

wss.on('connection', (ws, req) => {
  const userId = authenticateConnection(req);

  if (!userConnections.has(userId)) {
    userConnections.set(userId, new Set());
    // Subscribe to user's channel (userId comes from auth,
    // but validate it anyway: it's interpolated into the LISTEN command)
    pg.query(`LISTEN user_${userId}`);
  }
  userConnections.get(userId).add(ws);

  ws.on('close', () => {
    userConnections.get(userId).delete(ws);
  });
});

// In notification handler
pg.on('notification', (msg) => {
  const userId = msg.channel.replace('user_', '');
  const connections = userConnections.get(userId);

  connections?.forEach(ws => {
    ws.send(msg.payload);
  });
});


---

ROOM-BASED (LIKE CHAT)
-- Notify a specific room
SELECT pg_notify(
  'room_' || room_id::TEXT,
  json_build_object('message', content, 'user', sender)::TEXT
);

// Join room
ws.on('message', (data) => {
  const { action, roomId } = JSON.parse(data);

  if (action === 'join') {
    pg.query(`LISTEN room_${roomId}`);
    // Track membership...
  }
});


---

SCALING NOTE

One Node.js process = one LISTEN connection.
For multiple servers, each listens independently.

PostgreSQL handles the fan-out.
No Redis pub/sub needed.
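
One thing to watch: if a listening connection stalls, notifications queue up inside PostgreSQL. There's a built-in gauge for that:

-- Fraction of the notification queue in use (0 = empty, 1 = full)
SELECT pg_notification_queue_usage();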

---

How would you use real-time in your app?

#February2026
📊 WEEK 5 CHECK-IN

Real-time week: done.

---

THIS WEEK

Mon: February kickoff, LISTEN/NOTIFY intro
Tue: How it works + triggers
Wed: Complete real-time system 💰
Thu: WebSocket patterns

---

THE KEY INSIGHT

PostgreSQL already has pub/sub.

LISTEN/NOTIFY has been in PostgreSQL for decades.
The payload argument arrived in 9.0. That was 2010.

Most people still don't know it exists.

Now you do.

---

WHEN TO USE IT

✅ Good for:
- Notifications (new message, new follower)
- Activity feeds
- Live dashboards
- Presence (who's online)
- Simple chat

โš ๏ธ Consider alternatives for:
- High-frequency updates (100+ per second)
- Messages larger than 8KB
- Guaranteed delivery requirements

---

WHAT DID YOU BUILD?

- 📡 Added real-time to my app
- 🔔 Set up notification system
- 📚 Just learning
- 🤔 Still figuring out use case

---

FEBRUARY PREVIEW

Week 6: Backups & Recovery (don't lose everything)
Week 7: Multi-tenancy patterns
Week 8: TBD - you decide

What should Week 8 be?

---

5 WEEKS DOWN

We've covered:
1. Auth
2. Background jobs
3. Performance
4. Full-text search
5. Real-time

That's a complete PostgreSQL toolkit.

More coming.

#February2026
🔥 Your backups are probably broken. You just don't know it yet.

Here's how most solo devs handle backups:

Step 1: Set up pg_dump in a cron job
Step 2: Forget about it
Step 3: Server dies 8 months later
Step 4: Discover the cron job stopped working 6 months ago
Step 5: Cry


Sound familiar?
The problem isn't making backups. It's making backups that actually work when you need them.

This week we fix that:

📅 Tuesday — The three backup methods and when to use each
📅 Wednesday — 💰 Complete backup & recovery system you can copy-paste (3⭐)
📅 Thursday — How to test your backups (the step everyone skips)
📅 Friday — Check-in

No managed service needed. No S3 bill surprise. Just PostgreSQL doing what PostgreSQL does well.

How do you handle backups right now? Be honest — no judgment 👇

@postgres
๐Ÿ› ๏ธ Three backup methods. Pick the right one.

PostgreSQL gives you three options. Each solves a different problem.

1. pg_dump — The snapshot

# Full database dump, compressed
pg_dump -Fc -Z 6 mydb > backup_$(date +%Y%m%d).dump

# Restore
pg_restore -d mydb backup_20260210.dump


Good for: Small databases (<10 GB), simple projects, quick exports.
Bad for: Large databases (slow), point-in-time recovery (impossible).

2. pg_basebackup — The full copy

# Complete cluster backup
pg_basebackup -D /backups/base \
  -Ft -z -P -v \
  -U replicator

Good for: Full server restore, setting up replicas.
Bad for: Frequent backups (copies everything every time).


3. WAL archiving — The timeline

-- Enable WAL archiving (ALTER SYSTEM writes postgresql.auto.conf)
ALTER SYSTEM SET wal_level = replica;
ALTER SYSTEM SET archive_mode = on;
ALTER SYSTEM SET archive_command = 'cp %p /backups/wal/%f';
ALTER SYSTEM SET archive_timeout = 300;

-- archive_command and archive_timeout apply on reload;
-- wal_level and archive_mode need a server restart:
SELECT pg_reload_conf();

Good for: Point-in-time recovery, minimal data loss, large databases.

Bad for: Nothing. This is what you want.
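
Don't just enable it. Verify it. PostgreSQL tracks archiving health in a built-in view:

-- failed_count > 0 means WAL files aren't reaching your archive
SELECT archived_count, failed_count, last_failed_wal
FROM pg_stat_archiver;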

The solo dev answer:

Database < 5 GB?
→ pg_dump daily + WAL archiving

Database 5-50 GB?
→ pg_basebackup weekly + WAL archiving

Database > 50 GB?
→ pg_basebackup weekly + WAL archiving + incremental strategy

WAL archiving is the common thread. It gives you something the others can't: go back to any point in time.

Accidentally deleted production data at 2:47 PM?

Restore base backup → replay WAL → stop at 2:46 PM

Everything back. Like it never happened.
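
What that looks like in practice, sketched for PostgreSQL 12+ (paths and timestamp are placeholders):

# postgresql.conf on the restored base backup
restore_command = 'cp /backups/wal/%f %p'
recovery_target_time = '2026-02-10 14:46:00'
recovery_target_action = 'promote'

# Then create an empty recovery.signal file in the data directory
# and start the server. It replays WAL up to the target time,
# then promotes itself to normal operation.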

Tomorrow: the complete system that ties all three together.

Copy-paste ready. 3⭐.

@postgres
๐Ÿ‘1
This media is not supported in the widget
VIEW IN TELEGRAM
PostgreSQL Pro | Database Mastery pinned «🔐 Copy-Paste Backup & Recovery System What's inside: 📦 COMPLETE SYSTEM (3 ⭐) 1. AUTOMATED BACKUP SCRIPT - pg_dump for daily snapshots - pg_basebackup for weekly full copies - WAL archiving for continuous protection - Rotation (keep 7 daily, 4…»
🧪 Your backup is worthless until you test the restore.
The number one backup mistake: assuming it works.
Here's a dead simple restore test you can run right now:

#!/bin/bash
# test_restore.sh — Run this weekly

TEST_DB="restore_test_$(date +%s)"
BACKUP_FILE=$(ls -t /backups/dumps/*.dump | head -1)

echo "Testing restore of: $BACKUP_FILE"

# Create temporary database
createdb "$TEST_DB"

# Restore into it
pg_restore -d "$TEST_DB" "$BACKUP_FILE" 2>/dev/null

# Verify: count tables
TABLE_COUNT=$(psql -t -A -d "$TEST_DB" \
  -c "SELECT count(*) FROM information_schema.tables
      WHERE table_schema = 'public'")

# Verify: count rows in your main table
ROW_COUNT=$(psql -t -A -d "$TEST_DB" \
  -c "SELECT count(*) FROM users" 2>/dev/null || echo "0")

# Cleanup
dropdb "$TEST_DB"

# Report
echo "Tables restored: $TABLE_COUNT"
echo "Users found: $ROW_COUNT"

if [ "$TABLE_COUNT" -gt 0 ] && [ "$ROW_COUNT" -gt 0 ]; then
  echo "✅ Backup is valid"
else
  echo "❌ BACKUP FAILED — investigate immediately"
  # Add your alert here: curl webhook, send email, etc.
fi

Schedule it:

# Every Sunday at 3 AM. Append to the existing crontab
# (piping a bare echo to crontab - would overwrite it):
(crontab -l 2>/dev/null; echo "0 3 * * 0 /usr/local/bin/test_restore.sh \
  >> /var/log/backup_test.log 2>&1") | crontab -


Three things to check when testing:
1. Can you restore at all?
Sounds obvious. Many dumps silently fail (permissions, disk space, version mismatch).
2. Is the data complete?
Row counts should match production within the backup window.
3. How long does restore take?
If your 50 GB database takes 4 hours to restore, that's your minimum downtime. Plan accordingly.
The script above takes 2 minutes to set up. Runs automatically. Tells you immediately if something breaks.
Two minutes now vs finding out during an emergency at 3 AM.
When was the last time you tested a restore? 👇
@postgres
📊 Week 6 done. Your backups should be smarter now.

Quick recap:

✅ Monday — The honest truth about backup failures
✅ Tuesday — Three methods: pg_dump, pg_basebackup, WAL archiving
✅ Wednesday — 💰 Complete backup system (3⭐)
✅ Thursday — Restore testing script

The one thing to take away this week: backups without tested restores are just wasted disk space.


NEXT WEEK: MULTI-TENANCY PATTERNS

Week 7 tackles the question every SaaS builder faces: how do you serve multiple customers from one database?

- Shared tables with row-level security
- Schema-per-tenant
- When to pick which
- The copy-paste RLS setup

One of the most requested topics. See you Monday.
What did you implement this week? 👇
@postgres
๐Ÿข Every SaaS has the same question: how do you serve 100 customers from one database?

Three approaches. Each with tradeoffs.

1๏ธโƒฃ Shared tables + Row-Level Security
One set of tables. Every row has a tenant_id.
PostgreSQL enforces who sees what.

2๏ธโƒฃ Schema-per-tenant
Same database, separate schema for each customer.
tenant_alice.users, tenant_bob.users

3๏ธโƒฃ Database-per-tenant
Fully isolated. One database per customer.
Maximum separation, maximum overhead.

Most solo devs should start with #1.
It's the simplest, cheapest, and scales surprisingly far.

This week:

📅 Tuesday — Shared tables with RLS (the setup)
📅 Wednesday — 💰 Complete multi-tenant system (3⭐)
📅 Thursday — When to switch approaches
📅 Friday — Check-in

If you're building a SaaS, this is your week.

Which approach are you using right now? 👇

@postgres
โค3
🔒 Row-Level Security in 10 minutes. Your tenants can never see each other's data.

The idea: PostgreSQL checks every query automatically. No WHERE tenant_id = ? scattered through your code.

Step 1: Add tenant_id to every table

ALTER TABLE users ADD COLUMN tenant_id UUID NOT NULL;
ALTER TABLE orders ADD COLUMN tenant_id UUID NOT NULL;
ALTER TABLE products ADD COLUMN tenant_id UUID NOT NULL;
-- (tables that already have rows need a DEFAULT or a backfill first)

CREATE INDEX idx_users_tenant ON users(tenant_id);
CREATE INDEX idx_orders_tenant ON orders(tenant_id);
CREATE INDEX idx_products_tenant ON products(tenant_id);

Step 2: Create an app user (not the superuser)

CREATE USER app_user WITH PASSWORD 'strong_password';
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;

Step 3: Enable RLS

ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE products ENABLE ROW LEVEL SECURITY;

Step 4: Create policies

CREATE POLICY tenant_isolation ON users
USING (tenant_id = current_setting('app.tenant_id')::UUID);

CREATE POLICY tenant_isolation ON orders
USING (tenant_id = current_setting('app.tenant_id')::UUID);

CREATE POLICY tenant_isolation ON products
USING (tenant_id = current_setting('app.tenant_id')::UUID);

Step 5: Set tenant in your app (every request)

-- At the start of each API request (SET LOCAL only takes
-- effect inside a transaction, so wrap the request in one):
BEGIN;
SET LOCAL app.tenant_id = 'uuid-of-current-tenant';

-- Now every query is automatically filtered:
SELECT * FROM users;
-- Only returns rows where tenant_id matches. Automatically.

COMMIT;

That's it. No middleware. No query wrappers. PostgreSQL handles it at the database level.
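
Worth testing the negative case too. A quick sketch with a hypothetical tenant ID:

BEGIN;
SET LOCAL app.tenant_id = '22222222-2222-2222-2222-222222222222';
-- As app_user, other tenants' rows are invisible, so this is always 0:
SELECT count(*) FROM users
WHERE tenant_id <> current_setting('app.tenant_id')::UUID;
COMMIT;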

Tomorrow: the complete system with signup, onboarding, admin access, cross-tenant queries, and all the edge cases. 3⭐.

@postgres
PostgreSQL Pro | Database Mastery pinned «🔐 Complete Multi-Tenant System — Copy-Paste Ready What's inside: 📦 COMPLETE SYSTEM (3 ⭐) 1. TENANT MANAGEMENT - Tenant signup & provisioning - Tenant settings table - Plan/tier tracking - Soft delete & data retention 2. ROW-LEVEL SECURITY (FULL…»
🤔 Shared tables vs schema-per-tenant vs database-per-tenant. When to switch.

Here's the honest breakdown.

SHARED TABLES + RLS (start here)

✅ One schema, one migration, one backup
✅ Works with connection pooling out of the box
✅ Scales to thousands of tenants easily
✅ Simplest to build and maintain
❌ Noisy neighbor risk (one tenant's huge query slows everyone)
❌ Tenant data export is a filtered query, not a dump

Best for: Most SaaS apps. 1 to 10,000 tenants.

SCHEMA-PER-TENANT

✅ Stronger isolation (separate tables)
✅ Per-tenant backup/restore possible
✅ Can customize schema per tenant
❌ Migrations run N times (once per tenant)
❌ Connection pooling gets complicated
❌ 500+ schemas starts feeling heavy

Best for: Apps where tenants need different configurations. Regulated industries where you need to prove data separation.

DATABASE-PER-TENANT

✅ Maximum isolation
✅ Independent backup, restore, scaling
✅ Can run different PostgreSQL versions
❌ Connection management nightmare
❌ $20-50/month per tenant for managed DBs
❌ Migrations across 200 databases = pain

Best for: Enterprise customers who contractually require separate databases. Very few solo devs need this.

THE DECISION TREE:

Building your first SaaS?
→ Shared tables + RLS. Don't overthink it.

Have a paying customer demanding isolation?
→ Schema-per-tenant for that one customer.

Selling to banks/hospitals with compliance requirements?
→ Database-per-tenant, but charge accordingly.

ONE TRICK: You can mix approaches.

Most tenants on shared tables.
Premium/enterprise tenants on dedicated schemas.
Same app code. RLS handles shared. search_path handles schema.

-- For shared tenants:
SET LOCAL app.tenant_id = 'uuid';

-- For premium tenants:
SET search_path TO tenant_acme, public;

Start simple. Upgrade individual tenants when they pay for it.

What's your current setup? Thinking about changing? 👇

@postgres
📊 Week 7 done. Multi-tenancy demystified.

This week:

✅ Monday — Three approaches compared
✅ Tuesday — RLS setup in 10 minutes
✅ Wednesday — 💰 Complete multi-tenant system (3⭐)
✅ Thursday — When to switch approaches

The takeaway: start with shared tables + RLS. Upgrade individual tenants when they pay enough to justify it.

---

RUNNING TALLY

Week 1 | Auth | 3⭐ | 1 ✓
Week 2 | Jobs | 4⭐ | 0
Week 3 | Performance | 3⭐ | ?
Week 4 | Search | 3⭐ | ?
Week 5 | Real-time | 3⭐ | ?
Week 6 | Backups | 3⭐ | ?
Week 7 | Multi-tenancy | 3⭐ | ?

---

NEXT WEEK: YOU DECIDE

We've covered a lot of ground. What do you need most?

🔴 Caching with PostgreSQL (replace Redis for sessions, app cache)
🟡 File storage & uploads (store and serve files without S3)
🟢 Database migrations done right (zero-downtime, rollback-safe)
🔵 Monitoring & observability (know what your database is doing)

Vote below 👇

See you Monday with whatever wins.

@postgres
๐Ÿ‘1
๐Ÿ“ You don't need S3 for file uploads. PostgreSQL handles it.

Controversial? Maybe. But hear me out.

If your app has:
- User avatars
- PDF invoices
- CSV exports
- Document attachments
- Images under 10 MB

You don't need a separate file storage service.

PostgreSQL has a built-in data type called bytea. It stores binary data directly in your database. And a feature called Large Objects for bigger files.
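
A minimal sketch of the bytea route (table layout is illustrative, not the paid system):

CREATE TABLE files (
  id         BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name       TEXT NOT NULL,
  mime_type  TEXT NOT NULL,
  data       BYTEA NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- From an app, pass the bytes as a query parameter.
-- For a quick server-side test, pg_read_binary_file works
-- (requires superuser or the pg_read_server_files role):
INSERT INTO files (name, mime_type, data)
VALUES ('avatar.png', 'image/png',
        pg_read_binary_file('/tmp/avatar.png'));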

Why this matters for solo devs:

S3 + CloudFront: $5-25/month + complexity
Cloudinary: $0-89/month + API limits
DigitalOcean Spaces: $5/month minimum
PostgreSQL bytea: $0 extra (already in your database)

One less service. One less API key. One less thing that breaks at 2 AM.

This week:

📅 Tuesday — How bytea and Large Objects work
📅 Wednesday — 💰 Complete file system (3⭐)
📅 Thursday — When PostgreSQL storage isn't enough
📅 Friday — Check-in

Let's build a file system that lives next to your data. Where it belongs.

How do you handle file uploads today? 👇

@postgres