JOURNAL
Engineering · 22 Sep 2025 · Priya · 9 min read

Postgres is enough, until it isn't.

A field guide to why you should stop reaching for a new database, and when you finally should.

§ 01

Postgres goes further than you think.

Most teams reach for a specialised database too early. They read a blog post about how some unicorn company runs everything on a streaming system and assume their CRUD app needs the same. It does not.

On the systems we run, Postgres comfortably handles single-instance setups well into the millions of rows with simple indexing, the high tens of millions with proper partitioning and a thoughtful schema, and into hundreds of millions when you accept the operational complexity of read replicas and connection pooling. None of that requires moving off Postgres.
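
To make that last tier concrete: client-side connection pooling with psycopg2's built-in pool looks roughly like the sketch below. In production this job often goes to PgBouncer instead, and the DSN and pool bounds here are placeholders, not recommendations.

    # Client-side connection pooling: reuse a bounded set of connections
    # instead of opening one per request. DSN and bounds are placeholders.
    from psycopg2.pool import ThreadedConnectionPool

    pool = ThreadedConnectionPool(
        minconn=2,
        maxconn=20,
        dsn="postgresql://app@db.internal/app",
    )

    def fetch_user(user_id):
        conn = pool.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT id, email FROM users WHERE id = %s", (user_id,))
                return cur.fetchone()
        finally:
            pool.putconn(conn)  # always return the connection to the pool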

We have shipped real-time-feeling dashboards on raw Postgres with materialised views and a careful refresh strategy. We have run multi-tenant SaaS with row-level security and tenant-keyed partitioning. We have done event-sourced workflows with an append-only table and a few smart indexes. Postgres is, in our experience, very rarely the bottleneck.
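
The materialised-view version of "real-time-feeling" is less exotic than it sounds. A sketch with invented names and an arbitrary refresh interval, assuming psycopg2:

    # Precompute the dashboard aggregate, then refresh it on a timer.
    # CONCURRENTLY needs autocommit and a unique index, but it keeps the
    # view readable while the refresh runs.
    import time
    import psycopg2

    conn = psycopg2.connect("postgresql://app@db.internal/app")
    conn.autocommit = True  # REFRESH ... CONCURRENTLY can't run in a transaction block

    with conn.cursor() as cur:
        cur.execute("""
            CREATE MATERIALIZED VIEW IF NOT EXISTS daily_signups AS
            SELECT date_trunc('day', created_at) AS day, count(*) AS signups
            FROM users
            GROUP BY 1
        """)
        cur.execute("CREATE UNIQUE INDEX IF NOT EXISTS daily_signups_day ON daily_signups (day)")

    while True:
        with conn.cursor() as cur:
            cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY daily_signups")
        time.sleep(60)  # the refresh interval is the "careful strategy" part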

§ 02

Three signals to start considering alternatives.

There are three honest signals that you should consider adding (not replacing — adding) another data store. Each one needs to be reproducible, measurable, and not solvable by indexing.

Signal 1 — Hot path latency you cannot index away. If a small set of queries serves the majority of traffic and they cannot be made fast through indexing, materialised views, or a query rewrite, a key-value cache in front of Postgres usually solves it. Redis. That's the answer.
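
The shape of that cache is cache-aside: try Redis, fall back to Postgres, repopulate with a TTL. A sketch, assuming redis-py and psycopg2, with an invented key scheme and table:

    # Cache-aside on the hot read path. The TTL bounds staleness even
    # when invalidation misses; key format and names are illustrative.
    import json
    import psycopg2
    import redis

    r = redis.Redis(host="cache.internal", port=6379)
    conn = psycopg2.connect("postgresql://app@db.internal/app")
    conn.autocommit = True

    def get_product(product_id):
        key = f"product:{product_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)  # cache hit: no Postgres round trip
        with conn.cursor() as cur:
            cur.execute("SELECT name, price FROM products WHERE id = %s", (product_id,))
            row = cur.fetchone()
        if row is None:
            return None
        value = {"name": row[0], "price": float(row[1])}
        r.setex(key, 300, json.dumps(value))  # repopulate with a 5-minute TTL
        return value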

Signal 2 — Write throughput you cannot batch. If sustained writes outpace what a single Postgres primary can absorb and you cannot shard or batch your writes, then a purpose-built time-series or column store may help. We have used ClickHouse for analytics dashboards where the read pattern is OLAP-shaped. Postgres still owns the operational data.
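
Before concluding you cannot batch, make sure of it: collapsing row-at-a-time INSERTs into multi-row statements often buys an order of magnitude on its own. A sketch using psycopg2's execute_values, with an invented table:

    # Turn N single-row INSERTs into one multi-row statement per page:
    # one round trip and one commit per batch instead of per row.
    import psycopg2
    from psycopg2.extras import execute_values

    conn = psycopg2.connect("postgresql://app@db.internal/app")

    def write_events(events):
        # events: iterable of (tenant_id, kind, payload) tuples
        with conn, conn.cursor() as cur:
            execute_values(
                cur,
                "INSERT INTO events (tenant_id, kind, payload) VALUES %s",
                events,
                page_size=1000,  # rows per statement; tune against latency budget
            )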

Signal 3 — Search beyond LIKE. If you need ranked full-text search across long documents with synonyms and stemming, Postgres's built-in full-text is good enough until it isn't, and at that point a dedicated search engine is the right call. Until then, do not.
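
"Good enough" means a stored tsvector, a GIN index, and ranked queries, none of which is a second system. A sketch with invented schema names (the generated column needs Postgres 12 or later):

    # Ranked full-text search in plain Postgres: a stored tsvector column,
    # a GIN index, websearch-style query parsing, ts_rank ordering.
    import psycopg2

    conn = psycopg2.connect("postgresql://app@db.internal/app")

    with conn, conn.cursor() as cur:
        cur.execute("""
            ALTER TABLE docs ADD COLUMN IF NOT EXISTS tsv tsvector
                GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
        """)
        cur.execute("CREATE INDEX IF NOT EXISTS docs_tsv ON docs USING gin (tsv)")

    def search(query, limit=10):
        with conn.cursor() as cur:
            cur.execute("""
                SELECT id, ts_rank(tsv, q) AS rank
                FROM docs, websearch_to_tsquery('english', %s) AS q
                WHERE tsv @@ q
                ORDER BY rank DESC
                LIMIT %s
            """, (query, limit))
            return cur.fetchall()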

§ 03

The real migration cost.

Moving primary data off Postgres is one of the most expensive things you can do. You pay it in three currencies. Engineering hours, because every query, ORM mapping, transaction boundary, and integrity constraint has to be re-thought. Operational complexity, because you now run two systems and the failure modes multiply. And cognitive load on every new engineer who joins, forever.

We have seen teams pay all three of those costs to escape a problem they could have indexed away in an afternoon. We have also seen teams put it off too long and run into a real wall. The judgment call is what we get paid for.

§ 04

A practical playbook.

If you're feeling pain, do these in order and stop the moment the pain goes away.

Step 1. Run EXPLAIN ANALYZE on the slow queries. Most of the time, there's an index that doesn't exist yet, or a join that's doing something dumb.
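
Roughly, with psycopg2 and a made-up suspect query:

    # Print the real execution plan. BUFFERS shows whether the time goes
    # to I/O; a Seq Scan on a big table is the classic missing index.
    # Note: ANALYZE actually executes the query, so be careful with writes.
    import psycopg2

    conn = psycopg2.connect("postgresql://app@db.internal/app")
    conn.autocommit = True

    with conn.cursor() as cur:
        cur.execute("""
            EXPLAIN (ANALYZE, BUFFERS)
            SELECT * FROM orders WHERE customer_id = %s AND status = 'open'
        """, (42,))
        for (line,) in cur.fetchall():
            print(line)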

Step 2. Look at pg_stat_statements. The hot queries are not always the ones you think.
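
The query we keep around for this, assuming the pg_stat_statements extension is installed and preloaded; note the column names changed in Postgres 13 (total_time became total_exec_time):

    # The ten queries costing the most total execution time. These are
    # the hot ones, whatever your intuition says.
    import psycopg2

    conn = psycopg2.connect("postgresql://app@db.internal/app")
    conn.autocommit = True

    with conn.cursor() as cur:
        cur.execute("""
            SELECT calls,
                   round(total_exec_time::numeric, 1) AS total_ms,
                   round(mean_exec_time::numeric, 2)  AS mean_ms,
                   left(query, 80)                    AS query
            FROM pg_stat_statements
            ORDER BY total_exec_time DESC
            LIMIT 10
        """)
        for row in cur.fetchall():
            print(row)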

Step 3. Add Redis in front of the hot read path. Cache invalidation is hard, but cheaper than data-store migration.
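
The write path is the half people forget. The simplest workable scheme is delete-on-write, with the read-path TTL as the backstop; a sketch reusing the invented names from the cache example above:

    # Update Postgres first, then drop the cache key so the next read
    # repopulates it. A stale read between commit and delete is bounded
    # by the TTL set on the read path.
    import psycopg2
    import redis

    r = redis.Redis(host="cache.internal", port=6379)
    conn = psycopg2.connect("postgresql://app@db.internal/app")

    def update_price(product_id, new_price):
        with conn, conn.cursor() as cur:  # commits on successful exit
            cur.execute(
                "UPDATE products SET price = %s WHERE id = %s",
                (new_price, product_id),
            )
        r.delete(f"product:{product_id}")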

Step 4. Move read traffic to replicas when the problem is load on the primary rather than the speed of any individual query.
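
Routing is usually explicit in application code rather than magic. A sketch with placeholder hostnames; replica lag is the caveat that bites:

    # Route reads to a replica; keep writes on the primary. Hostnames are
    # placeholders, and lagged reads are the trade-off on the read side.
    import psycopg2

    primary = psycopg2.connect("postgresql://app@db-primary.internal/app")
    replica = psycopg2.connect("postgresql://app@db-replica.internal/app")
    replica.autocommit = True  # read-only traffic

    def count_open_orders():
        # Read path: the replica can serve it, at the cost of a little lag.
        with replica.cursor() as cur:
            cur.execute("SELECT count(*) FROM orders WHERE status = 'open'")
            return cur.fetchone()[0]

    def close_order(order_id):
        # Write path: must go to the primary.
        with primary, primary.cursor() as cur:
            cur.execute("UPDATE orders SET status = 'closed' WHERE id = %s", (order_id,))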

Step 5. Partition the largest tables by tenant or by date.
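
A sketch of the date-flavoured version using declarative partitioning (Postgres 10 or later; the table names are invented, and existing data still has to be migrated into the new parent):

    # Range-partition an events table by month. Queries that filter on
    # created_at touch only the matching partitions; partitioning by
    # tenant works the same way with PARTITION BY LIST or HASH.
    import psycopg2

    conn = psycopg2.connect("postgresql://app@db.internal/app")

    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE events_partitioned (
                id bigserial,
                tenant_id bigint NOT NULL,
                created_at timestamptz NOT NULL,
                payload jsonb
            ) PARTITION BY RANGE (created_at)
        """)
        cur.execute("""
            CREATE TABLE events_2025_09 PARTITION OF events_partitioned
            FOR VALUES FROM ('2025-09-01') TO ('2025-10-01')
        """)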

Step 6. Only after all of the above: introduce a second store for the specific workload that doesn't fit Postgres's shape.

We have shipped many production systems and only reached step 6 a handful of times. It's a long way down the list for a reason.
