4 min read

Stop paying for queues and locks, use Postgres

Postgres does queues, locks, and document storage. You don't need six services for 50 users.

Postgres · Architecture · DevEx

Your SaaS has 50 users and you’re running Redis for caching, RabbitMQ for queues, MongoDB for flexible schemas, and Pinecone for vector search. That’s an extra $170/month and four more things to monitor.

Postgres does all of it. Queues, locks, document storage, full-text search, vector similarity, time-series data, pub/sub messaging. One database, battle-tested, already in your stack.

pgvector adds vector operations for embeddings and similarity search. pg_cron runs scheduled jobs. Extensions turn Postgres into whatever you need without adding services.
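For example, with pg_cron available on your instance, a nightly cleanup job is one statement. A minimal sketch; the job name and the sessions table it purges are placeholders:

-- Enable the extension (it must be preloaded on the server), then schedule a 03:00 cleanup
CREATE EXTENSION pg_cron;
SELECT cron.schedule('nightly-cleanup', '0 3 * * *',
  $$DELETE FROM sessions WHERE expires_at < now()$$);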

Why Choose Postgres Over Multiple Services?

Before diving into implementations, it helps to understand why consolidating on Postgres makes sense for most applications: one backup and monitoring story, one set of credentials and client libraries, and transactional guarantees across all of your data instead of glue code between services.

Postgres as a Queue

SKIP LOCKED lets multiple workers grab jobs without blocking each other:

SELECT * FROM jobs
WHERE status = 'pending'
ORDER BY created_at
FOR UPDATE SKIP LOCKED
LIMIT 1;

Workers run this in a loop. No race conditions, ACID guarantees, no message broker.
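In practice, claiming the job and marking it as in progress usually happen in one statement, so the same row can't be grabbed twice. A sketch against the same jobs table:

WITH next_job AS (
  SELECT id FROM jobs
  WHERE status = 'pending'
  ORDER BY created_at
  FOR UPDATE SKIP LOCKED
  LIMIT 1
)
UPDATE jobs
SET status = 'processing'
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.*;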

Distributed locks

Advisory locks coordinate tasks across instances:

SELECT pg_try_advisory_lock(12345);

Returns true if you got the lock, false if another process holds it. Releases automatically on disconnect. No Redis needed.
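A typical pattern looks like this (a sketch; the key 12345 is arbitrary, and in practice you would derive it from a task name, for example with hashtext):

SELECT pg_try_advisory_lock(12345);  -- true: we own the lock, run the task
-- ... do the exclusive work ...
SELECT pg_advisory_unlock(12345);    -- release explicitly, or let the session end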

Document storage

JSONB stores flexible data and queries it fast with GIN indexes:

CREATE TABLE events (
  id SERIAL PRIMARY KEY,
  data JSONB
);
CREATE INDEX ON events USING GIN(data);

-- Query nested fields (containment queries use the GIN index)
SELECT * FROM events WHERE data @> '{"button_id": "signup"}';

Works well for feature flags, user preferences, and event tracking. Performance is comparable to dedicated document stores for datasets under roughly 100GB.
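If you mostly look up a single key by equality, a plain B-tree expression index is another option. A sketch on the same events table:

CREATE INDEX ON events ((data->>'button_id'));
SELECT * FROM events WHERE data->>'button_id' = 'signup';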

Performance is fine

SKIP LOCKED queues handle thousands of jobs per second. Advisory locks take microseconds. JSONB stays fast under 100GB.

At 200 requests/minute with 50 users, Postgres handles everything. At 50,000 requests/second, you’ll need specialized services. You’ll also have the revenue to pay for them.

Vector search

pgvector handles embeddings and similarity search:

CREATE EXTENSION vector;
CREATE TABLE docs (embedding vector(1536));
CREATE INDEX ON docs USING ivfflat (embedding vector_cosine_ops);

-- Find similar
SELECT * FROM docs ORDER BY embedding <=> '[0.1, 0.2, ...]'::vector LIMIT 5;

Handles 500k embeddings with sub-100ms queries. Pinecone becomes worth it at tens of millions of embeddings or very high query volumes.
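The recall/speed trade-off of the ivfflat index is tunable per session. A sketch, assuming the index above:

SET ivfflat.probes = 10;  -- more probes: better recall, slower queries (default is 1)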

Full-text search

Built-in search with stemming, ranking, and highlighting:

CREATE TABLE articles (
  id SERIAL PRIMARY KEY,
  title TEXT,
  content TEXT,
  search_vector tsvector GENERATED ALWAYS AS
    (to_tsvector('english', title || ' ' || content)) STORED
);
CREATE INDEX ON articles USING GIN(search_vector);

SELECT * FROM articles WHERE search_vector @@ to_tsquery('postgres & performance');
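Ranking and highlighting are built in as well. A sketch using the articles table above:

SELECT title,
       ts_rank(search_vector, query) AS rank,
       ts_headline('english', content, query) AS snippet
FROM articles, to_tsquery('english', 'postgres & performance') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 10;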

Works fine for millions of documents. Elasticsearch matters at billions.

Real costs

With separate services:

  • Postgres (DigitalOcean managed, single node): $15/month
  • RabbitMQ (CloudAMQP): $50/month
  • Redis (Upstash or ElastiCache): $25/month
  • MongoDB Atlas (M5 shared): $25/month
  • Pinecone (Standard with minimum): $70/month
  • Total: $185/month

With just Postgres:

  • Postgres: $15/month
  • Total: $15/month

Save $170/month and eliminate four services to monitor, update, and debug.

The real cost isn’t the hosting fees. It’s the time spent managing backups, handling version upgrades, debugging cross-service issues, and writing infrastructure code for each one.

I’ve seen companies burn money not on infrastructure, but on finding people to maintain the mess they insisted on. They’d hire cheap contractors at $300/month, cycle through them one after another, and never standardize anything. While they fought their architecture, competitors shipped features and stole users.

When to add services

RabbitMQ, Redis, Elasticsearch, and Pinecone are excellent tools built for specific problems. You’ll use them eventually.

But at 50 users, your constraint isn’t infrastructure. It’s time and money. Every service costs both:

  • Monthly fees when you’re pre-revenue
  • Setup and monitoring time
  • Mental overhead tracking health
  • Cross-system debugging

Add dedicated services when you hit:

  • 10,000+ jobs/second consistently
  • Complex pub/sub routing needs
  • Tens of millions of embeddings
  • Billions of documents to search

When you reach those numbers, you’ll have revenue to migrate properly. Until then, focus on building something people pay for.

Bottom line

Specialized services exist for good reasons. RabbitMQ handles millions of messages per second. Redis serves microsecond-latency cache hits. Elasticsearch powers search at massive scale.

You’ll use them eventually. Just not today.

Today, you need to ship features and find paying customers. Postgres handles queues, locks, vector search, and document storage well enough to get you there.

When you outgrow it, you’ll have the revenue to migrate properly. You’ll have users who depend on you, which means you can justify the engineering time to split services correctly.

Start simple. Add complexity when you measure the need, not when you imagine it.

Reality is often more nuanced. But me? Nuance bores me. I'd rather be clear.
