DEV Community

Postgres is Too Good (And Why That's Actually a Problem)

Shayan on June 13, 2025

We need to talk about something that's been bothering me for months. I've been watching indie hackers and startup founders frantically cobbling tog...

Jonas Scholz

Shayan

haha!

Günter Zöchbauer

When NoSQL became popular, many thought it was superior to SQL. While there are scenarios where that can be true, they are limited to the most trivial use cases, or to extreme scale where performance has much higher priority than functionality. The latter is probably why many thought NoSQL was superior in general, but very few ever need the kind of distributed performance some NoSQL databases can offer. SQL databases like Postgres offer a shitload of extremely useful functionality and can still scale, just somewhat less than specialized NoSQL databases, which buy that scale at the cost of extremely limited functionality.

Nathan Tarbert

been loving this energy tbh, made me rethink all those times i jumped straight to the fancy stacks instead of just trusting one thing that actually works. you think sticking to boring tech too long ever backfires, or nah?

Dotallio

Agree with this so much. I run everything from stateful AI flows to realtime dashboards with just Postgres - curious if anyone actually hit a limit that Postgres couldn't handle?
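
For reference, the pattern behind "Postgres as a queue" setups like this is usually `FOR UPDATE SKIP LOCKED`, which lets many workers pull jobs concurrently without blocking each other. A minimal sketch (the `jobs` table and its columns are illustrative, not from the article):

```sql
-- Hypothetical jobs table; names are illustrative.
CREATE TABLE jobs (
  id      bigserial PRIMARY KEY,
  payload jsonb   NOT NULL,
  done    boolean NOT NULL DEFAULT false
);

-- A worker claims one pending job; SKIP LOCKED makes it skip rows
-- already locked by other workers instead of waiting on them.
BEGIN;
SELECT id, payload
FROM jobs
WHERE NOT done
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 1;
-- ...process the job, then mark it finished and commit:
UPDATE jobs SET done = true WHERE id = 1;
COMMIT;
```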

Ha Aang

Finally something good on dev.to, even if it has an OP product plug.

Shaun Jansen Van Nieuwenhuizen

Very well written!
I use Postgres in enterprise environments and MySQL in private environments (considering moving).

I do believe that Postgres events will still require pub/sub when scaling horizontally.

Redis works great for caching. I have not tried the Postgres solution you posted, so I will definitely give it a try. That said, I think more libs support Redis out of the box.
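
The built-in Postgres pub/sub being discussed here is LISTEN/NOTIFY. A minimal sketch (the channel name is illustrative); note that notifications are delivered per connection, so horizontally scaled app servers each need their own listening connection:

```sql
-- Session A: subscribe to a channel (name is illustrative).
LISTEN job_events;

-- Session B: publish a payload (default max ~8000 bytes) to all listeners.
NOTIFY job_events, 'job 42 finished';
-- or equivalently, from application code:
SELECT pg_notify('job_events', 'job 42 finished');
```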

Stephen Potter

Love this. Can’t agree more.

Nicolus

I completely agree with your sentiment, but I think you're overestimating the cost and complexity of Redis or Valkey: You can install it for free on a $5 VPS in an hour (or 5 minutes if you just use the default configuration) and it works pretty much as expected out of the box. It gives you both a key-value cache and a pubsub queue system.

So doing everything in the DB is absolutely viable, and probably the best approach for an MVP, but I've never felt like Redis was a burden.

nadeem zia

nice work

Navin Yadav

Let me give it a try.

Alvarin

**Great Read! But How Do You Handle CI/CD With This Approach?**

@shayy Your post about using Postgres for everything really resonated with me. The UserJot example proves this works in production, but I'm curious about the operational side that wasn't covered.

When Postgres is handling your queues, search indexes, real-time notifications AND core data, how do you manage deployments safely? A single schema migration could impact job processing, search performance, and real-time features all at once. Do you use tools like Flyway for migrations, or have you found simpler approaches? And how do you test these interdependent features - especially when simulating production load across all the different "services" within Postgres?

I suspect managing CI/CD for one well-configured Postgres instance might actually be simpler than coordinating deployments across Redis + RabbitMQ + Elasticsearch + your main database. Would love to hear your real-world experience with testing and deployment strategies!
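
One concrete technique relevant to the migration-safety question, whatever tool runs the migrations: building indexes without locking out the queue, search, and real-time workloads that share the instance. A hedged sketch (table and index names are illustrative):

```sql
-- CONCURRENTLY builds the index without taking a lock that would block
-- writes to the table while it builds. Note it cannot run inside a
-- transaction block, so migration tools must run it outside one.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_jobs_pending
ON jobs (id)
WHERE NOT done;
```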

Erin Boeger

Thank you for this!

Alois Sečkár

What about storing files (large binary data) in Postgres? This is my current use case for MongoDB: storing user uploads. Can this be substituted with Postgres as well?

Peter Lamb • Edited

Yes, it absolutely can. I built a document management system using Postgres and have stored binaries in Postgres for other applications too. It works very well. I even implemented binary chunks to allow me to store very large binaries.
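
A minimal sketch of the chunking idea Peter describes (the schema is my guess at such a design, not his actual implementation):

```sql
-- Each file is split into fixed-size bytea chunks (e.g. 1 MB each).
CREATE TABLE file_chunks (
  file_id  bigint  NOT NULL,
  chunk_no integer NOT NULL,
  data     bytea   NOT NULL,
  PRIMARY KEY (file_id, chunk_no)
);

-- Reassemble a file by concatenating its chunks in order.
SELECT string_agg(data, ''::bytea ORDER BY chunk_no)
FROM file_chunks
WHERE file_id = 1;
```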

Chigozie Okali • Edited

Try experimenting with the bytea (byte array) data type for your file storage; a single bytea value can store up to 1 GB.

Jo

postgresql.org/docs/7.1/jdbc-lo.html
They are called BLOBs, binary large objects stored in DB tables. I'm not sure this is the best way to do it, but it's possible.
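
For completeness, the large-object API that link points to (the link is to the ancient 7.1 docs) still exists in current Postgres, with server-side `lo_*` functions. A minimal sketch:

```sql
-- Create a large object from bytea; passing OID 0 lets the server
-- assign a new OID, which the call returns.
SELECT lo_from_bytea(0, '\xdeadbeef'::bytea);

-- Read a large object back by the OID returned above:
-- SELECT lo_get(<oid>);
```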

david duymelinck

I would not put all the data in a single database. I go for task-specific schemas.

Specialized database systems have benefits beyond fulfilling the basic job; scaling isn't the only reason to use them.
If you only need the basic job, Postgres can be the solution.

Jurgo Boemo

I would say that if you want to divide your data, you should do it by domain, so you can have multiple Postgres instances/schemas organized by domain. If you do that, all the points in the article are still valid except for the pub/sub part.
I have the feeling that with this approach, the complexity is reduced so much that your performance will be OK even with a monolithic approach for a long time, though.

david duymelinck

I would not bother with domain separation in most cases, because it will not have the performance boost or scaling possibilities you think it has.
I would rather prefix the table name with the domain name, to make it easier to scan through the table overview and to detect domain-crossing foreign keys.
I would do domain separation when a high level of data security is needed.

Splitting it up by task makes the schemas single purpose. The full-text schema is going to require more data because of the token splitting; the queue schema is going to be very scalable because the number of rows will fluctuate.
Because the schemas are single purpose, it will be easier to draw conclusions when things go wrong.
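
The task-specific-schema setup described here can be expressed directly; a minimal sketch with illustrative schema and table names:

```sql
-- One schema per task, all inside the same database.
CREATE SCHEMA queue;
CREATE SCHEMA search;

CREATE TABLE queue.jobs (
  id      bigserial PRIMARY KEY,
  payload jsonb NOT NULL
);

CREATE TABLE search.documents (
  id   bigserial PRIMARY KEY,
  body tsvector
);
```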

Rock Brown

I start with SQLite3, then move up to Postgres. This makes it even easier than having to start with a DB server. I personally use SQLite for my personal projects; I don't have a valid need yet for a "bigger" SQL server for them. (I'm in Oracle for work all day long.) Just for fun, I want a self-hosted Postgres server.

sahil1330

Discord uses ScyllaDB.

Shayan

They also use Postgres for core relational data.

Diogo Klein

This post smells quite a lot like ChatGPT.

özkan pakdil

I agree that having one PostgreSQL instance is a very nice starting point, but I still wonder: is there a comparison/benchmark with numbers for full-text search in Lucene vs PG, and the same for Redis and Kafka?
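
For anyone wanting to run that comparison themselves, the Postgres side of a Lucene-vs-Postgres full-text test looks roughly like this (table and index names are illustrative):

```sql
-- A generated tsvector column kept in sync automatically,
-- plus a GIN index to make @@ queries fast.
CREATE TABLE docs (
  id   bigserial PRIMARY KEY,
  body text,
  tsv  tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
);
CREATE INDEX docs_tsv_idx ON docs USING gin (tsv);

-- Query it:
SELECT id
FROM docs
WHERE tsv @@ plainto_tsquery('english', 'postgres queue');
```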

Alex Colls Outumuro

I couldn't agree more! Great post! 👏

Luce5in3

If I were to do a project that I loved, I think I'd try to use it. Nice article.

Chigozie Okali

Nice one. Such a free, wonderful, extensible, high-performance database. The extensions (including full-text search, geospatial and vector support) are truly out of this world.
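
Of the capabilities mentioned, full-text search is built in; geospatial and vector support come from extensions that must be installed on the host first (PostGIS ships separately, and pgvector is outside core):

```sql
CREATE EXTENSION IF NOT EXISTS postgis;  -- geospatial types and queries
CREATE EXTENSION IF NOT EXISTS vector;   -- pgvector, for embeddings
```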

Cali LaFollett

@shayy GREAT, well-laid-out article explaining WHY you really don't need much else.

I really like SQL Server, but Postgres leaves it in the dust for features and is my go-to DB now.