
haha!
When NoSQL became popular, many thought NoSQL was superior to SQL. While there are scenarios where that can be true, it's only for the most trivial use cases or at extreme scale, where performance has a much higher priority than functionality. The latter is probably why many thought NoSQL was superior in general, but only very few ever need the kind of distributed performance some NoSQL databases can offer. SQL databases like Postgres offer a shitload of extremely useful functionality and can scale too, just somewhat less than specialized NoSQL databases, which buy that extra scale at the cost of extremely limited functionality.
been loving this energy tbh, made me rethink all those times i jumped straight to the fancy stacks instead of just trusting one thing that actually works. you think sticking to boring tech too long ever backfires or nah?
Agree with this so much. I run everything from stateful AI flows to realtime dashboards with just Postgres - curious if anyone actually hit a limit that Postgres couldn't handle?
Finally something good on dev.to, even if it has an OP product plug.
Very well written!
I use Postgres in enterprise environments and MySQL in private environments (considering moving).
I do believe that Postgres events will still require a separate pub/sub when horizontally scaling.
Redis works great for caching. I have not tried the Postgres solution you posted, so I will definitely give it a try; that being said, I think there are more libs that support Redis out of the box.
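For anyone who hasn't used it, the Postgres pub/sub being referred to is LISTEN/NOTIFY. A minimal sketch, with the channel name and payload invented for illustration:

```sql
-- In one session (e.g. a worker or websocket gateway): subscribe to a channel.
LISTEN feedback_events;

-- In another session (e.g. application code or a trigger): publish a message.
NOTIFY feedback_events, '{"type": "comment_created", "id": 42}';

-- Equivalent function form, convenient inside PL/pgSQL triggers.
SELECT pg_notify('feedback_events', '{"type": "comment_created", "id": 42}');
```

The caveat above is real, though: notifications only reach sessions connected to the same primary, so once you scale out horizontally you do need an extra layer to fan messages out across nodes.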
Love this. Can’t agree more.
I completely agree with your sentiment, but I think you're overestimating the cost and complexity of Redis or Valkey: You can install it for free on a $5 VPS in an hour (or 5 minutes if you just use the default configuration) and it works pretty much as expected out of the box. It gives you both a key-value cache and a pubsub queue system.
So doing everything in the DB is absolutely viable and probably the best approach for an MVP, but I've never felt like Redis was a burden.
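For comparison, the Postgres-side answer to the key-value half of that is usually just an UNLOGGED table. A rough sketch, with table and column names of my own choosing rather than anything from the article:

```sql
-- UNLOGGED skips WAL writes, trading durability for speed; acceptable for a cache.
CREATE UNLOGGED TABLE cache (
  key        text PRIMARY KEY,
  value      jsonb NOT NULL,
  expires_at timestamptz NOT NULL
);

-- Upsert an entry with a TTL.
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42:profile', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Read, ignoring expired rows; a periodic DELETE handles cleanup.
SELECT value FROM cache WHERE key = 'user:42:profile' AND expires_at > now();
```

Whether that beats a $5 Redis box is exactly the trade-off being debated here; it's one less moving part, not necessarily faster.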
nice work
Let me give it a try.
*Great Read! But How Do You Handle CI/CD With This Approach?*
@shayy Your post about using Postgres for everything really resonated with me. The UserJot example proves this works in production, but I'm curious about the operational side that wasn't covered.
When Postgres is handling your queues, search indexes, real-time notifications AND core data, how do you manage deployments safely? A single schema migration could impact job processing, search performance, and real-time features all at once. Do you use tools like Flyway for migrations, or have you found simpler approaches? And how do you test these interdependent features - especially when simulating production load across all the different "services" within Postgres?
I suspect managing CI/CD for one well-configured Postgres instance might actually be simpler than coordinating deployments across Redis + RabbitMQ + Elasticsearch + your main database. Would love to hear your real-world experience with testing and deployment strategies!
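Not the author, but one habit that helps when the queue, search, and realtime pieces all live in one database is keeping migrations additive and non-blocking. A hypothetical example (table and index names invented):

```sql
-- Build the index without the lock that would stall job processing.
-- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
-- so the migration tool needs to be told not to wrap this step.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_jobs_status_created
  ON jobs (status, created_at);

-- Prefer additive column changes; backfill and tighten constraints in a later step.
ALTER TABLE feedback_posts ADD COLUMN IF NOT EXISTS search_rank integer;
```

That doesn't answer the load-testing question, but it does shrink the blast radius of any single deploy.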
Thank you for this!
What about storing files (large binary data) in Postgres? This is my current use case for MongoDB - storing user uploads. Can this be substituted with Postgres as well?
Yes, it absolutely can. I built a document management system using Postgres and have stored binaries in Postgres for other applications too. It works very well. I even implemented binary chunks to allow me to store very large binaries.
Try and experiment with the bytea (byte array) data type for your file storage; it can store up to 1 GB in a single column.
postgresql.org/docs/7.1/jdbc-lo.html
They are called BLOBs, large binary objects stored in DB tables. I'm not sure this is the best way to do it, but it's possible
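To make the bytea suggestion above concrete, here is a minimal sketch; the schema is invented, and note that the linked page describes the separate large-object API, which is a different mechanism from a plain bytea column:

```sql
-- One bytea column per upload; a single bytea value tops out around 1 GB.
CREATE TABLE uploads (
  id         bigserial PRIMARY KEY,
  filename   text NOT NULL,
  mime_type  text NOT NULL,
  data       bytea NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Application code would bind the file contents as a parameter;
-- in plain SQL a small payload can be written as a hex literal ('Hello').
INSERT INTO uploads (filename, mime_type, data)
VALUES ('hello.txt', 'text/plain', '\x48656c6c6f');
```

The chunking approach mentioned above is typically just a second table of (upload_id, chunk_no, data) rows so that no single value approaches the bytea limit.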
I would not put all the data in a single database. I go for task-specific schemas.
Specialized database systems have benefits beyond just doing the basic job; scaling isn't the only reason to use them.
If you only need the basic job, Postgres can be the solution.
I would say that if you want to divide your data, you should do it by domain. So you can have multiple Postgres instances/schemas organized by domain. If you do that, all the points in the article are still valid except for the pub/sub part.
I have the feeling that with this approach, the complexity is reduced so much that your performance will be OK even with a monolithic approach for a long time, though.
I would not bother with domain separation in most cases because it will not give you the performance boost or scaling possibilities you might think it does.
I would rather prefix table names with the domain name to make it easier to scan through the table overview and to spot domain-crossing foreign keys.
I would do domain separation when a high level of data security is needed.
Splitting it up by task makes the schemas single-purpose. The full-text schema is going to require more data because of the token splitting. The queue schema is going to be very scalable because the number of rows is going to fluctuate.
Because the schemas are single-purpose, it will be easier to draw conclusions when things go wrong.
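A rough sketch of the schema-per-task idea being discussed, with all names invented for illustration:

```sql
-- One database, separate schemas per concern instead of separate services.
CREATE SCHEMA IF NOT EXISTS queue;
CREATE SCHEMA IF NOT EXISTS search;

CREATE TABLE queue.jobs (
  id         bigserial PRIMARY KEY,
  payload    jsonb NOT NULL,
  status     text NOT NULL DEFAULT 'pending',
  created_at timestamptz NOT NULL DEFAULT now()
);

-- A worker claims one pending job without blocking other workers.
SELECT id, payload
FROM queue.jobs
WHERE status = 'pending'
ORDER BY created_at
FOR UPDATE SKIP LOCKED
LIMIT 1;
```

Per-schema GRANTs also cover the data-security case mentioned above, since a role's access can be restricted to a single schema.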
I start with SQLite3, then move up to Postgres. This makes it even easier than having to start with a DB server. I personally use SQLite for my personal projects; I don't have a valid need yet for a "bigger" SQL server for them. (I'm in Oracle for work all day long.) Just for fun, I want a self-hosted Postgres server.
Discord uses ScyllaDB.
They also use Postgres for core relational data.
This post smells quite a lot like ChatGPT.
I agree that having one PostgreSQL instance is a very nice starting point, but I still wonder: is there a comparison/benchmark with numbers for full-text search in Lucene vs. PG, and the same for Redis and Kafka?
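No numbers to offer here, but for anyone who wants to run that benchmark themselves, the Postgres side of a Lucene comparison usually looks roughly like this (table and column names made up):

```sql
-- Keep a tsvector in sync via a generated column and index it with GIN.
CREATE TABLE posts (
  id     bigserial PRIMARY KEY,
  title  text NOT NULL,
  body   text NOT NULL,
  search tsvector GENERATED ALWAYS AS (
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
  ) STORED
);

CREATE INDEX posts_search_idx ON posts USING GIN (search);

-- Ranked query; websearch_to_tsquery accepts Google-style input.
SELECT id, title,
       ts_rank(search, websearch_to_tsquery('english', 'postgres queue')) AS rank
FROM posts
WHERE search @@ websearch_to_tsquery('english', 'postgres queue')
ORDER BY rank DESC
LIMIT 20;
```

How that compares to Lucene, Redis, or Kafka under real load is exactly what a benchmark would have to show; the point is only that the Postgres setup is a few statements.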
I couldn't agree more! Great post! 👏
If I were to do a project that I loved, I think I'd try to use it. Nice article.
Nice one. Such a free, wonderful and extensible, high-performance database. The extensions (including full-text search, geospatial and vector support) are truly out of this world.
@shayy GREAT, well laid out article explaining WHY you really don't need much else.
I really like SQL Server, but Postgres leaves it in the dust for features and is my go-to DB now.