
Look, I need to show you something that might hurt your feelings:
CREATE TABLE MongoDB (
    _id  UUID PRIMARY KEY,
    data JSONB
);
JSONB is well known, but it is very slow compared to doing aggregations in MongoDB. For simple use cases, sure, you don't need that performance, but by simple I mean your personal blog. In 12 years I have never encountered an enterprise app where JSONB was a good choice in the long run.
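For reference, the kind of aggregation being compared would look roughly like this on the Postgres side (a sketch only; the orders table and its keys are made up for illustration):

-- Group-and-count over a key inside the JSONB column (hypothetical table)
SELECT data->>'status' AS status, count(*) AS orders
FROM orders
GROUP BY data->>'status';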
I agree. I tried this with a GIN index to see if it really is faster than Mongo.
It wasn't.
Writes were extremely slow.
Reads and updates were keeping up with the help of indexes.
Git link - github.com/faeiiz/Jsonb-Postgresql
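For anyone trying to reproduce this, the GIN index in question is created roughly like so (a sketch; table and column names are assumptions, not taken from the linked repo):

-- Generic GIN index over the whole document (supports @>, ?, ?|, ?& operators)
CREATE INDEX idx_docs_data ON docs USING GIN (data);

-- Smaller, containment-only variant
CREATE INDEX idx_docs_data_path ON docs USING GIN (data jsonb_path_ops);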
The only time I have used JSONB was when I was capturing update logs in a log table in Postgres, storing the data the user had changed. Two columns -> old_data and new_data (the updated values).
Didn't want to store the whole object.
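A minimal sketch of that kind of change-log table (names here are illustrative, not the commenter's actual schema):

-- Store only the before/after snapshots of the changed fields as JSONB
CREATE TABLE user_update_log (
    id         BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id    BIGINT NOT NULL,
    changed_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    old_data   JSONB,  -- previous values of the changed fields
    new_data   JSONB   -- updated values of the changed fields
);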
That sounds like a good use case
Thx, this is the first time I'm hearing about JSONB. Very cool.
haha thank you :)
pretty epic seeing someone call out most nosql hype like this - ever find yourself picking tech just to avoid migrations, or is it usually about performance?
Sometimes avoiding migrations is a nice side effect, but it’s mostly about getting solid performance for the use case.
I’ve used JSONB that way for tons of event logs and custom app configs - it honestly feels like cheating sometimes. What’s the most complex thing you ever modeled in JSONB?
Love that use case. I once modeled a dynamic form builder with nested rules and it handled it beautifully.
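A form-builder document like that could be stored and queried roughly like this (a hypothetical sketch, not the actual schema mentioned above):

-- Each form definition is one JSONB document with nested fields and rules
CREATE TABLE forms (
    id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    definition JSONB NOT NULL
);

-- Find forms that contain a required email field anywhere in the fields array
SELECT id
FROM forms
WHERE definition @> '{"fields": [{"type": "email", "required": true}]}';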
While JSONB is great, just from the examples I can see the column type being overused.
Create a preferences table with an id and the name of the preference, and have a pivot table with preference ids and user ids.
Product catalogs and form submissions call for a similar solution.
I see this more as a shortcut to avoid data normalization than as splitting document-structured data from relational data.
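The normalized alternative described above looks roughly like this (a sketch; table and column names are illustrative, and a users table is assumed):

-- Preferences as rows, linked to users through a pivot table
CREATE TABLE preferences (
    id   BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);

CREATE TABLE user_preferences (
    user_id       BIGINT NOT NULL REFERENCES users (id),
    preference_id BIGINT NOT NULL REFERENCES preferences (id),
    value         TEXT,
    PRIMARY KEY (user_id, preference_id)
);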
While they are pushing you to their Atlas solution, there is still the free community server. No need to pay, just a willingness to set it up.
From the post you made I see you had a good experience with JSONB, and that is great. The problem I have is that the information you provide is skewed by that experience. It is a trap I have fallen for many times, so I'm as fallible as you. Over time I learned that all tools have their good and bad sides, and we have to use them for their good sides.
That is new to me! Really cool stuff! thanks for sharing!
Sure thing!
Brilliant deep dive! This perfectly showcases how versatile and powerful Postgres + JSONB can be for modern app needs. A great reminder that sometimes one optimised tool can replace a stack of complexity. Thanks for sharing!
This is a really great article. A gem in the pile of AI crap usually posted on DEV.to nowadays.
And I agree that JSONB brings the best of both worlds. Moved from MongoDB two years ago using exactly the same strategy. We also combine this with static JSON files stored in bucket storage.
Another big plus is full-text search. This cannot be achieved in MongoDB but is easily done with JSONB.
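A rough sketch of what that full-text setup can look like over a JSONB column (table and field names are assumptions):

-- Expression GIN index for full-text search over selected JSONB fields
CREATE INDEX idx_docs_fts ON docs USING GIN (
    to_tsvector('english', coalesce(data->>'title', '') || ' ' || coalesce(data->>'body', ''))
);

-- The query must use the same expression so the index can be picked up
SELECT id
FROM docs
WHERE to_tsvector('english', coalesce(data->>'title', '') || ' ' || coalesce(data->>'body', ''))
      @@ plainto_tsquery('english', 'jsonb migration');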
Click-baity title with no real heavy aggregation examples; NoSQL DBs shine best with heavy and complex searches. Also, as others have stated, what you are proposing here is a trap that leads to no data normalization. In short, it might seem attractive at the start and be a nightmare at the end.
This approach works only on very small databases because indexing this structure is more challenging and limited compared to document databases like MongoDB. Using UUIDv4 for IDs can be harmful (UUID PRIMARY KEY DEFAULT gen_random_uuid()), as these IDs are scattered throughout the table, which can negatively impact performance, especially with shared buffer caches and B-Trees.
But it gets worse with a single JSONB column. I've written a series on the pain points people experience when using PostgreSQL as a document database: the lack of Index Only Scans, the lack of range or sort coverage with GIN indexes, the read and write amplification: dev.to/mongodb/no-hot-updates-on-j...
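One way to sidestep the random-UUID scattering mentioned above is a sequential key (a sketch; whether it fits depends on the workload and on whether IDs may need to be non-guessable):

-- Sequential primary key keeps new rows on the right-hand side of the B-tree
CREATE TABLE events (
    id   BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    data JSONB NOT NULL
);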
This reminds me of back in the day when XML support was added to MSSQL: suddenly we could store SOAP responses directly in our columns, and it was magic. Looking back, it was bloated and needed wizardry to parse elements. In fact I see a circle: SOAP+XML, REST+JSON, gRPC+Protobuf, and now JSON-RPC+JSON.
MongoDB can handle 65,536 concurrent connections by default; PostgreSQL 100, and if you increase it, it affects performance.
Sharding is hard to implement in PostgreSQL, but in MongoDB it's super easy.
“By default”??? Changing the default number of connections is pretty darned trivial.
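For reference, raising the limit is a one-liner plus a restart; the per-connection memory cost is the real constraint, which is why a pooler such as PgBouncer is the usual answer at high connection counts:

-- Raise the connection limit (takes effect after a server restart)
ALTER SYSTEM SET max_connections = 500;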
Absolutely brilliant breakdown.
As a developer who's worked extensively with both MongoDB and Postgres, I couldn't agree more — jsonb is criminally underrated. The combination of SQL power, schema flexibility, and indexing makes it a true Swiss army knife, especially for event logs, dynamic metadata, and hybrid schemas.
Also admire how you structured the migration strategy — I’ve done something similar for moving audit logs and form builders into Postgres, and performance actually improved once we tuned GIN indexes correctly.
I think PostgreSQL isn't just relational anymore — it's relational with superpowers. ✨
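One common example of that kind of GIN tuning is swapping a whole-document index for a targeted expression index on a hot key (a sketch; the events table and the type key are made up):

-- B-tree expression index on a single frequently filtered key
CREATE INDEX idx_events_type ON events ((data->>'type'));

SELECT count(*) FROM events WHERE data->>'type' = 'page_view';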
Sounds great. SQL & NoSQL together. 👌🏼👌🏼
I think JSONB works well for simple use cases like user or tenant settings that are mostly static - data that rarely changes and doesn't need aggregation or complex queries.
Great post. Have you benchmarked query performance between your Postgres JSONB model and the previous Mongo setup? Would love to know if Postgres held up under load.
Then you slap yourself in the face and go back to using a proper schema and relations in your relational DB. :)
I have been seeing JSONB all the time and took it for just a JSON collector type. But you gave me beautiful insight into its value. Thank you very much.
Been doing this for over 5 years. Really handy and gets the job done.