Working with DDD lately got me thinking about how it performs in large-scale systems.
The many tutorials and articles I've watched and read make it look fun and promising for small projects.
I have three questions, each about a different aspect of DDD.
Let's take the classic eCommerce example, say Amazon.
Data Duplication
The core domain would be Products (for this example), and Warehouse, Sales, Orders, etc. would each keep a copy of Product in their own bounded context.
I read somewhere that Amazon has ~600 million products on their site (probably not counting soft-deleted ones).
Say every context uses the same Product structure, costing 12 bytes per product.
600 million products * 12 bytes = 7.2 GB (7,200,000,000 bytes) for each service.
And that's products alone; users, manufacturers, and so on are surely duplicated as well.
Do large-scale applications simply duplicate data from Created-type events?
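
To make the question concrete, here's a rough sketch of what I assume that duplication looks like: the Warehouse context keeps only a slim projection of Product, built from the catalog's Created/Updated events. All names here are hypothetical, not anyone's real schema.

```typescript
// Warehouse context keeping its own slim copy of Product,
// built purely from ProductCreated/ProductUpdated events (hypothetical names).

interface ProductCreated {
  type: "ProductCreated";
  productId: string;
  sku: string;
  weightGrams: number; // the Warehouse context only copies the fields it needs
}

interface ProductUpdated {
  type: "ProductUpdated";
  productId: string;
  weightGrams: number;
}

type ProductEvent = ProductCreated | ProductUpdated;

// Local projection: not the full catalog record, just what this context cares about.
const warehouseProducts = new Map<string, { sku: string; weightGrams: number }>();

function onProductEvent(event: ProductEvent): void {
  switch (event.type) {
    case "ProductCreated":
      warehouseProducts.set(event.productId, {
        sku: event.sku,
        weightGrams: event.weightGrams,
      });
      break;
    case "ProductUpdated": {
      const existing = warehouseProducts.get(event.productId);
      if (existing) existing.weightGrams = event.weightGrams;
      break;
    }
  }
}
```

Even if each context only copies the fields it needs, multiplied across hundreds of millions of products and several contexts, that's still a lot of duplicated bytes.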
Domain Event Bombardment
More than 1.6 million packages are shipped each day.
That makes you wonder how Amazon handles eventual consistency with non-stop events coming from every direction.
Assuming they use Outbox/Inbox-like patterns, how do they manage to handle all events on time and avoid "collecting event-handling debt"?
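
For reference, this is the kind of Outbox write path I have in mind; a minimal sketch assuming a PostgreSQL outbox table and node-postgres (table and function names are my own placeholders), where the domain change and the event row are committed in one local transaction and a separate relay publishes the rows later:

```typescript
import { Pool } from "pg"; // assuming PostgreSQL via node-postgres

const pool = new Pool();

async function placeOrder(orderId: string, productId: string, quantity: number): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    // 1. Domain state change
    await client.query(
      "INSERT INTO orders (id, product_id, quantity, status) VALUES ($1, $2, $3, 'PLACED')",
      [orderId, productId, quantity]
    );

    // 2. Event recorded in the same transaction -> no lost or phantom events
    await client.query(
      "INSERT INTO outbox (event_type, payload) VALUES ($1, $2)",
      ["OrderPlaced", JSON.stringify({ orderId, productId, quantity })]
    );

    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```

Every event that goes into a table like this also has to come back out and be handled downstream, which is exactly what makes me wonder how the volume stays manageable.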
Transactional Overhead
Basically what the title says: how do they avoid transactional overhead? For example, when consuming pending events, handling orders, or any other transactional behavior.
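
To show where I see the overhead, here's a rough sketch of the batched relay I'd write in a small project, assuming a Postgres outbox and SKIP LOCKED (again, names and the batch size are my own placeholders). Each batch is its own transaction plus broker round trips:

```typescript
import { Pool } from "pg";

const pool = new Pool();

async function relayOutboxBatch(batchSize = 500): Promise<number> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    // Claim a batch of unpublished events, skipping rows locked by other relay workers.
    const { rows } = await client.query(
      `SELECT id, event_type, payload FROM outbox
       WHERE published_at IS NULL
       ORDER BY id
       LIMIT $1
       FOR UPDATE SKIP LOCKED`,
      [batchSize]
    );

    for (const row of rows) {
      await publishToBroker(row.event_type, row.payload); // hypothetical broker call
    }

    // Mark the whole batch in one statement instead of one transaction per event.
    await client.query(
      "UPDATE outbox SET published_at = now() WHERE id = ANY($1::bigint[])",
      [rows.map((r) => r.id)]
    );

    await client.query("COMMIT");
    return rows.length;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}

async function publishToBroker(eventType: string, payload: unknown): Promise<void> {
  // Stand-in for Kafka/SNS/whatever broker is actually used.
  console.log("publish", eventType, payload);
}
```

Do large systems just run more of these workers in parallel, or do they do something architecturally different to keep the per-event transactional cost down?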
I'm sure part of the answer is more storage, more cores, more load balancers, and other hardware/server solutions, but I'm also sure there are technological and architectural solutions, and I really do wonder what they are, since I don't see them in small projects using DDD.