Good, I am glad to see you are approaching the engineering trade-offs in the right way.
Within 10 digits, you have plenty of headroom. Two thousand calls per day for a thousand days comes to about 2 million serial numbers. Not even close to billions. So we have room to allocate bits for other things, such as a timestamp or a server identity.
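To put numbers on that headroom, here is the arithmetic as a sketch (the 2,000 calls/day and 1,000-day figures come from the discussion above; the 10-digit budget is decimal):

```python
import math

calls_per_day = 2_000
days = 1_000
total_serials = calls_per_day * days            # 2,000,000 serials

# Bits needed just for the sequence numbers.
seq_bits = math.ceil(math.log2(total_serials))  # 21 bits

# A 10-decimal-digit serial holds just under 10 billion values,
# so the headroom factor is enormous.
capacity = 10**10
headroom = capacity // total_serials            # ~5000x spare capacity

print(total_serials, seq_bits, headroom)
```

Twenty-one bits of sequence against a ten-digit budget leaves ample room to carve out fields for a timestamp or host identity.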
Your earlier approach of using a database to hand out unique serials was a good one. We can adapt it to work here.
First, hold an election to decide which host will be leader. The simplest approach is oldest-man-standing: the winner is the host with the oldest timestamp of joining the service, and it remains leader as long as we keep seeing current heartbeat timestamps from it. An alternative would be to use ZooKeeper, or preferably the Raft algorithm for distributed consensus.
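A minimal sketch of the oldest-member rule (the `Host` record, `elect_leader` name, and the five-second staleness threshold are my own inventions; a real deployment would lean on ZooKeeper or Raft as noted):

```python
import time
from dataclasses import dataclass

HEARTBEAT_TIMEOUT = 5.0  # seconds; hypothetical staleness threshold

@dataclass
class Host:
    name: str
    joined_at: float       # timestamp when the host joined the service
    last_heartbeat: float  # most recent heartbeat we saw from it

def elect_leader(hosts, now=None):
    """Oldest member with a fresh heartbeat wins; None if nobody is live."""
    now = time.time() if now is None else now
    live = [h for h in hosts if now - h.last_heartbeat < HEARTBEAT_TIMEOUT]
    if not live:
        return None
    return min(live, key=lambda h: h.joined_at)
```

Note that the oldest member by join time wins only while its heartbeats stay fresh; a stale host is simply excluded and the next-oldest takes over.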
OK, we have a leader. The leader can produce timestamp serial numbers for its own consumption. Followers request blocks of unique serials from the leader; it is convenient to request perhaps 100 or 128 serials at a time. A follower always has a partial block that it is working through for current requests, and should also have a spare block, not yet touched, that can be pressed into service at any moment. When that happens, the follower immediately asks the leader for a fresh block of serials. The block size and the number of calls per second together determine two things: how much time a Raft distributed-consensus decision can take before it must complete successfully, and how closely the sort order will reflect true timestamp order.
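The follower-side protocol can be sketched like this (the `Leader`/`Follower` class names and the in-process counter are assumptions; in production the grant would go through Raft, and the spare-block refill would happen asynchronously rather than inline):

```python
BLOCK_SIZE = 128

class Leader:
    """Hands out non-overlapping blocks of serials.
    Block b covers serials [b*BLOCK_SIZE, (b+1)*BLOCK_SIZE)."""
    def __init__(self):
        self.next_block = 0

    def grant_block(self):
        block = self.next_block
        self.next_block += 1
        return block

class Follower:
    def __init__(self, leader):
        self.leader = leader
        self.block = leader.grant_block()   # block currently being consumed
        self.spare = leader.grant_block()   # untouched spare, ready at any moment
        self.increment = 0

    def next_serial(self):
        if self.increment == BLOCK_SIZE:
            # Current block exhausted: press the spare into service
            # and immediately request a fresh spare from the leader.
            self.block = self.spare
            self.spare = self.leader.grant_block()
            self.increment = 0
        serial = self.block * BLOCK_SIZE + self.increment
        self.increment += 1
        return serial
```

Because the spare is already in hand, the follower never blocks on the leader at the moment of exhaustion; the leader only has to complete the replacement grant sometime before the new block runs out, which is the time budget the consensus round has to fit inside.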
If you're keen to blow a few bits on the individual host identities, it's probably best to make those the low-order bits. Then we have a "mostly" chronological sort order. It's certainly not a causal order, since the only thing really sorted about it is the order in which blocks of serials were handed out. If that happens in blocks of 128, then masking out the low seven bits reveals the order in which those blocks were handed out. And we get a partial order if we interpret serials as (block_num, increment) tuples, with increments being less than 128.
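Decoding a serial into that tuple is just a shift and a mask. A sketch, assuming the 128-per-block layout above with no host-identity bits mixed in (the function names are mine):

```python
BLOCK_BITS = 7  # 2**7 == 128 serials per block

def decompose(serial):
    """Split a serial into its (block_num, increment) tuple."""
    return serial >> BLOCK_BITS, serial & ((1 << BLOCK_BITS) - 1)

def definitely_precedes(a, b):
    """Partial order: a is known to precede b only when a's block
    was handed out before b's; within-block order across hosts,
    or same-block comparisons, tell us nothing causal."""
    return decompose(a)[0] < decompose(b)[0]
```

For example, `decompose(300)` gives `(2, 44)`: serial 300 came from the third block handed out, at offset 44 within it.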
BTW, I assume that 50k monthly transactions are not arriving "one per minute", but may arrive in clumps, for example in a market-close or end-of-quarter burst. Put another way, I did not read "uniform Poisson arrivals" into the OP's requirements.