J_H

> instantly mark 700 of these signals

It's unclear what the verb "mark" means here. I will assume it means "run some arbitrary function".

You could use sortedcontainers -- it's pure Python, but tuned to be competitive with C implementations. Store (price, id) tuples in a SortedList, or use them as keys in a SortedDict. Either structure lets you find matching prices in O(log N) time.
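A minimal sketch of the SortedList approach; the prices, ids, and tolerance parameter here are made-up illustrations:

```python
# Requires: pip install sortedcontainers
from sortedcontainers import SortedList

signals = SortedList()  # holds (price, id) tuples, kept sorted by price
signals.update([(101.5, "a"), (101.5, "b"), (103.0, "c"), (99.25, "d")])

def matches(tick_price, tol=0.0):
    """Return ids whose limit price is within tol of the tick.

    irange() does a bisect to the window, so this is O(log N + K)
    where K is the number of matches returned.
    """
    lo = (tick_price - tol, "")            # "" sorts before any real id
    hi = (tick_price + tol, "\uffff")      # high sentinel for the id field
    return [sid for _, sid in signals.irange(lo, hi)]

print(matches(101.5))   # -> ['a', 'b']
```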

Perhaps humans tend to choose round numbers for limit prices. Then you may want a sorted dict to map price to a list of IDs.
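If prices do cluster like that, the dict-of-lists variant might look like this (example data invented):

```python
from sortedcontainers import SortedDict

by_price = SortedDict()   # price -> list of signal ids at that price

def add_signal(price, sid):
    by_price.setdefault(price, []).append(sid)

add_signal(100.0, "a")
add_signal(100.0, "b")    # a second signal at the same round price
add_signal(102.5, "c")

def ids_at(price):
    # One O(log N) lookup returns every signal parked at this price.
    return by_price.get(price, [])

print(ids_at(100.0))   # -> ['a', 'b']
```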

Your problem is a good match for an RDBMS table, with an index on the price column. I will note in passing that SQLite can use either the filesystem or memory as its backing store.
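For instance, with the stdlib sqlite3 module and an in-memory database (table and column names here are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")   # swap in a filename for durable storage
db.execute("CREATE TABLE signal (id TEXT, price REAL)")
db.execute("CREATE INDEX idx_signal_price ON signal (price)")
db.executemany("INSERT INTO signal VALUES (?, ?)",
               [("a", 101.5), ("b", 101.5), ("c", 103.0)])

def matching_ids(tick_price):
    # The index turns this into a B-tree lookup rather than a full scan.
    rows = db.execute("SELECT id FROM signal WHERE price = ? ORDER BY id",
                      (tick_price,))
    return [r[0] for r in rows]

print(matching_ids(101.5))   # -> ['a', 'b']
```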

Your problem is a good match for Kafka consumers that maintain a SortedDict (or shards of such a dict), listen for price-tick messages, and take action upon finding matches.

Suppose you have `K` shards on `K` servers. Pick some small discretization interval `intvl`. Map a price to a shard using `int(price / intvl) % K`. Now, instead of broadcasting a given tick to `K` servers, you can unicast it to the single responsible server. That way, during a burst of rapid price movements spanning more than one interval, you are likely to keep a bunch of servers busy doing useful work (rather than filtering messages that don't require an action from them).
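The price-to-shard mapping is a one-liner; `K` and `intvl` below are illustrative parameters you would tune:

```python
K = 8          # number of shard servers
INTVL = 0.25   # discretization interval (same units as price)

def shard_for(price):
    """Unicast target: which of the K servers owns this price."""
    # Note: float division; pick intvl so adjacent prices don't sit
    # right on a boundary more often than you can tolerate.
    return int(price / INTVL) % K

# Nearby prices within one interval land on the same shard...
assert shard_for(100.00) == shard_for(100.10)
# ...while a burst spanning several intervals spreads across shards.
print({shard_for(p) for p in (100.0, 100.3, 100.6, 100.9)})
```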
