@MarcusMiller sums up very well why you should not expect to be able to do 10000 requests/sec from a single machine. But maybe you are not asking that. Maybe you have 100 machines that can be used for this task. And then it becomes feasible.
On each of the 100 machines run something like:
# Raise the open-file limit: 1000 concurrent wgets need a lot of sockets and file handles
ulimit -n 1000000
# Shuffle the URLs and spread them round-robin over 1000 parallel wget|jq pipelines
shuf huge-file-with-urls |
parallel -j1000 --pipe --roundrobin 'wget -i - -O - | jq .'
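This assumes each of the 100 machines already has its own share of the URLs. One way to get there, assuming passwordless ssh and a hosts.txt listing the 100 machines (both are my assumptions, not part of the question), is to split the list and copy one chunk per host:

# Split into 100 chunks without breaking lines: chunk.00 .. chunk.99
split -n l/100 -d huge-file-with-urls chunk.
# Copy one chunk to each host listed in hosts.txt
i=0
while read -r host; do
  scp chunk.$(printf %02d "$i") "$host":huge-file-with-urls
  i=$((i+1))
done < hosts.txt

Each machine then runs the command above on its local copy.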
If you need to run the URLs more than once, you simply add more runs of shuf huge-file-with-urls. Or:
forever shuf huge-file-with-urls | [...]
(https://gitlab.com/ole.tange/tangetools/-/tree/master/forever)
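forever simply reruns its command over and over, so if you would rather not install tangetools, a plain shell loop is a rough equivalent (a sketch, reusing the pipeline from above):

while true; do shuf huge-file-with-urls; done |
parallel -j1000 --pipe --roundrobin 'wget -i - -O - | jq .'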
GNU Parallel can run more than 1000 jobs in parallel, but it is unlikely your computer can process that many. Even just starting 1000000 sleeps (see: https://unix.stackexchange.com/a/150686/366317) pushed my 64-core machine to the limit. Starting 1000000 simultaneous processes that actually did something would have made the server stall completely.
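If you want a feel for where your own machine gives up before pointing 1000 wgets at it, you can start a lot of processes that merely sleep, along these lines (a sketch in the spirit of the linked answer; lower the count first, and expect the machine to become sluggish):

# -j0: run as many jobs concurrently as the system allows; -N0: insert no arguments,
# so every input line just triggers another 'sleep 60'
seq 1000000 | parallel -j0 -N0 sleep 60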
A process will migrate from one core to another, so you cannot tell which core is running what. You can limit which core a process should run on with taskset, but that only makes sense in very specialized scenarios. Normally you simply want your task to be moved to a CPU core that is idle, and the kernel does a good job of that.
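For completeness, pinning with taskset looks like this (a sketch; the core numbers 0-3 are arbitrary, and normally you should just let the scheduler decide):

# Restrict this wget to CPU cores 0-3; jq still runs wherever the scheduler puts it
taskset -c 0-3 wget -i huge-file-with-urls -O - | jq .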