No, it's not correct. Parallel.ForEach is meant for data parallelism. It creates as many worker tasks as there are cores on the machine, partitions the input data and uses one worker per partition. It doesn't know anything about async operations, which means your code is essentially:
Parallel.ForEach(ids, async void (int id) =>
{
await this.doStuff(id);
await this.doAnotherStuff(id);
});
On a quad-core machine, this will fire off 1M requests, four at a time, without waiting for any of them. It could easily return before any of the requests has had a chance to complete.
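If you're on .NET 6 or later, the async-aware Parallel.ForEachAsync is the simplest fix. A minimal sketch, assuming the same doStuff/doAnotherStuff methods as above:

```csharp
// Unlike Parallel.ForEach, Parallel.ForEachAsync takes an async delegate,
// awaits it, and only completes once every item has been processed.
var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = 10 };

await Parallel.ForEachAsync(ids, parallelOptions, async (id, cancellationToken) =>
{
    await this.doStuff(id);
    await this.doAnotherStuff(id);
});
```

Here MaxDegreeOfParallelism caps how many requests are in flight at once, and the outer await doesn't resume until all of them have completed.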
If you want to execute multiple requests in a controlled manner, you could use e.g. an ActionBlock with a specific degree of parallelism:
var options = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = 10,
    BoundedCapacity = 100
};
var block = new ActionBlock<int>(async id => { .... }, options);
foreach(var id in ids)
{
await block.SendAsync(id);
}
block.Complete();
await block.Completion;
The block will process up to 10 concurrent requests. If the actions are really asynchronous, or the async waits are long, we can easily use a higher DOP than the number of available cores.
Input messages are buffered, which means we could end up with 1M requests waiting in the input buffer of a slow block. To avoid this, the BoundedCapacity setting will block SendAsync if the block can't accept any more inputs.
Finally, the call to Complete() tells the block we're done, and that it should process any remaining messages in its input buffer. We wait for them to finish with await block.Completion.
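If you'd rather not take a dependency on TPL Dataflow, the same throttling can be sketched with SemaphoreSlim and Task.WhenAll, again assuming the doStuff/doAnotherStuff methods from above:

```csharp
// A SemaphoreSlim with 10 slots caps concurrency at 10 requests.
using var semaphore = new SemaphoreSlim(10);

var tasks = ids.Select(async id =>
{
    await semaphore.WaitAsync();
    try
    {
        await this.doStuff(id);
        await this.doAnotherStuff(id);
    }
    finally
    {
        semaphore.Release();
    }
}).ToList();

await Task.WhenAll(tasks);
```

Note the trade-off: unlike the bounded ActionBlock, this materializes a task per id up front, so with 1M ids you hold 1M task objects in memory even though only 10 run at a time.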
Either way, what you had were async void calls that nobody will await. Parallel.ForEach will return almost immediately, perhaps before any of the requests has had a chance to even start. The compiler will also issue a warning that ProcessApiRequest has no await and will run synchronously.