There are several areas for improvement in the above code.
Let's review them one by one, from the bottom up:
GetXYZ
private static async Task GetXYZ(string parameter)
{
    await Task.Run(() =>
    {
        var svc = new WebApiService();
        var msg1 = svc.GeXYZ(parameter);
        if (string.IsNullOrWhiteSpace(parameter)) return;
        Console.WriteLine($"XYZ {parameter}");
    });
}
A Task can represent either asynchronous work or parallel work. Asynchronous work can be thought of as a non-blocking I/O operation, whereas parallel work can be treated as a (blocking) CPU operation. (This is oversimplified, but it works for us for now.)
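To make the distinction concrete, here is a minimal sketch (assuming a .NET 6+ top-level program; the counting loop and the delay are just stand-ins for real CPU work and real I/O):

```csharp
using System.Threading.Tasks;

// Parallel (CPU-bound) work: occupies a thread-pool thread for its whole duration.
Task cpuWork = Task.Run(() =>
{
    long sum = 0;
    for (int i = 0; i < 100_000_000; i++) sum += i; // burns a CPU core
});

// Asynchronous (I/O-bound) work: no thread is blocked while the "I/O" is pending.
Task ioWork = Task.Delay(1000); // stand-in for a network or disk operation

await Task.WhenAll(cpuWork, ioWork);
```

While `ioWork` is pending, no thread is consumed; `cpuWork` holds a thread-pool thread the entire time.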
In the above code, Task.Run tells the .NET runtime that this piece of code should run on a dedicated thread (in reality it is not quite this simple, but let me keep the model simple for now). That means the passed delegate should run on a dedicated CPU core.
But inside your delegate you are making a blocking I/O operation. So you have created a delegate and moved that code to a dedicated core, which will then block until the network driver finishes the I/O operation.
You are just wasting a lot of valuable resources. A better alternative would look like this:
private static async Task GetXYZ(string parameter)
{
    var svc = new WebApiService();
    var msg1 = await svc.GeXYZAsync(parameter);
    if (string.IsNullOrWhiteSpace(parameter)) return;
    Console.WriteLine($"XYZ {parameter}");
}
Here you are making a non-blocking I/O operation. The network driver fires off a network call asynchronously and returns immediately; when the call completes, the driver notifies the ThreadPool. (Because we are awaiting an asynchronous operation here, we are not blocking the caller thread.)
In short: the network driver can handle, let's say, 1000 concurrent network operations at the same time, while the CPU can perform only as many operations in parallel as it has cores.
RunTasks
(I guess you can find a better name for this.)
Here, with Task.WhenAll, you are already asking the .NET runtime to run the tasks concurrently. If they are asynchronous (I/O-bound) operations, the network driver takes care of running them in parallel. In the case of parallel (CPU-bound) work, the scheduler tries to place the tasks on different cores, but there is no guarantee that they will actually run in parallel; they might run only concurrently (with context switches).
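For reference, a fully asynchronous RunTasks might look like the sketch below. (I am guessing at its body from your load-test code; the three Get… methods are your own async methods, and the IDs are the sample values you already use.)

```csharp
private static async Task RunTasks()
{
    // Start all three requests without awaiting each one individually,
    // so the three network operations are in flight at the same time.
    var t1 = GetLoanByLoanId("7000002050");
    var t2 = GetEnvelopesForLoan("7000002077");
    var t3 = GetLoanDataByLoanId("7000002072");

    // Completes when all of the in-flight tasks have completed.
    await Task.WhenAll(t1, t2, t3);
}
```

The important point is that the tasks are started first and awaited together; awaiting each call on its own line would serialize the three requests instead.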
RunLoad
(I guess you can find a better name for this.)
Here, with Parallel.For, you try to create as many blocking parallel tasks as the loadSize value specifies. If you have 100 tasks for 8 CPU cores, context switches are inevitable. Again, your current implementation is highly CPU-bound even though you are trying to perform massive I/O operations.
A better alternative:
private static async Task RunLoad(int loadSize)
{
    var clients = new List<Task>();
    for (int j = 0; j < loadSize; j++)
    {
        clients.Add(RunTasks());
    }
    await Task.WhenAll(clients);
}
In this way, all of your clients' requests might run in parallel, provided your network driver supports that many outgoing requests.
This WhenAll will finish when all the operations have finished. If you want to monitor them in "real time" and react whenever any one of them finishes, then you can rewrite your method in the following way:
static async Task Main(string[] args)
{
    Console.WriteLine("Press any key to start...");
    Console.ReadKey();
    Console.WriteLine();

    await foreach (var test in RunTests(10))
    {
        //NOOP
    }

    Console.WriteLine("Press any key to exit...");
    Console.ReadKey();
}
private static async IAsyncEnumerable<Task> RunTests(int loadSize)
{
    var requests = new List<Task>();
    for (int j = 0; j < loadSize; j++)
    {
        requests.Add(GetLoanByLoanId("7000002050"));
        requests.Add(GetEnvelopesForLoan("7000002077"));
        requests.Add(GetLoanDataByLoanId("7000002072"));
    }

    while (requests.Any())
    {
        var finishedRequest = await Task.WhenAny(requests);
        requests.Remove(finishedRequest);
        yield return finishedRequest;
    }
}