I have a bash function that curls an endpoint to get a set of links, then recursively curls each of those links for further links.
task() {
  link="$1"
  response=$(curl "$link")

  # save response data to file...
  # process response to find another set of links and save to variable x...

  if condition; then
    echo "$x" | while read -r link; do
      task "$link"
    done
  fi
}
The above code results in ~100 curl operations, which take a long time. The endpoint's response time (not the processing) is the main bottleneck.
Conditions
- We have no information about how many links there are. We can't obtain a child link without curling its parent link first.
- Each response of curl link should be atomically saved to a file.
Is it even possible to parallelize this procedure? I'm OK if the solution needs (or is more elegant with) GNU parallel or some other external tool.

Does each curl produce multiple child links (in which case they might be parallelizable), or just one (in which case they can only be sequential)? Also, if the network is the bottleneck, would parallelization actually help at all, since multiple simultaneous curls will just compete for bandwidth?

parallel or xargs -P won't work with a function (not directly, anyway - you could make it run sh -c 'task ...' with a .bashrc that defines the function), because a bash function only exists within the bash shell where it is defined. They will work with scripts that implement the same function, same as they would with any other executable. If you really want to do this with a function, you could run the task function in the background with &, but you'd have to write your own queueing code to limit the number of simultaneous tasks.
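
To make that last idea concrete, here is a minimal sketch of running the task function in the background and throttling it with wait -n (bash 4.3 or newer). The extract_links helper, the MAX_JOBS value, the responses/ directory, and start_link are assumptions for illustration, not part of the original code:

MAX_JOBS=8   # assumed cap on concurrent curls spawned by one parent link

task() {
  local link="$1" response out child x

  response=$(curl -s "$link") || return

  # Atomic save: write to a temp file, then mv it into place
  # (rename is atomic within a single filesystem).
  out="responses/$(printf '%s' "$link" | md5sum | cut -d' ' -f1)"
  printf '%s' "$response" > "$out.tmp" && mv "$out.tmp" "$out"

  # extract_links is a placeholder for whatever parses child links out of the response.
  x=$(extract_links <<<"$response")

  while read -r child; do
    [ -n "$child" ] || continue
    # Throttle: if this call already has MAX_JOBS children running, wait for one to finish.
    while [ "$(jobs -rp | wc -l)" -ge "$MAX_JOBS" ]; do
      wait -n
    done
    task "$child" &
  done <<<"$x"

  wait   # wait for this link's children before returning
}

mkdir -p responses
task "$start_link"   # start_link is assumed to hold the initial URL

Note that the limit is enforced per call, so the total number of in-flight curls can exceed MAX_JOBS when several parents spawn children at the same time; a hard global cap needs a single shared queue, as the comment above points out.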