Launching Multiple Queries With Bash Script

I have this script (based on this post) to check the HTTP status of a list of domains (000, 418, 404, etc.), and I would like to improve it:

#!/bin/bash
# change path to your url list

yourlist="$(pwd)/url-list.txt"

# resume from the line number stored in advance.txt, or start at line 1
advance=$(cat advance.txt 2>/dev/null || echo 1)
while read -r LINE; do
  # print the HTTP status code, then the URL on the same line
  curl -o /dev/null --silent --head --write-out '%{http_code}' "$LINE"
  echo " $LINE"
  # record progress so an interrupted run can pick up where it left off
  advance=$((advance + 1))
  echo "$advance" > advance.txt
done < <(tail -n +"$advance" "$yourlist") >> out

infile:

amazon.com
google.com

outfile:

301 google.com
301 amazon.com

If the script is interrupted for any reason, it resumes from the last processed line.

Problem: It is very slow, since it checks the URLs one line at a time.

Tests: I have already tried other alternatives, such as fping (very limited given the size of the list), pyfunceble (freezes the PC), wget (doesn't do what I expect), GNU parallel, etc. None of them has convinced me.

Without testing: A user recommended this answer based on xargs -P (maybe: cat url-list.txt | xargs -P 3 -L 1 -n 1 bash -c 'content_of_script' or cat url-list.txt | xargs -P 3 -L 1 -n 1 ./script.sh), but I don't know how to implement it because I have no experience with xargs. I also found another similar answer in bash, but it hasn't worked for me so far.
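
If I understand the xargs suggestion correctly, the helper script (I'm calling it check-url.sh here; the name and the parallelism level of 10 are just placeholders I made up) and the xargs call would look roughly like this, though I haven't tested it:

#!/bin/bash
# check-url.sh -- untested sketch: takes one URL as its argument and
# prints "status_code url", the same output as the loop body above
code=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$1")
echo "$code $1"

# run up to 10 checks in parallel, passing one URL per invocation
xargs -P 10 -n 1 ./check-url.sh < url-list.txt >> out

As far as I can tell this would lose the resume feature, since nothing updates advance.txt here.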

Question: How can I launch multiple queries (parallel processing) with my script, so that many lines are processed at the same time? Ideally I could set the number of lines processed in parallel manually, to avoid freezing or blocking the script or the PC.
