I have the following bash script (from this post):
#!/bin/bash
while read -r LINE; do
  curl -o /dev/null --silent --head --write-out "%{http_code} $LINE\n" "$LINE"
done < infile > outfile
infile:
google.com
facebook.com
outfile:
301 google.com
302 facebook.com
Problem: It is very slow, because it checks the URLs one at a time, line by line.
Tests: I have already tried alternatives such as fping (too limited for a list this size), pyfunceble (it freezes), wget, GNU parallel, and so on, but none of them has worked well for me.
Question: How can I launch multiple queries in parallel with this script, so that many lines are processed at the same time? Ideally I would also be able to set the number of concurrent jobs manually, to avoid freezing or blocking the script or the PC.
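For example, is something along these lines what I am looking for? This is only a sketch, assuming xargs -P works the way I understand it (it caps the number of concurrent curl processes; the value 10 here is an arbitrary choice I would tune):

# Sketch: run up to 10 curl checks at once; -I{} substitutes each
# input line for {} and runs one command per line.
xargs -P 10 -I{} curl -o /dev/null --silent --head --write-out "%{http_code} {}\n" "{}" < infile > outfile

One caveat I can see with this approach: since the requests finish in arbitrary order, the lines in outfile would no longer match the order of infile.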