I'm working on a script that chews on some data it sucks in from a CSV file. I've already read the data into several arrays (one for each column in the file); I now need to actually work with all of the data in sequence.
Currently, I am doing this:
# Read in the data:
declare -a DATACOL1 DATACOL2 RAWDATA
RAWDATA=($( sed '1d' /path/to/data.csv )) # Remove the header line
for line in "${RAWDATA[@]}"; do
    LINEDATA=()  # reset the per-line array on each pass
    LINE=$( echo "$line" | sed 's/,/ /g' )
    for field in $LINE; do
        LINEDATA+=("$field")
    done
    DATACOL1+=("${LINEDATA[0]}")
    DATACOL2+=("${LINEDATA[1]}")
done

# Work on the data:
for i in $( seq 0 $[${#DATACOL1[@]}-1] ); do
    # stuff and things with ${DATACOL1[i]} and ${DATACOL2[i]}
done
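To make it concrete, here is a made-up two-column input and what the arrays end up holding (hypothetical names and values, not my real data):

# data.csv (hypothetical):
#   header1,header2
#   foo,3
#   bar,5
# After the read-in loop:
#   DATACOL1=(foo bar)
#   DATACOL2=(3 5)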
My (quite possibly interrelated) questions are twofold:
1. Is there a more elegant way to work with the data later than

   for i in $( seq 0 $[${#DATACOL1[@]}-1] )

   for iterating over it? It works, but it's ugly. (The one alternative I've come up with is sketched just after this list.)

2. Is there a more elegant way to suck in the CSV data? (A rough sketch of what I'm imagining is at the end.)
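On the first question, the only alternative I've found so far is bash's C-style for loop, which I believe is available in bash 3 (a sketch, not tested against my real data):

for (( i = 0; i < ${#DATACOL1[@]}; i++ )); do
    # stuff and things with ${DATACOL1[i]} and ${DATACOL2[i]}
done

That at least avoids the seq subshell, but it doesn't feel much more elegant.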
This is on bash 3, so I do not have associative arrays.
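On the second question, what I'm vaguely imagining is something built on read with a custom IFS; a rough sketch, assuming no field ever contains an embedded comma (col1 and col2 are placeholder names):

while IFS=, read -r col1 col2; do
    DATACOL1+=("$col1")
    DATACOL2+=("$col2")
done < <( sed '1d' /path/to/data.csv )

The process substitution is there so the arrays survive the loop (a plain pipe would run the loop body in a subshell). Is that the idiomatic approach, or is there something better?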