Using Raku (formerly known as Perl 6)
raku -MText::CSV -e 'my @a = csv(in => "/Path/To/File"); \
my @b = @a.map(*.[1]) Z- @a.map(*.[2]); \
.put for @a Z @b;'
#OR
raku -MText::CSV -e 'my @a = csv(in => "/Path/To/File"); \
.put for @a Z (@a.map(*.[1]) Z- @a.map(*.[2]));'
Sample Input:
"Afghanistan","94.0","81.1"
"Bahamas","42.9","43.2"
"Bolivia (Plurinational State of)","86.7","31.9"
"Brazil","76.7","0.0"
Sample Output:
Afghanistan 94.0 81.1 12.9
Bahamas 42.9 43.2 -0.3
Bolivia (Plurinational State of) 86.7 31.9 54.8
Brazil 76.7 0.0 76.7
While I'm sure you can get away with parsing CSV on your own (for small projects), at some point it may make sense to use a dedicated CSV parser, to handle embedded quotes, embedded newlines, detection/changing of comma versus non-comma column separators, whitespace trimming, etc.
You might be surprised how easy it is to invoke a CSV parser at the bash command line in the Raku programming language. First load the Text::CSV module with the -M (module) command-line flag, which gives you -MText::CSV. Then tell Raku to expect code to follow with the -e command-line flag.
To explain the first example: an @a array is declared with my and takes the csv input. A @b array is declared with my and takes the difference between (zero-indexed) column 1 and column 2. This difference is computed with the Z zip metaoperator combined with the minus operator, giving Z-. Finally, the Z zip metaoperator is used to output the original @a csv rows, zipped row-by-row with the @b column of differences.
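The Z- step is easiest to see on bare lists. A minimal sketch (values taken from the sample input above), runnable with raku -e or in the Raku REPL:

```raku
# Z pairs up elements positionally; Z- subtracts each pair.
my @col1 = 94.0, 42.9;
my @col2 = 81.1, 43.2;
say @col1 Z- @col2;    # OUTPUT: (12.9 -0.3)
```

Note the results are exact: Raku parses 94.0 and 81.1 as Rats (rationals), so the subtraction has no floating-point rounding error.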
https://github.com/Tux/CSV/blob/master/doc/Text-CSV.pdf (click the Download link)
https://docs.raku.org/language/operators#index-entry-Z_(zip_metaoperator)
https://raku.org
awk '{ print $2 }' filename.csv would have shown you what awk thought was column 2. You might then decide to try awk '{ print $1 }' filename.csv, and then you would probably need to man awk and look for something to do with a field separator.
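For comparison, awk can manage a simple comma-separated file (one with no embedded commas or newlines) once you set the field separator with -F. A sketch, using a hypothetical sample.csv built from the sample input above:

```shell
# Build a small sample file (simple CSV: no embedded commas or newlines)
printf '%s\n' '"Afghanistan","94.0","81.1"' '"Bahamas","42.9","43.2"' > sample.csv

# -F, sets the field separator to a comma; gsub strips the surrounding quotes
awk -F, '{ gsub(/"/, "", $2); print $2 }' sample.csv
# prints:
# 94.0
# 42.9
```

This works only because the sample data is well-behaved; a field like "Bolivia, Plurinational State of" would break the -F, split, which is exactly why a real CSV parser is worth reaching for.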