Assuming I've interpreted it correctly, you could also do something like:
awk (POSIX):
awk -v n_col=4 '
    NF != n_col { next }            # skip rows with the wrong column count
    FILENAME != file {              # new file: restart the cell index
        file = FILENAME
        k = 0
    }
    {
        for (i = 1; i <= n_col; ++i)
            A[k++] += $i            # sum cell by cell across files
    }
    END {
        n_files = ARGC - 1
        for (i = 0; i < k; ) {
            printf "%2.3f%s", A[i] / n_files,
                ++i % n_col == 0 ? "\n" : " "
        }
    }
' data*.txt
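For example, with two hypothetical 4-column input files (names and values invented here for illustration), the script averages cell by cell:

```shell
# Hypothetical sample data: two files, 4 columns, 2 rows each
printf '1 2 3 4\n5 6 7 8\n'   > data1.txt
printf '2 4 6 8\n6 8 10 12\n' > data2.txt

awk -v n_col=4 '
    NF != n_col { next }
    FILENAME != file { file = FILENAME; k = 0 }
    { for (i = 1; i <= n_col; ++i) A[k++] += $i }
    END {
        n_files = ARGC - 1
        for (i = 0; i < k; )
            printf "%2.3f%s", A[i] / n_files, ++i % n_col == 0 ? "\n" : " "
    }
' data1.txt data2.txt
```

which prints 1.500 3.000 4.500 6.000 on the first row and 5.500 7.000 8.500 10.000 on the second.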
perl:
I am sure this can be done better, but here is a stab at it:
./script.pl <COLUMNS> data*.txt
#!/usr/bin/env perl
use strict;
use warnings;

my @data;
my $cols = $ARGV[0];        # number of columns
my $ac   = $#ARGV;          # number of data files (args minus the column count)
shift;

for (@ARGV) {
    my $k = 0;
    open my $fh, '<', $_
        or die "Cannot open '$_' - $!";
    local $/;               # slurp mode: read the whole file at once
    my $fdata = <$fh>;
    close $fh;
    $data[$k++] += $_ for split ' ', $fdata;    # sum cell by cell
}

my $i = 0;
for (@data) {
    printf "%.3f%s", $_ / $ac, ++$i % $cols ? "\t" : "\n";
}
bash: (slow)

Since you tagged the question with bash as well, here is, just for fun, a sample of the same. It is rather slow compared to perl, awk, ...; while it is doable, bash is not the best tool for the job. It uses bashisms in the form of mapfile, read -a, etc.
./script <COLUMNS> data*.txt
#!/bin/bash

declare -i res=1000                  # scale factor for fixed-point math
declare -i dec=$(( ${#res} - 1 ))    # number of decimal places (3)
declare -i n_files
declare -i n_columns
declare -a A

process()
{
    local m a
    mapfile -t m < "$1"              # one array element per line
    read -ra a <<< "${m[@]}"         # flatten all lines into one array
    for (( i = 0; i < ${#a[*]}; ++i )); do
        (( A[i] += a[i] ))
    done
    (( ++n_files ))
}

n_columns=$1
shift

for f in "$@"; do
    process "$f"
done

for (( i = 0; i < ${#A[@]}; ++i )); do
    (( (i + 1) % n_columns == 0 )) && sep=$'\n' || sep=' '
    # scale by res, then let printf shift the decimal point back with e-$dec
    printf "%3.${dec}f%s" "$(( res * A[i] / n_files ))e-$dec" "$sep"
done
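To illustrate what the mapfile/read -ra pair in process() does, here is a minimal bash sketch (the demo file is invented): mapfile reads one array element per line, and read -ra then re-splits the joined lines into one flat array, so rows end up concatenated in order:

```shell
printf '1 2\n3 4\n' > demo.txt

mapfile -t m < demo.txt       # m=("1 2" "3 4"), one element per line
read -ra a <<< "${m[@]}"      # flatten: a=(1 2 3 4)
echo "${#a[@]} values: ${a[*]}"
```

This prints "4 values: 1 2 3 4", which is why the summing loop can index cells with a single counter.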
Alternative method for printing the "float":

(( d = A[i] * res / n_files ))
printf "%3d.%0${dec}d%s" "$(( d / res ))" "$(( d % res ))" "$sep"
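A minimal sketch of that integer fixed-point trick in isolation, averaging an assumed cell sum of 7 over 3 files with res=1000:

```shell
res=1000 dec=3        # scale factor and decimal places, as in the script
sum=7 n_files=3       # hypothetical cell sum and file count

d=$(( sum * res / n_files ))                       # 2333, i.e. 2.333 scaled by 1000
printf "%d.%0${dec}d\n" "$(( d / res ))" "$(( d % res ))"
```

This prints 2.333. Note that integer division truncates rather than rounds, so 7/3 comes out as 2.333, not 2.334.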
Next sed ... nah, I believe I will just link to this: Addition with 'sed' ;)
0.75 1.25 3 3.25 in row 1, etc. Is this correct? file1:Col1.Row1 + file2:Col1.Row1 + ... + file120:Col1.Row1? 120 values. And file1:Col2.Row1 + file2:Col2.Row1 + ... + file120:Col2.Row1 ... Then file1:Col1.Row2 + file2:Col1.Row2 + ... + file120:Col1.Row2?