I have multiple files that I want to read in parallel, extracting a number from each row and computing the average across files. For a small number of files I did this with izip from the itertools module. Here is my code:
from itertools import izip
import math

g = open("MSDpara_ave_nvt.dat", 'w')
with open("sample1/err_msdCECfortran_nvt.dat", 'r') as f1, \
     open("sample2/err_msdCECfortran_nvt.dat", 'r') as f2, \
     open("sample3/err_msdCECfortran_nvt.dat", 'r') as f3, \
     open("err_msdCECfortran_nvt.dat", 'r') as f4:
    # read one line from each file per iteration
    for x, y, z, bg in izip(f1, f2, f3, f4):
        args1 = x.split()
        i1 = float(args1[0])
        msd1 = float(args1[1])
        args2 = y.split()
        i2 = float(args2[0])
        msd2 = float(args2[1])
        args3 = z.split()
        i3 = float(args3[0])
        msd3 = float(args3[1])
        args4 = bg.split()
        i4 = float(args4[0])
        msd4 = float(args4[1])
        msdave = (msd1 + msd2 + msd3 + msd4) / 4.0
        print >>g, "%e %e" % (i1, msdave)
f1.close()
f2.close()
f3.close()
f4.close()
g.close()
This code works fine, but if I want to handle 100 files simultaneously it becomes very long written this way. Is there a simpler way to do this? The fileinput module can also handle multiple files, but I don't know whether it reads them in parallel or only one after another.
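Something along these lines occurred to me, but I am not sure it is the cleanest approach. It is an untested sketch that builds a list of file objects and zips over them; I am assuming the extra samples keep the same sample<N>/err_msdCECfortran_nvt.dat naming as samples 1-3:

from itertools import izip

# sketch: build the list of paths, open them all, then average the
# second column row by row (assumed directory naming for samples > 3)
paths = ["sample%d/err_msdCECfortran_nvt.dat" % n for n in range(1, 4)]
paths.append("err_msdCECfortran_nvt.dat")

files = [open(p) for p in paths]
try:
    with open("MSDpara_ave_nvt.dat", 'w') as g:
        for rows in izip(*files):                  # one line from every file
            cols = [row.split() for row in rows]
            i = float(cols[0][0])                  # first column, taken from the first file
            msdave = sum(float(c[1]) for c in cols) / len(cols)
            print >>g, "%e %e" % (i, msdave)
finally:
    for f in files:
        f.close()

But maybe there is a more standard way to manage that many open files at once.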
Thanks.