I have a big text file (roughly 2 GB) and a CSV file with the following fields:
rowID,pattern,other
1,abc_1z1,90
2,abc_1z2,90
3,abc_1z10,80
4,abc_3p1,77
...
What I want to do is replace content in the big file as follows: whenever a string in the big file matches a "pattern" from my CSV (second field), replace it with the corresponding "rowID" (first field).
This is what I have tried using sed, and it is extremely slow (partly because of the in-place replacement, which rewrites the whole file on every iteration). Is there a faster solution?
while IFS=, read -r f1 f2 f3; do
    sed -i "s/$f2/$f1/g" bigfile
done < map.csv
Note that map.csv contains over 500000 rows.
The map.csv above and the expected output are based on the given sample big.txt file.
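For reference, the kind of single-pass approach I imagine a faster solution would take: read the whole mapping once, build one combined regex, and rewrite the big file in a single streaming pass instead of running sed 500,000 times. This is only a sketch in Python; the file names and sample contents are illustrative, and it assumes the patterns are literal strings (not regexes).

```python
import csv
import re

# Illustrative sample inputs mirroring the question, written here so the
# sketch is self-contained and runnable.
with open("map.csv", "w") as f:
    f.write("rowID,pattern,other\n"
            "1,abc_1z1,90\n"
            "2,abc_1z2,90\n"
            "3,abc_1z10,80\n"
            "4,abc_3p1,77\n")
with open("big.txt", "w") as f:
    f.write("foo abc_1z10 bar abc_3p1 baz abc_1z1\n")

# Load pattern -> rowID once.
with open("map.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    mapping = {row[1]: row[0] for row in reader}

# One combined regex of escaped literals. Longest patterns first, so that
# abc_1z10 matches before its prefix abc_1z1.
combined = re.compile("|".join(
    re.escape(p) for p in sorted(mapping, key=len, reverse=True)))

# Single streaming pass: no in-place rewrite, no per-pattern scan.
with open("big.txt") as src, open("big_replaced.txt", "w") as dst:
    for line in src:
        dst.write(combined.sub(lambda m: mapping[m.group(0)], line))
```

With the sample data above, big_replaced.txt contains `foo 3 bar 4 baz 1`. The key point is that the cost is one scan of the 2 GB file plus one regex match per position, rather than 500,000 full rewrites of the file.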