I currently have a directory of files that looks something like this:

abcd.txt
abcd_.txt
qrst.txt
qrst_.txt
wxyz.txt
wxyz_.txt

In theory, every line in abcd_.txt should be contained in abcd.txt, every line in qrst_.txt should be contained in qrst.txt, and so on. While I have no problem comparing two files individually to test for this, I'm trying to find a more efficient way to do it for the entire directory.

In a case like this, if I had a lot of pairs of files, but I didn't know in advance what string of letters they'd each start with, is there a way to loop through and process each set of two related files at a time?

2 Answers

From the list of names, pick out the files matching *_.txt. From each such name, remove the _.txt suffix and append .txt to get its partner, then compare the two files:

for f1 in *_.txt; do
    f2="${f1%_.txt}.txt"
    # print the lines of "$f1" that are missing from "$f2"
    # (process substitution requires bash; comm needs sorted input)
    comm -13 <(sort "$f2") <(sort "$f1")
done

If you already know that for every file file.txt there is a matching file_.txt, you can use the trailing _ to tell the two apart.

Code like this should work:

# collect the names without a trailing _ before the extension
# (assumes filenames contain no spaces or glob characters)
files=( $(ls *.txt | grep -v '_\.txt$') )
for f1 in "${files[@]}"
do
    f2="${f1%.txt}_.txt"    # replace the .txt suffix with _.txt
    diff "$f1" "$f2"
done
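Since the goal is line containment rather than an exact diff, a small sketch along the same lines that reports a pass/fail per pair. It assumes whole-line, order-insensitive matching, and uses grep -Fxvf to find lines of the _ file that never occur in its partner (the function name check_pairs is just illustrative):

```shell
# check_pairs: for each NAME_.txt in the current directory, print whether
# every one of its lines also appears (as a whole line) in NAME.txt.
check_pairs() {
    local f1 f2
    for f1 in *_.txt; do
        f2="${f1%_.txt}.txt"
        # -F fixed strings, -x whole-line match, -v invert,
        # -f read the "allowed" lines from "$f2"
        if grep -Fxvf "$f2" "$f1" > /dev/null; then
            echo "$f1: has lines missing from $f2"
        else
            echo "$f1: OK"
        fi
    done
}
```

Run check_pairs from inside the directory; drop the > /dev/null to see the offending lines themselves.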
