How to write a bash script that takes multiple files?

I am new here. I am working on writing a bash script that takes multiple files as input and displays the top ‘n’ most frequently occurring words, in descending order, for each file.

I figured out how to count the frequency of words for one file, but I cannot figure out how to handle multiple files and process them in parallel.

    sed -e 's/[^[:alpha:]]/ /g' testfile.txt | tr '\n' ' ' | tr -s ' ' | tr ' ' '\n' | tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | nl

This works fine for one file, but I want to write a bash script that I can run like the following:

    $ countWord test1.txt test2.txt test3.txt

(countWord here is my bash script that counts the frequencies.)
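
From what I have read, the file names should arrive inside the script as the positional parameters ($1, $2, ... or all of them at once as "$@"), which I could confirm with a tiny test script like this (untested sketch, just to check my understanding):

    #!/bin/bash
    # quick check: how do the command-line arguments show up inside the script?
    echo "number of files given: $#"
    for f in "$@"; do            # "$@" keeps each file name as a separate word
        echo "got file: $f"
    done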

I want it to take those files as input and, for each file, show me something like:

    ===(1 51 33 test1.txt)===    # where 1: number of lines, 51: number of words, 33: number of characters
    38 them
    29 these
    17 to
    12 who
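
My rough idea so far is to wrap the one-file pipeline in a loop over "$@" and print a header built with wc, roughly like the sketch below (untested; n=10, the use of head instead of nl, and the exact header format are just my guesses at matching the output above, and I still have no idea how to process the files in parallel):

    #!/bin/bash
    # countWord -- untested sketch of what I am aiming for
    n=10                                        # how many top words to show; not sure yet how best to pass this in

    for file in "$@"; do                        # one pass per file given on the command line
        # header: number of lines, words and characters, like the format above
        echo "===($(wc -l < "$file") $(wc -w < "$file") $(wc -m < "$file") $file)==="

        sed -e 's/[^[:alpha:]]/ /g' "$file" |   # replace every non-letter with a space
            tr '\n' ' ' | tr -s ' ' |           # join lines and squeeze repeated spaces
            tr ' ' '\n' |                       # one word per line
            tr 'A-Z' 'a-z' |                    # lowercase everything
            sort | uniq -c | sort -nr |         # count each word, most frequent first
            head -n "$n"                        # keep only the top n words
    done

If this general shape is even right, I am hoping the parallel part can be bolted on afterwards, but I have not got that far.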

 

Any help in the right direction is greatly appreciated. :)
