I had a question regarding the awk command. I want to take a bunch of files, find a single line in each, and extract those lines into a comma-separated text file so that I can import it into Excel for graphing purposes. It worries me, however, because the program I use outputs .info files, and I have heard that awk only works with text files. Is grep the best option? If so, how can I make the output comma-separated?

UPDATE:

The files output by the program end with .phy.rooting.0.rearrange.0.info

Each .info file contains a line that states: Duplications:2

These lines are where I get the information I need to extract.

This command currently works, but I was hoping for a more up-to-date one, and also possibly the challenge of changing the code for learning, if that makes sense.

The code that works is:

grep -w Duplications: *.info | grep -v Conditional > dups

However, I kind of want to see if I can write an awk command that does the same thing.
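For reference, here is one possible awk equivalent of the grep pipeline above, producing "filename,count" lines ready for Excel. This is only a sketch: the sample files a.info and b.info are invented stand-ins for the real .phy.rooting.0.rearrange.0.info outputs, and it assumes the count always follows the colon on the Duplications: line.

```shell
# Create two hypothetical sample files standing in for the real
# *.phy.rooting.0.rearrange.0.info outputs.
printf 'Duplications:2\n' > a.info
printf 'Duplications:5\n' > b.info

# -F: splits each line on ":", so $2 is the count.
# Match lines containing "Duplications:" but not "Conditional"
# (mirroring the grep -v in the original pipeline), and print
# the file name and the count separated by a comma.
awk -F: '/Duplications:/ && !/Conditional/ {
    printf "%s,%s\n", FILENAME, $2
}' *.info > dups.csv

cat dups.csv
# Given only the two sample files, this prints:
#   a.info,2
#   b.info,5
```

Because awk tracks the current input file in the built-in FILENAME variable, no second grep pass is needed, and the comma separator comes for free from printf.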
