
I have a problem here. I have to print a column in a text file using awk. However, the columns are not separated by spaces at all, only using a single comma. Looks something like this:

column1,column2,column3,column4,column5,column6

How would I print out 3rd column using awk?


4 Answers


Try:

awk -F',' '{print $3}' myfile.txt

Here the -F option tells awk to use , as the field separator.
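The same separator can also be set via awk's FS variable instead of the -F option; a minimal equivalent sketch (sample data piped in for illustration):

```shell
# Equivalent to -F',': set the field separator in a BEGIN block
printf 'column1,column2,column3,column4,column5,column6\n' |
awk 'BEGIN { FS = "," } { print $3 }'
```

This prints column3, just like the -F form.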


1 Comment

I have been browsing through a lot of pages and got a lot of results, and this was by far the best :) Thank you

If your only requirement is to print the third field of every line, with each field delimited by a comma, you can use cut:

cut -d, -f3 file
  • -d, sets the delimiter to a comma
  • -f3 specifies that only the third field is to be printed
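A quick check using the sample line from the question (the data is piped in here purely for illustration):

```shell
# cut splits on the comma and emits only the third field
printf 'column1,column2,column3,column4,column5,column6\n' | cut -d, -f3
```

This prints column3.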

1 Comment

This is the best answer for this question. awk comes in very handy when, let's say, I want to print [col1]:[col5] with different delimiters and different formatting

Try this awk

awk -F, '{$0=$3}1' file
column3
  • -F, divides fields by ,
  • $0=$3 replaces the whole line with field 3 only
  • 1 is an always-true pattern with no action, so the default action (print the line) runs

This could also be used:

awk -F, '{print $3}' file
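The trailing 1 idiom is worth seeing in isolation: it is a pattern that is always true, so awk applies its default action, printing the current line. A minimal illustration (sample data piped in):

```shell
# '1' alone is an always-true pattern; the default action prints $0,
# so this copies its input through unchanged
printf 'a,b,c\n' | awk '1'
```

Combined with $0=$3, the line has already been rewritten to field 3 by the time 1 prints it.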

4 Comments

As well as being shorter, this is much more difficult for someone unfamiliar with awk to understand. It would be worth adding some explanation to make this answer more useful.
+1. A little bit cryptic, but works like a Schaffhausen.
@TomFenech: I think cut -d, -f3 file is as cryptic as this one if someone is unfamiliar with cut. ;)
@TrueY granted, although one difference is that cut --help would explain everything that you needed to know, whereas awk --help wouldn't. Perhaps I should've gone for cut --delimiter=, --fields=3 file, although I have my doubts that the longer switches are portable :)

A simple, although awk-less, solution in bash:

while IFS=, read -r a a a b; do echo "$a"; done <inputfile

It works faster for small files (<100 lines) than awk, as it uses fewer resources (it avoids calling the expensive fork and execve system calls).

EDIT from Ed Morton (sorry for hijacking the answer, I don't know if there's a better way to address this):

To put to rest the myth that shell will run faster than awk for small files:

$ wc -l file
99 file

$ time while IFS=, read -r a a a b; do echo "$a"; done <file >/dev/null

real    0m0.016s
user    0m0.000s
sys     0m0.015s

$ time awk -F, '{print $3}' file >/dev/null

real    0m0.016s
user    0m0.000s
sys     0m0.015s

I expect that with a REALLY small enough file you might see the shell script run a fraction of a blink of an eye faster than the awk script, but who cares?

And if you don't believe that it's harder to write robust shell scripts than awk scripts, look at this bug in the shell script you posted:

$ cat file
a,b,-e,d
$ cut -d, -f3 file
-e
$ awk -F, '{print $3}' file
-e
$ while IFS=, read -r a a a b; do echo "$a"; done <file

$
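The empty output above comes from echo treating the value -e as an option rather than text. The printf fix suggested later in the comments can be sketched like this (the single-line pipe is an illustration, not the original answer's code):

```shell
# printf '%s\n' prints its argument literally, so a field whose value is
# "-e" is not swallowed the way echo's option parsing swallows it
printf 'a,b,-e,d\n' | while IFS=, read -r a a a b; do printf '%s\n' "$a"; done
```

This prints -e, matching the cut and awk results.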

12 Comments

While read loops are significantly slower than awk; even if they were quicker with tiny files, the speed difference would be negligible.
@Jidder: You are right! IMHO that's why it is pointless to use awk for small files.
@EdMorton: You are right. For bigger and more complex problems I definitely use awk (more often called from a bash script), but for small files and simple tasks I use pure bash. On the other hand for even more complex jobs I prefer perl for scripting.
@EdMorton: You are right again, and it seems that with echo it cannot be solved easily. In that case one can use printf "%s\n" "$a" to get rid of that. This is also a bash built-in.
@EdMorton: You are right! If someone uses any tool he/she needs to know the possible problems. So I suggest to anyone to use an adequate tool. If I write a kernel module I definitely shell not use bash. ;) But for such an itsy-bitsy task it may be enough.
