(Sorry to spam you all with another answer.) For many situations, the elegant awk versions presented are perfect. But there is life outside one-liners -- we often need more:
- add extra code to cope with complex csv files;
- add extra steps for normalization, reformatting, processing.
In the following answer skeleton, we use a parser of CSV files. This time we are avoiding one-liners and even strictly declare the variables!
#!/usr/bin/perl
use strict;
use warnings;
use Parse::CSV;

my %dict;
my $c = Parse::CSV->new(file => 'a1.csv');

while ( my $row = $c->fetch ) {                        ## for all records
    $dict{ $row->[0] } .= join(" :: ", @$row) . "\n";  ## process and save
}

for my $k (keys %dict) {                               ## create the csv files
    open(my $fh, '>', "$k.csv") or die "can't open $k.csv: $!";
    print $fh $dict{$k};
    close $fh;
}
- The main advantage is that we can deal with more complex csv files; this time the csv input can have strings containing ",", and can include multiline fields (the csv specification is complex!):
1111,2,3
"3,3,3",a,"b, c, and d"
"a more, complex
multiline record",3,4
- to exemplify a processing step, the field separator was changed to " :: ";
- to exemplify extra optimization steps: as we used a dict cache, this script runs 100 times faster than my other solution.
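To see the "dict cache" idea in isolation: instead of opening an output file for every row, rows are first accumulated in a hash keyed by their first field, so each file is opened exactly once. A minimal sketch (the `group_rows` helper is hypothetical, introduced here just to illustrate the caching step, and uses plain array refs instead of a CSV parser):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Accumulate rows in a hash keyed by each row's first field.
# Each value holds all of that key's output lines, joined with " :: ".
sub group_rows {
    my @rows = @_;
    my %dict;
    for my $row (@rows) {
        $dict{ $row->[0] } .= join(" :: ", @$row) . "\n";
    }
    return %dict;
}

my %dict = group_rows([1, 'a'], [2, 'b'], [1, 'c']);
print $dict{1};    # both rows keyed by 1, one per line
```

With the hash filled, the writing loop then touches each output file only once, which is where the speedup over open-per-row approaches comes from.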