JJoao

(Sorry to spam you all with another answer.) For many situations, the elegant awk versions presented are perfect. But there is life outside one-liners -- we often need more:

  • extra code to cope with complex CSV files;
  • extra steps for normalization, reformatting, and processing.

In the following skeleton, we use a parser of CSV files (Parse::CSV). This time we avoid one-liners and even strictly declare the variables!

#!/usr/bin/perl

use strict;
use warnings;
use Parse::CSV;

my %dict;

my $csv = Parse::CSV->new(file => 'a1.csv');

while ( my $row = $csv->fetch ) {                   ## for all records
   $dict{$row->[0]} .= join(" :: ", @$row) . "\n";  ## process and save
}

for my $k (keys %dict) {                            ## create the csv files
   open(my $fh, ">", "$k.csv") or die "cannot write $k.csv: $!";
   print $fh $dict{$k};
   close $fh;
}
  • The main advantage is that we can deal with more complex CSV files; this time the CSV input can contain quoted fields with embedded separators, and can even include multiline fields (the CSV specification is complex!):
 1111,2,3
 "3,3,3",a,"b, c, and d"
 "a more, complex
        multiline record",3,4
  • To exemplify a processing step, the output field separator was changed to " :: ".
  • To exemplify extra steps, we added an optimization: since we use a dict cache, this script runs 100 times faster than my other solution.
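The grouping idea does not depend on Perl. For readers without Parse::CSV at hand, here is a rough equivalent sketch using Python's standard csv module, which also honors quoted fields with embedded commas and multiline records (the sample data and the " :: " separator are taken from the answer above; printing stands in for the per-key file writes):

```python
import csv
import io

# Sample input mirroring the example above: quoted fields with embedded
# commas and one quoted multiline record.
data = '''1111,2,3
"3,3,3",a,"b, c, and d"
"a more, complex
       multiline record",3,4
'''

groups = {}                                    # first field -> accumulated lines
for row in csv.reader(io.StringIO(data)):
    groups.setdefault(row[0], "")
    groups[row[0]] += " :: ".join(row) + "\n"  # reformat with " :: " separator

# The Perl script writes one "<key>.csv" file per group; here we just print.
for key, body in groups.items():
    print(f"--- {key}.csv ---")
    print(body, end="")
```

Note that the multiline record parses as a single row, so its embedded newline ends up inside the group key, exactly as it would in the Perl version.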

