jubilatious1

Using Raku (formerly known as Perl 6)

...this time, using Raku's CSV::Parser module:

~$ raku -MCSV::Parser -e '
    my $fh = open "chronic_test.csv", :r;
    my $parser = CSV::Parser.new( file_handle => $fh, contains_header_row => False );

    # declare data structures and iterate over lines:
    my @data; my %header; my $index; my Int $i = 0;
    until $fh.eof {
        $_ = $parser.get_line();

        # read first line into %-sigiled hash, filter for `chronic`, and store as sorted $index:
        if $i++ == 0 {
            %header .= push: $_.pairs;
            $index = do for %header.kv -> $k, $v { $k if $v.grep: /chronic/ };
            $index .= sort;
        }

        # read all lines into @-sigiled array, keeping correct column order:
        @data .= push: $_.pairs.sort({ .key.Int })>>.values;
    }

    # use @data>>.[$index] idiom to filter for desired columns, and output:
    .join(",").put for @data>>.[$index];
    $fh.close;'

The Raku code above uses Raku's CSV::Parser module, which appears to be entirely hash-based. This may be more efficient for some purposes; however, care must be taken to return columns in their original order.

Briefly, a filehandle is opened and a new $parser object is created; the parameter contains_header_row => False tells the parser that we'll handle the header row ourselves.

The first line is read into %header, which is filtered with the /chronic/ regex; the matching keys (i.e. column numbers) are stored as a sorted $index. Note that adding :i to the regex (making it /:i chronic/) enables case-insensitive matching.
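
That header-filtering step can be sketched in isolation, using a hand-built %header hash with hypothetical column names in place of the parser's output:

```raku
# Hypothetical %header, shaped as the parser returns it: column-number => column-name
my %header = '0' => 'gender', '1' => 'chronic_disease1', '2' => 'chronic_disease2';

# keep the keys (column numbers) whose value matches "chronic", case-insensitively
my $index = do for %header.kv -> $k, $v { $k if $v ~~ /:i chronic/ };
$index .= sort;

say $index;  # (1 2)
```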

All lines are then pushed onto the @data array, taking care that columns aren't scrambled. Finally, the array is filtered for the desired columns using the @data>>.[$index] idiom, and the result is output.
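
The slicing idiom can be seen on its own with toy data (hypothetical rows, not read from a file):

```raku
# Toy rows (arrays) and a hypothetical $index selecting columns 1 and 2
my @data = ['male', '2008', '2009'], ['female', '2010', '2011'];
my $index = ('1', '2');

# >>.[$index] hyper-calls the slice on every row; join each slice with commas
.join(",").put for @data>>.[$index];
# 2008,2009
# 2010,2011
```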

Sample Input:

gender,chronic_disease1,chronic_disease2
male,2008,2009

Sample Output:

chronic_disease1,chronic_disease2
2008,2009

https://raku.land/zef:tony-o/CSV::Parser
https://github.com/tony-o/perl6-csv-parser
https://docs.raku.org
https://raku.org
