Changing a column in multiple csv files


I have a csv file called scenario1.csv, in which the second column holds group names such as "0-4 years high risk", "65+ years first responder", etc. There are 20 such values. The 21st row's 2nd column has the same entry as the 1st row's 2nd column, and I would like to rename these values to p1-p20, respectively, so the 21st row would get the tag p1 again. All without quotes. I have about 150 files, named scenario1.csv, scenario2.csv ... How do I do this? Below is an example for a shorter file:

t,  group,               1,  3,  5
0,  0-4 years low risk,  0,  0,  0
0,  0-4 years high risk, 0,  0,  1
.....,  .... 
0,  0-4 years low risk, 0,  0,   0

Expected output for each file:

   t,  group,  1,  3,  5
   0,  p1,     0,  0,  0
   0,  p2,     0,  0,  0
    .....,  .... 
   0,  p1,     0,  0,  0 

Here is the dictionary I need:

0-4 years first responder   p1
0-4 years high risk         p2
.......
65+ years low risk          p19
65+ years pregnant women    p20
    
by bissi 28.10.2015 / 21:01

4 answers


Since you have neither GNU AWK nor sponge installed:

<<<"$(<treatables-000.csv)" awk -F ',' -v OFS=',' 'NR!=1{$2="p"(NR-2)%20+1}1' >treatables-000.csv
  • -F ',': sets the input field separator to ,;
  • -v OFS=',': sets the output field separator to ,;
  • NR!=1{$2="p"(NR-2)%20+1}1: if the current record number is greater than 1, sets the second field to a string made of the character p followed by the result of the expression (NR-2)%20+1, then prints the record;
% cat treatables-000.csv 
t,group,1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151,153,155,157,159,161,163,165,167,169,171,173,175,177,179,181,183,185,187,189,191,193,195,197,199,201,203,205,207,209,211,213,215,217,219,221,223,225,227,229,231,233,235,237,239,241,243,245,247,249,251,253,255,257,259,261,263,265,267,269,271,273,275,277,279,281,283,285,287,289,291,293,295,297,299,301,303,305,307,309,311,313,315,317,319,321,323,325,327,329,331,333,335,337,339,341,343,345,347,349,351,353,355,357,359,361,363,365,367,369,371,373,375,377,379,381,383,385,387,389,391,393,395,397,399,401,403,405,407,409,411,413,415,417,419,421,423,425,427,429,431,433,435,437,439,441,443,445,447,449,451,453,455,457,459,461,463,465,467,469,471,473,475,477,479,481,483,485,487,489,491,493,495,497,499,501,503,505,507
0,0-4 years low risk,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0-4 years high risk,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
% <<<"$(<treatables-000.csv)" awk -F ',' -v OFS=',' 'NR!=1{$2="p"(NR-2)%20+1}1' >treatables-000.csv
% cat treatables-000.csv
t,group,1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151,153,155,157,159,161,163,165,167,169,171,173,175,177,179,181,183,185,187,189,191,193,195,197,199,201,203,205,207,209,211,213,215,217,219,221,223,225,227,229,231,233,235,237,239,241,243,245,247,249,251,253,255,257,259,261,263,265,267,269,271,273,275,277,279,281,283,285,287,289,291,293,295,297,299,301,303,305,307,309,311,313,315,317,319,321,323,325,327,329,331,333,335,337,339,341,343,345,347,349,351,353,355,357,359,361,363,365,367,369,371,373,375,377,379,381,383,385,387,389,391,393,395,397,399,401,403,405,407,409,411,413,415,417,419,421,423,425,427,429,431,433,435,437,439,441,443,445,447,449,451,453,455,457,459,461,463,465,467,469,471,473,475,477,479,481,483,485,487,489,491,493,495,497,499,501,503,505,507
0,p1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,p2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
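
The transcript above only shows the first two data rows, so the wrap-around of the modulo expression is not visible in it; a quick check in isolation (my sketch, with seq standing in for real input) confirms that data row 21 maps back to p1:

% awk 'NR!=1{print NR" -> p"(NR-2)%20+1}' <(seq 0 22)
2 -> p1
3 -> p2
[...]
21 -> p20
22 -> p1
23 -> p2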

To repeat this for all the files matching the treatables-???.csv globbing pattern in the current working directory, you can use a Bash for loop:

for f in treatables-???.csv; do <<<"$(<"$f")" awk -F ',' -v OFS=',' 'NR!=1{$2="p"(NR-2)%20+1}1' >"$f"; done
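
For completeness: if GNU AWK 4.1.0 or later (or sponge from moreutils) were installed after all, the here-string workaround would be unnecessary. A sketch of the in-place variant, with the per-file record number FNR replacing NR so the numbering restarts in every file:

gawk -i inplace -F ',' -v OFS=',' 'FNR!=1{$2="p"(FNR-2)%20+1}1' treatables-???.csv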
    
by 29.10.2015 / 02:32

You can do the task with a loop, nl (number lines) and sed (stream editor):

for f in scenario*.csv
do
    # number every line except the first (only lines starting
    # with a digit get a number, so the header is skipped)
    nl -bp^[0-9] -nln -w1 "$f" |
    sed '
        # add a "p" before the line number
        s/^[0-9]/p&/
        # move "pNUM" into the place of the second field (which starts
        # with a digit), keeping the first field in front of it
        s/\(^p[0-9]*\)\s*\([0-9]*,\s*\)[0-9][^,]*/\2\1/
        # strip spaces from the beginning of the line (header only)
        s/^\s*//
        ' > out.tmp    # write the changed lines to a temporary file
    mv out.tmp "$f"    # move the temp file over the original
done
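
To illustrate (my walkthrough, not part of the answer), one data line of the sample passes through the pipeline like this:

0,  0-4 years low risk,  0,  0,  0        original line
1   0,  0-4 years low risk,  0,  0,  0    after nl (line number plus a tab)
p1  0,  0-4 years low risk,  0,  0,  0    after s/^[0-9]/p&/
0,  p1,  0,  0,  0                        after the swap into \2\1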
    
by 28.10.2015 / 21:40

I understand that you have a list of unique phrases and want to replace the first phrase in the list with "p1", the second with "p2", and so on. You can do that as follows if you want to preserve the column widths:

for filename in *.csv; do
    awk '
        BEGIN {
            FS = ","
            n = 0
        }
        {   
            if (NR > 1) {
                if (!($2 in p)) {
                    n++
                    p[$2] = n
                }
                $2 = "p" p[$2]
            }
            for (i = 1; i <= NF; i++) {
                sub("^[ ]+", "", $i)
                if (i != NF) {
                    $i = $i ","
                }
            }
            # Add more columns and adjust the column widths to
            # your liking here.
            printf "%-3s%-10s%-3s%-3s%-3s\n", $1, $2, $3, $4, $5
        }
    ' "$filename" > "$filename.tmp"
    mv "$filename.tmp" "$filename"
done
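
The real files have a couple of hundred value columns, so one possible tweak (my sketch, not part of the answer) is to replace the fixed five-column printf with a loop, making the field count irrelevant:

# fixed widths for the t and group fields, one common width for the rest;
# drop-in replacement for the printf "%-3s%-10s%-3s%-3s%-3s\n" line above
printf "%-3s%-10s", $1, $2
for (i = 3; i <= NF; i++) {
    printf "%-4s", $i
}
printf "\n"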
    
by 28.10.2015 / 21:58

Here is a perl script that does the job.

You can add more patterns and replacements to the %patterns hash as needed. Don't forget the comma at the end of each line.

Note that the patterns are interpreted as regular expressions, not as literal strings, so if your patterns contain any regexp special characters (such as *, (, ), ?, +, etc.), they need to be escaped with \ (e.g. \*, \(, \), \?, \+).
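
Alternatively (my note, not from the answer), perl's \Q...\E escape treats everything between the two markers as a literal string, so no manual escaping is needed:

    qr/\Q65+ years low risk\E/ => 'p19',   # the + needs no backslash here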

The script changes the output slightly, in that it joins all the fields with ,\t (a comma and a single tab) where your original input had runs of spaces. If that matters, you can adjust that print statement to produce the same or similar output (e.g. by using printf instead of print join()).

$ cat bissi.pl 
#! /usr/bin/perl

use strict;

# optimisation: use qr// for the search patterns so that
# the hash keys are pre-compiled regular expressions.
# this makes the for loop later MUCH faster if there are
# lots of patterns and lots of input lines to process. 
my %patterns = (
    qr/0-4 years low risk/        => 'p1',
    qr/0-4 years high risk/       => 'p2',

    qr/65\+ years low risk/       => 'p19',
    qr/65\+ years pregnant women/ => 'p20',
);


while(<>) { 
    chomp;
    my @line = split /,\s*/;
    foreach my $key (keys %patterns) {
        # perl arrays are zero based, so $line[1] is 2nd field
        if ($line[1] =~ m/$key/) {
            $line[1] = $patterns{$key} ;
            last;
        }
    } 
    print join(",\t",@line), "\n";
}

This produces the following output:

$ ./bissi.pl input.txt 
t,  group,  1,  3,  5
0,  p1, 0,  0,  1
0,  p2, 0,  0,  0
0,  p1, 0,  0,  0

To convert all 150 of your files, you could put this in a shell for loop, something like this:

mkdir -p new
for i in {1..150} ; do
    ./bissi.pl "scenario$i.csv" > "new/scenario$i.csv"
done
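
If you would rather edit the files in place than write into a new directory, perl's -i switch (here keeping a .bak backup of each file; my variant, not part of the answer) also works with the script's <> loop:

perl -i.bak bissi.pl scenario{1..150}.csv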
    
by 29.10.2015 / 00:39
