How can I remove duplicate items within the same line of a csv file?

I have a csv file with ~4000 lines, each containing between 2 and 30 comma-separated names. The names include titles (for example, Mr. Adams or ms. Y Sanders). Some names appear several times on the same line, and I would like those within-line duplicates removed. The input is in a file "input.csv" and another file "output.csv" should be the final result.

For example, I have:

mr. 1,mr. 2,mr. 3,mr. 1,mr. 4
prof. x,prof. y,prof. x
mr. 1,prof y

which should become

mr. 1,mr. 2,mr. 3,mr. 4   (mr. 1 was already mentioned, so it should be removed)
prof. x,prof. y           (prof. x was already mentioned so it should be removed)
mr. 1,prof y              (even though both were already mentioned in the same file, they were not mentioned within this line so they may remain)
    
by Jeff Schaller 08.10.2018 / 13:23

2 answers

you can try:

#!/bin/bash

while IFS= read -r line; do
    echo "$line" | tr , '\n' | sort -u | tr '\n' , | sed 's/,$/\n/'
done < input.csv > output.csv
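For instance, running the loop on the first two sample lines from the question removes the duplicates (note that sort -u also sorts the names alphabetically, which happens to be invisible here because the sample is already in sorted order):

```shell
# Feed the first two sample lines from the question through the loop above.
printf '%s\n' 'mr. 1,mr. 2,mr. 3,mr. 1,mr. 4' \
              'prof. x,prof. y,prof. x' |
while IFS= read -r line; do
    echo "$line" | tr , '\n' | sort -u | tr '\n' , | sed 's/,$/\n/'
done
# prints:
# mr. 1,mr. 2,mr. 3,mr. 4
# prof. x,prof. y
```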
    
by 08.10.2018 / 13:30
This is not a job for bash, but for tools better suited to processing data of this size, such as awk.

First, let me create a sample file called mycsv:

head -n2 mycsv
1field1,1field2,1field3,1field4,1field5,1field6,1field7,1field8,1field9,1field10,1field11,1field12,1field13,1field14,1field15,1field16,1field17,1field18,1field19,1field20,1field5,1field12
2field1,2field2,2field3,2field4,2field5,2field6,2field7,2field8,2field9,2field10,2field11,2field12,2field13,2field14,2field15,2field16,2field17,2field18,2field19,2field20,2field5,2field12

tail -4 mycsv
3997field1,3997field2,3997field3,3997field4,3997field5,3997field6,3997field7,3997field8,3997field9,3997field10,3997field11,3997field12,3997field13,3997field14,3997field15,3997field16,3997field17,3997field18,3997field19,3997field20,3997field5,3997field12,3997field21
3998field1,3998field2,3998field3,3998field4,3998field5,3998field6,3998field7,3998field8,3998field9,3998field10,3998field11,3998field12,3998field13,3998field14,3998field15,3998field16,3998field17,3998field18,3998field19,3998field20,3998field5,3998field12
3999field1,3999field2,3999field3,3999field4,3999field5,3999field6,3999field7,3999field8,3999field9,3999field10,3999field11,3999field12,3999field13,3999field14,3999field15,3999field16,3999field17,3999field18,3999field19,3999field20,3999field5,3999field12
4000field1,4000field2,4000field3,4000field4,4000field5,4000field6,4000field7,4000field8,4000field9,4000field10,4000field11,4000field12,4000field13,4000field14,4000field15,4000field16,4000field17,4000field18,4000field19,4000field20,4000field5,4000field12
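The answer does not show how mycsv was built; a hypothetical generator along these lines would produce matching rows (the extra field21 on a few lines near the end was presumably added by hand):

```shell
# Hypothetical generator for the 4000-line sample file.
# Each line gets 20 unique fields plus two repeats (field5 and field12).
for i in $(seq 1 4000); do
    line=$(printf "${i}field%s," $(seq 1 20))   # "Nfield1," ... "Nfield20,"
    printf '%s%sfield5,%sfield12\n' "$line" "$i" "$i"
done > mycsv
```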

Now let's check this awk solution (it needs GNU awk, for a regex RS and the RT variable; the seen array s is deleted at every newline so that duplicates are removed only within a line, as the question requires):

time awk -v RS=',|\n' '{if (!s[$0]++) printf "%s%s", $0, (RT=="\n" ? "\n" : ","); else if (RT=="\n") printf "\n"; if (RT=="\n") delete s}' mycsv | sed 's/,$//' >mycsv2

real    0m0.131s
user    0m0.121s
sys 0m0.024s


head -2 mycsv2
1field1,1field2,1field3,1field4,1field5,1field6,1field7,1field8,1field9,1field10,1field11,1field12,1field13,1field14,1field15,1field16,1field17,1field18,1field19,1field20
2field1,2field2,2field3,2field4,2field5,2field6,2field7,2field8,2field9,2field10,2field11,2field12,2field13,2field14,2field15,2field16,2field17,2field18,2field19,2field20

tail -4 mycsv2
3997field1,3997field2,3997field3,3997field4,3997field5,3997field6,3997field7,3997field8,3997field9,3997field10,3997field11,3997field12,3997field13,3997field14,3997field15,3997field16,3997field17,3997field18,3997field19,3997field20,3997field21
3998field1,3998field2,3998field3,3998field4,3998field5,3998field6,3998field7,3998field8,3998field9,3998field10,3998field11,3998field12,3998field13,3998field14,3998field15,3998field16,3998field17,3998field18,3998field19,3998field20
3999field1,3999field2,3999field3,3999field4,3999field5,3999field6,3999field7,3999field8,3999field9,3999field10,3999field11,3999field12,3999field13,3999field14,3999field15,3999field16,3999field17,3999field18,3999field19,3999field20
4000field1,4000field2,4000field3,4000field4,4000field5,4000field6,4000field7,4000field8,4000field9,4000field10,4000field11,4000field12,4000field13,4000field14,4000field15,4000field16,4000field17,4000field18,4000field19,4000field20

Let's also try the bash solution:

time cat mycsv | while IFS= read -r line ; do  echo "$line" | tr , '\n' | sort -u | tr '\n' , | sed 's/,$/\n/' ;  done >mycsv3

real    0m27.880s
user    0m28.385s
sys 0m17.863s

head -2 mycsv3
1field1,1field10,1field11,1field12,1field13,1field14,1field15,1field16,1field17,1field18,1field19,1field2,1field20,1field3,1field4,1field5,1field6,1field7,1field8,1field9
2field1,2field10,2field11,2field12,2field13,2field14,2field15,2field16,2field17,2field18,2field19,2field2,2field20,2field3,2field4,2field5,2field6,2field7,2field8,2field9

tail -4 mycsv3
3997field1,3997field10,3997field11,3997field12,3997field13,3997field14,3997field15,3997field16,3997field17,3997field18,3997field19,3997field2,3997field20,3997field21,3997field3,3997field4,3997field5,3997field6,3997field7,3997field8,3997field9
3998field1,3998field10,3998field11,3998field12,3998field13,3998field14,3998field15,3998field16,3998field17,3998field18,3998field19,3998field2,3998field20,3998field3,3998field4,3998field5,3998field6,3998field7,3998field8,3998field9
3999field1,3999field10,3999field11,3999field12,3999field13,3999field14,3999field15,3999field16,3999field17,3999field18,3999field19,3999field2,3999field20,3999field3,3999field4,3999field5,3999field6,3999field7,3999field8,3999field9
4000field1,4000field10,4000field11,4000field12,4000field13,4000field14,4000field15,4000field16,4000field17,4000field18,4000field19,4000field2,4000field20,4000field3,4000field4,4000field5,4000field6,4000field7,4000field8,4000field9

Apparently the bash solution is roughly 200 times slower (about 27.9 s versus 0.13 s), and it is also problematic because of its use of sort -u, which reorders the fields lexicographically: note 1field10 sorting before 1field2 in the output above.
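If a pure-shell solution is still wanted, an order-preserving variant can be sketched with a bash 4+ associative array that is reset for every line; this avoids the sorting problem, though it remains far slower than awk:

```shell
#!/usr/bin/env bash
# Order-preserving per-line dedup; needs bash 4+ for declare -A.
while IFS= read -r line; do
    declare -A seen=()               # reset the seen-set for every line
    out=
    IFS=, read -ra fields <<< "$line"
    for f in "${fields[@]}"; do
        if [[ -z ${seen[$f]} ]]; then
            seen[$f]=1
            out+="${out:+,}$f"       # comma only between fields
        fi
    done
    printf '%s\n' "$out"
done < input.csv > output.csv
```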

    
by 08.10.2018 / 18:25