Detecting unique line patterns in a log file


I have a large log file and I would like to detect the patterns in it rather than specific lines.

For example:

/path/messages-20181116:11/15/2018 14:23:05.159|worker001|clusterm|I|userx deleted job 5018
/path/messages-20181116:11/15/2018 14:41:25.662|worker001|clusterm|I|userx deleted job 4895
/path/messages-20181116:11/15/2018 14:41:25.673|worker000|clusterm|I|userx deleted job 4890
/path/messages-20181116:11/15/2018 14:41:25.681|worker000|clusterm|I|userx deleted job 4889
11/09/2018 06:18:55.115|scheduler000|clusterm|P|PROF: job profiling(low job) of 9473507.1 
11/09/2018 06:18:55.118|scheduler000|clusterm|P|PROF: job profiling(low job) of 9473507.1                
11/09/2018 06:18:55.120|scheduler000|clusterm|P|PROF: job profiling(low job) of 9473507.1                
11/09/2018 06:18:55.140|scheduler000|clusterm|P|PROF: job dispatching took 5.005 s (10 fast)             
11/09/2018 06:18:55.143|scheduler000|clusterm|P|PROF: dispatched 1 job(s)             
11/09/2018 06:18:55.143|scheduler000|clusterm|P|PROF: dispatched 5 job(s)             
11/09/2018 06:18:55.143|scheduler000|clusterm|P|PROF: dispatched 3 job(s)             
11/09/2018 06:18:55.145|scheduler000|clusterm|P|PROF: parallel matching   14  0438 107668                 
11/09/2018 06:18:55.148|scheduler000|clusterm|P|PROF: sequential matching  9  0261   8203               
11/09/2018 06:18:55.561|scheduler000|clusterm|P|PROF(1776285440): job sorting :wc =0.006s              
11/09/2018 06:18:55.564|scheduler000|clusterm|P|PROF(1776285440): job dispatching: wc=5.005              
11/09/2018 06:18:55.561|scheduler000|clusterm|P|PROF(1776285440): job sorting : wc=0.006s
11/09/2018 06:18:55.564|scheduler000|clusterm|P|PROF(1776285440): job dispatching: wc =0.015   

should become something like this:

/path/messages-*NUMBER*:*DATE* *TIME*|worker001|clusterm|I|userx deleted job *NUMBER*
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: job profiling(low job) of *NUMBER* 
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: job dispatching took *NUMBER* s (*NUMBER* fast)             
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: dispatched *NUMBER* job(s)             
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: parallel matching   *NUMBER*  *NUMBER* *NUMBER*                 
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: sequential matching  *NUMBER*  *NUMBER*   *NUMBER*               
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF(*NUMBER*): job sorting :wc =*NUMBER*s              
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF(*NUMBER*): job dispatching: wc=*NUMBER*    

That greatly reduces the number of lines and makes analysing/reading the log by eye much easier.

Basically, this means detecting the variable words and replacing them with some symbol.
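Something along these lines is roughly what I have in mind (just a rough sketch, assuming GNU sed and awk; "messages.log" stands in for my real log file): crudely replace every run of digits with a symbol, then keep only the first occurrence of each resulting pattern, preserving order:

# rough sketch: crude normalisation, then keep the first occurrence of each pattern
sed -E 's/[0-9]+/*NUMBER*/g' messages.log | awk '!seen[$0]++'

But I would like something smarter that distinguishes dates, times and plain numbers, as in the example above.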

    
by user772266, 20.11.2018 / 13:06

1 answer


How far does the following get you?

sed -r 's~([0-9]{2}/){2}[0-9]{4}~*DATE*~g; s/[0-9:.]{12}/*TIME*/g; s/[0-9.]+/*NUMBER*/g; s/[   ]*$//; ' file4 | uniq 
/path/messages-*NUMBER*:*DATE* *TIME*|worker*NUMBER*|clusterm|I|userx deleted job *NUMBER*
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: job profiling(low job) of *NUMBER*
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: job dispatching took *NUMBER* s (*NUMBER* fast)
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: dispatched *NUMBER* job(s)
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: parallel matching   *NUMBER*  *NUMBER* *NUMBER*
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF: sequential matching  *NUMBER*  *NUMBER*   *NUMBER*
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF(*NUMBER*): job sorting :wc =*NUMBER*s
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF(*NUMBER*): job dispatching: wc=*NUMBER*
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF(*NUMBER*): job sorting : wc=*NUMBER*s
*DATE* *TIME*|scheduler*NUMBER*|clusterm|P|PROF(*NUMBER*): job dispatching: wc =*NUMBER*

With some concentration, motivation, patience, and time, you could even skip the pipe through uniq in favour of an all-sed solution ...
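For instance, one possibility (only a sketch, assuming GNU sed; like uniq, it collapses adjacent repeats only) is to keep the previously printed pattern in the hold space and delete the current line whenever it is identical:

sed -r '
  # normalise the variable fields, as above
  s~([0-9]{2}/){2}[0-9]{4}~*DATE*~g
  s/[0-9:.]{12}/*TIME*/g
  s/[0-9.]+/*NUMBER*/g
  s/[[:space:]]*$//
  # append the previously printed pattern from the hold space
  G
  # if the current pattern repeats the previous one, drop the line
  /^(.*)\n\1$/d
  # otherwise strip the appended copy, remember this pattern, and print it
  s/\n.*//
  h
' file4

Like uniq, this only removes consecutive repeats, so lines that differ merely in stray spacing (the two "job sorting"/"job dispatching" variants above) still come out as separate patterns.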

    
answered 26.11.2018 / 16:20