Logstash: failed to flush outgoing items


First of all, I am a complete newbie to Logstash. That said, I managed to get some basic logging working (parsing an Apache log file without the built-in COMBINEDAPACHELOG). However, I have run into the following error, which spams my terminal as soon as /var/log/auth.log is updated (lines are appended). Logstash version 1.4.2, Ubuntu Server 14.04.

Failed to flush outgoing items {:outgoing_count=>1, :exception=>#, :backtrace=>["/opt/logstash-1.4.2/lib/logstash/outputs/elasticsearch/protocol.rb:225:in 'build_request'", "/opt/logstash-1.4.2/lib/logstash/outputs/elasticsearch/protocol.rb:205:in 'bulk'", "org/jruby/RubyArray.java:1613:in 'each'", "/opt/logstash-1.4.2/lib/logstash/outputs/elasticsearch/protocol.rb:204:in 'bulk'", "/opt/logstash-1.4.2/lib/logstash/outputs/elasticsearch.rb:315:in 'flush'", "/opt/logstash-1.4.2/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in 'buffer_flush'", "org/jruby/RubyHash.java:1339:in 'each'", "/opt/logstash-1.4.2/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in 'buffer_flush'", "/opt/logstash-1.4.2/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:in 'buffer_receive'", "/opt/logstash-1.4.2/lib/logstash/outputs/elasticsearch.rb:311:in 'receive'", "/opt/logstash-1.4.2/lib/logstash/outputs/base.rb:86:in 'handle'", "(eval):130:in 'initialize'", "org/jruby/RubyProc.java:271:in 'call'", "/opt/logstash-1.4.2/lib/logstash/pipeline.rb:266:in 'output'", "/opt/logstash-1.4.2/lib/logstash/pipeline.rb:225:in 'outputworker'", "/opt/logstash-1.4.2/lib/logstash/pipeline.rb:152:in 'start_outputs'"], :level=>:warn}

Does anyone see what is going on here? The only thing I know is that it blocks the logging of every other log file.

Any idea what I am doing wrong, or what causes this? I also have a few more questions; those are at the end.

Additional information:

The configuration file:

input {
    #apache
    file {
        type => "apache-access"
        path => "/var/log/apache2/access.log"
    }
    file {
        type => "apache-error"
        path => "/var/log/apache2/error.log"
    }

    #linux
    file {
        type => "authentication"
        path => "/var/log/auth.log"
    }

    #nginx
    file {
        type => "nginx-access"
        path => "/var/log/nginx/access.log"
    }
    file {
        type => "nginx-error"
        path => "/var/log/nginx/error.log"
    }
}
filter {
    if [type] == "apache-access" {
        grok {
            match => [ "message", "%{HTTP_ACC}" ]
        }
    }
    if [type] == "apache-error" {
        grok {
            match => [ "message", "%{APA_ERR}" ]
        }
    }
    if [type] == "authentication" {
        grok {
            match => [ "message", "%{AUTH_LOG}" ]
        }
    }
    if [type] == "nginx-access" {
        grok {
            match => [ "message", "%{HTTP_ACC}" ]
        }
    }
    if [type] == "nginx-error" {
        grok {
            match => [ "message", "%{NGINX_ERR}" ]
        }
    }
}
output {
    elasticsearch {
        embedded => true
    }
}

The grok patterns are the default ones plus the custom ones below. The groks were built using

link

link

The error groks are not perfect; grok parse errors keep occurring, so they are not useful for visualization in Kibana. See the demo logs below the patterns for examples of what they should match.

#APACHE ERROR
APA_ERR_TS \[%{DAY:dayOfTheWeek} %{MONTH:month} %{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:min}:%{SECOND:sec}\.%{INT:microsec} %{YEAR:year}\]
APA_ERR_LOGCODE \[%{GREEDYDATA:source}:%{LOGLEVEL:loglevel}\]
APA_ERR_PID_TID \[pid %{INT:ProcessID}:tid %{INT:ThreadID}\]
APA_ERR %{APA_ERR_TS} %{APA_ERR_LOGCODE} %{APA_ERR_PID_TID} %{GREEDYDATA:logMessage}

#APACHE and NGINX ACCESS (they share the same structure, not log file, on my server)
HTTP_SOURCE %{IPORHOST:clientID} %{USER:ident} %{USER:auth}
HTTP_TS %{MONTHDAY:day}/%{MONTH:month}/%{YEAR:year}:%{HOUR:hour}:%{MINUTE:min}:%{SECOND:sec}
HTTP_REQ_INFO "%{WORD:action} %{NOTSPACE:request} HTTP/%{NUMBER:httpver}" %{NUMBER:httpCode} (?:%{NUMBER:fileSizeInBytes}|-)
HTTP_ACC %{HTTP_SOURCE} \[%{HTTP_TS} %{ISO8601_TIMEZONE:UTC}\] %{HTTP_REQ_INFO} "%{GREEDYDATA:referrer}" "%{GREEDYDATA:clientInfo}"
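As a sanity check, the HTTP_ACC pattern can be approximated in plain Python by expanding the grok macros by hand. This is only a simplified sketch: the real macros (%{IPORHOST}, %{NUMBER}, ...) are more permissive than the character classes used here.

```python
import re

# Hand-expanded approximation of the custom HTTP_ACC grok pattern.
HTTP_ACC = re.compile(
    r'(?P<clientID>\S+) (?P<ident>\S+) (?P<auth>\S+) '          # HTTP_SOURCE
    r'\[(?P<day>\d{2})/(?P<month>\w{3})/(?P<year>\d{4})'        # HTTP_TS
    r':(?P<hour>\d{2}):(?P<min>\d{2}):(?P<sec>\d{2}) '
    r'(?P<UTC>[+-]\d{4})\] '                                    # ISO8601_TIMEZONE
    r'"(?P<action>\w+) (?P<request>\S+) HTTP/(?P<httpver>[\d.]+)" '
    r'(?P<httpCode>\d{3}) (?P<fileSizeInBytes>\d+|-) '          # HTTP_REQ_INFO
    r'"(?P<referrer>[^"]*)" "(?P<clientInfo>[^"]*)"'
)

# One of the demo access-log lines from below:
line = ('192.168.200.251 - - [16/Feb/2015:15:50:04 +0100] '
        '"GET / HTTP/1.1" 200 3593 "-" '
        '"Mozilla/5.0 (Windows NT 6.3; WOW64; rv:35.0) '
        'Gecko/20100101 Firefox/35.0"')
m = HTTP_ACC.match(line)
print(m.group("action"), m.group("httpCode"), m.group("fileSizeInBytes"))
```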

#authentication log
AUTH_LOG_TS %{MONTH:month} %{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:min}:%{SECOND:sec}
AUTH_LOG_TYPE %{NOTSPACE:type}(\[%{INT:pid}\])?:
AUTH_LOG %{AUTH_LOG_TS} %{HOST:hostname} %{AUTH_LOG_TYPE} %{GREEDYDATA:logMessage}
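Likewise, the AUTH_LOG pattern boils down to the following shape (again a hand-expanded approximation). One caveat reproduced here as a rename: the grok pattern calls the program capture `type`; it is named `prog` in this sketch to keep it apart from the Logstash event's own type field.

```python
import re

# Hand-expanded approximation of the custom AUTH_LOG grok pattern.
AUTH_LOG = re.compile(
    r"(?P<month>[A-Z][a-z]{2}) +"                       # %{MONTH:month}
    r"(?P<day>\d{1,2}) "                                # %{MONTHDAY:day}
    r"(?P<hour>\d{2}):(?P<min>\d{2}):(?P<sec>\d{2}) "   # HH:MM:SS
    r"(?P<hostname>\S+) "                               # %{HOST:hostname}
    r"(?P<prog>\S+?)(?:\[(?P<pid>\d+)\])?: "            # AUTH_LOG_TYPE
    r"(?P<logMessage>.*)"                               # %{GREEDYDATA:logMessage}
)

# One of the demo auth.log lines from below:
line = ("Feb 17 08:26:37 server sshd[1328]: "
        "Failed password for dude from 192.168.200.251 port 49194 ssh2")
m = AUTH_LOG.match(line)
print(m.group("hostname"), m.group("prog"), m.group("pid"))
```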

#NGINX ERROR
NGINX_ERR_TS %{YEAR:year}/%{MONTHNUM:month}/%{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:min}:%{SECOND:sec}
NGINX_ERR %{NGINX_ERR_TS} \[%{LOGLEVEL:loglevel}\] %{GREEDYDATA:logMessage}

Demo logs

# auth.log
Feb 17 08:25:55 server systemd-logind[731]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 17 08:25:58 server sshd[894]: Server listening on 0.0.0.0 port 22.
Feb 17 08:25:58 server sshd[894]: Server listening on :: port 22.
Feb 17 08:26:35 server sshd[1328]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.200.251  user=dude
Feb 17 08:26:37 server sshd[1328]: Failed password for dude from 192.168.200.251 port 49194 ssh2
Feb 17 08:26:40 server sshd[1328]: Accepted password for dude from 192.168.200.251 port 49194 ssh2
Feb 17 08:26:40 server sshd[1328]: pam_unix(sshd:session): session opened for user dude by (uid=0)
Feb 17 08:26:40 server systemd-logind[731]: New session 1 of user dude.
Feb 17 09:17:01 server CRON[1626]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 17 09:17:01 server CRON[1626]: pam_unix(cron:session): session closed for user root
Feb 17 10:17:01 server CRON[1631]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 17 10:17:01 server CRON[1631]: pam_unix(cron:session): session closed for user root
Feb 17 10:33:33 server sudo:      dude : TTY=pts/0 ; PWD=/home/dude/ls-scripts ; USER=root ; COMMAND=/usr/bin/nano /opt/logstash-1.4.2/patterns/grok-patterns

# apache access
192.168.200.251 - - [16/Feb/2015:15:50:04 +0100] "GET /icons/ubuntu-logo.png HTTP/1.1" 304 179 "http://192.168.200.11/" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0"
192.168.200.251 - - [16/Feb/2015:15:50:04 +0100] "GET / HTTP/1.1" 200 3593 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0"

# apache error
[Mon Feb 16 15:49:48.470722 2015] [mpm_event:notice] [pid 3601:tid 139755574097792] AH00491: caught SIGTERM, shutting down
[Mon Feb 16 15:49:49.597220 2015] [mpm_event:notice] [pid 5580:tid 140317488961408] AH00489: Apache/2.4.7 (Ubuntu) configured -- resuming normal operations
[Mon Feb 16 15:49:49.597302 2015] [core:notice] [pid 5580:tid 140317488961408] AH00094: Command line: '/usr/sbin/apache2'
[Mon Feb 16 16:20:19.948819 2015] [mpm_event:notice] [pid 5580:tid 140317488961408] AH00491: caught SIGTERM, shutting down
[Mon Feb 16 20:39:22.911352 2015] [mpm_event:notice] [pid 1059:tid 139877818292096] AH00489: Apache/2.4.7 (Ubuntu) configured -- resuming normal operations
[Mon Feb 16 20:39:22.923442 2015] [core:notice] [pid 1059:tid 139877818292096] AH00094: Command line: '/usr/sbin/apache2'
[Mon Feb 16 23:40:32.462678 2015] [mpm_event:notice] [pid 1059:tid 139877818292096] AH00491: caught SIGTERM, shutting down
[Tue Feb 17 08:26:03.727153 2015] [mpm_event:notice] [pid 1080:tid 140037385963392] AH00489: Apache/2.4.7 (Ubuntu) configured -- resuming normal operations
[Tue Feb 17 08:26:03.743771 2015] [core:notice] [pid 1080:tid 140037385963392] AH00094: Command line: '/usr/sbin/apache2'

# NGINX access
192.168.200.251 - - [16/Feb/2015:23:15:44 +0100] "GET /favicon.ico HTTP/1.1" 404 151 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0"
192.168.200.251 - - [16/Feb/2015:23:15:47 +0100] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0"

# NGINX error
2015/02/16 12:58:51 [emerg] 3752#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to [::]:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to [::]:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to [::]:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to [::]:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to 0.0.0.0:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: bind() to [::]:80 failed (98: Address already in use)
2015/02/16 12:58:51 [emerg] 3752#0: still could not bind()

If additional information is needed, let me know.

The other questions at hand:

  • how do I clear the history? => force a complete re-parse of all log files

  • suggestions on best practices and improvements to the groks

Thanks in advance

by jonny8bit 18.02.2015 / 00:09

1 answer


To re-import data, remove the $HOME/.sincedb file, delete the indices in Elasticsearch, and set start_position to "beginning":

 input {
     #apache
     file {
         type => "apache-access"
         path => "/var/log/apache2/access.log"
         start_position => "beginning"
     }
 }
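The sincedb cleanup step can be scripted. This sketch assumes the Logstash 1.4 default of keeping read offsets in $HOME/.sincedb* (the filenames carry a per-input hash suffix, hence the glob; check your file input's sincedb_path if you changed it), and it is demonstrated against a throwaway directory rather than the real home directory:

```python
import glob
import os
import tempfile

def remove_sincedb(home_dir):
    """Delete Logstash sincedb files under home_dir so that every input
    file is re-read from the start on the next run. Logstash 1.4 keeps
    these as $HOME/.sincedb* by default (assumption: default
    sincedb_path; verify against your file input settings)."""
    removed = []
    for path in glob.glob(os.path.join(home_dir, ".sincedb*")):
        os.remove(path)
        removed.append(path)
    return removed

# Demonstrate against a throwaway directory, never the real $HOME:
demo_home = tempfile.mkdtemp()
open(os.path.join(demo_home, ".sincedb_1234"), "w").close()
removed = remove_sincedb(demo_home)
```

For the index-deletion step, the usual route is the Elasticsearch REST API (the logstash-* index name and port 9200 are defaults; verify them on your setup before issuing a DELETE).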
by 18.02.2015 / 11:57