Logstash binding to a port already in use


This is the output when I try to run Logstash. With Redis and ElasticSearch disabled, it still says the address is already in use. Any suggestions? As far as I can tell this was fixed in 1.1.8, but I still seem to have the problem. link

root@logs:~# java -jar logstash-1.1.13-flatjar.jar web --backend elasticsearch://127.0.0.1/
PORT SETTINGS 127.0.0.1:9300
 INFO 10:52:13,532 [Styx and Stone] {0.20.6}[26710]: initializing ...
DEBUG 10:52:13,544 [Styx and Stone] using home [/root], config [/root/config], data [[/root/data]], logs [/root/logs], work [/root/work], plugins [/root/plugins]
 INFO 10:52:13,557 [Styx and Stone] loaded [], sites []
DEBUG 10:52:13,581 using [UnsafeChunkDecoder] decoder
DEBUG 10:52:18,206 [Styx and Stone] creating thread_pool [generic], type [cached], keep_alive [30s]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [index], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [bulk], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [get], type [cached], keep_alive [5m]
DEBUG 10:52:18,226 [Styx and Stone] creating thread_pool [search], type [cached], keep_alive [5m]
DEBUG 10:52:18,227 [Styx and Stone] creating thread_pool [percolate], type [cached], keep_alive [5m]
DEBUG 10:52:18,227 [Styx and Stone] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [flush], type [scaling], min [1], size [10], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [merge], type [scaling], min [1], size [20], keep_alive [5m]
DEBUG 10:52:18,237 [Styx and Stone] creating thread_pool [refresh], type [scaling], min [1], size [10], keep_alive [5m]
DEBUG 10:52:18,238 [Styx and Stone] creating thread_pool [cache], type [scaling], min [1], size [4], keep_alive [5m]
DEBUG 10:52:18,238 [Styx and Stone] creating thread_pool [snapshot], type [scaling], min [1], size [5], keep_alive [5m]
DEBUG 10:52:18,258 [Styx and Stone] using worker_count[2], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/6/1], receive_predictor[512kb->512kb]
DEBUG 10:52:18,266 [Styx and Stone] using initial hosts [127.0.0.1:9300], with concurrent_connects [10]
DEBUG 10:52:18,290 [Styx and Stone] using ping.timeout [3s], master_election.filter_client [true], master_election.filter_data [false]
DEBUG 10:52:18,290 [Styx and Stone] using minimum_master_nodes [-1]
DEBUG 10:52:18,291 [Styx and Stone] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
DEBUG 10:52:18,294 [Styx and Stone] [node  ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
DEBUG 10:52:18,315 [Styx and Stone] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, ParNew=GcThreshold{name='ParNew', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, ConcurrentMarkSweep=GcThreshold{name='ConcurrentMarkSweep', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}]
DEBUG 10:52:18,317 [Styx and Stone] Using probe [org.elasticsearch.monitor.os.JmxOsProbe@e39275b] with refresh_interval [1s]
DEBUG 10:52:18,317 [Styx and Stone] Using probe [org.elasticsearch.monitor.process.JmxProcessProbe@41afc692] with refresh_interval [1s]
DEBUG 10:52:18,320 [Styx and Stone] Using refresh_interval [1s]
DEBUG 10:52:18,321 [Styx and Stone] Using probe [org.elasticsearch.monitor.network.JmxNetworkProbe@3cef237e] with refresh_interval [5s]
DEBUG 10:52:18,323 [Styx and Stone] net_info
host [logs.lbox.com]
eth0    display_name [eth0]
        address [/fe80:0:0:0:20c:29ff:fee5:aa11%2] [/10.0.1.18] 
        mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo  display_name [lo]
        address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1] 
        mtu [16436] multicast [false] ptp [false] loopback [true] up [true] virtual [false]

DEBUG 10:52:18,324 [Styx and Stone] Using probe [org.elasticsearch.monitor.fs.JmxFsProbe@33f0e611] with refresh_interval [1s]
DEBUG 10:52:18,560 [Styx and Stone] using indices.store.throttle.type [none], with index.store.throttle.max_bytes_per_sec [0b]
DEBUG 10:52:18,566 [Styx and Stone] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
DEBUG 10:52:18,579 [Styx and Stone] using script cache with max_size [500], expire [null]
DEBUG 10:52:18,602 [Styx and Stone] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
DEBUG 10:52:18,603 [Styx and Stone] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
DEBUG 10:52:18,603 [Styx and Stone] using [cluster_concurrent_rebalance] with [2]
DEBUG 10:52:18,606 [Styx and Stone] using initial_shards [quorum], list_timeout [30s]
DEBUG 10:52:18,689 [Styx and Stone] using max_size_per_sec[0b], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
DEBUG 10:52:18,757 [Styx and Stone] using index_buffer_size [48.5mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
DEBUG 10:52:18,758 [Styx and Stone] using [node] weighted filter cache with size [20%], actual_size [97mb], expire [null], clean_interval [1m]
DEBUG 10:52:18,775 [Styx and Stone] using gateway.local.auto_import_dangled [YES], with gateway.local.dangling_timeout [2h]
DEBUG 10:52:18,781 [Styx and Stone] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
 INFO 10:52:18,782 [Styx and Stone] {0.20.6}[26710]: initialized
 INFO 10:52:18,782 [Styx and Stone] {0.20.6}[26710]: starting ...
DEBUG 10:52:18,823 Using select timeout of 500
DEBUG 10:52:18,824 Epoll-bug workaround enabled = false
DEBUG 10:52:19,336 [Styx and Stone] Bound to address [/0:0:0:0:0:0:0:0:9302]
 INFO 10:52:19,338 [Styx and Stone] bound_address {inet[/0:0:0:0:0:0:0:0:9302]}, publish_address {inet[/10.0.1.18:9302]}
DEBUG 10:52:19,379 [Styx and Stone] connected to node [[#zen_unicast_1#][inet[/127.0.0.1:9300]]]
DEBUG 10:52:22,363 [Styx and Stone] disconnected from [[#zen_unicast_1#][inet[/127.0.0.1:9300]]]
DEBUG 10:52:22,364 [Styx and Stone] filtered ping responses: (filter_client[true], filter_data[false])
    --> target [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]], master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]]
DEBUG 10:52:22,371 [Styx and Stone] connected to node [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]]
DEBUG 10:52:22,388 [Styx and Stone] [master] starting fault detection against master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]], reason [initial_join]
DEBUG 10:52:22,392 [Styx and Stone] processing [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]: execute
DEBUG 10:52:22,393 [Styx and Stone] got first state from fresh master [V8QRcyhkSRex16_Lq8r5kA]
DEBUG 10:52:22,393 [Styx and Stone] cluster state updated, version [7], source [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]
 INFO 10:52:22,394 [Styx and Stone] detected_master [Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]], added {[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]],}, reason: zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])
 INFO 10:52:22,397 [Styx and Stone] elasticsearch/25UYvHAGTNKX3AezvVWEzA
 INFO 10:52:22,398 [Styx and Stone] {0.20.6}[26710]: started
DEBUG 10:52:22,404 [Styx and Stone] processing [zen-disco-receive(from master [[Her][V8QRcyhkSRex16_Lq8r5kA][inet[/10.0.1.18:9300]]])]: done applying updated cluster_state
Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (Errno::EADDRINUSE) bind - Address already in use
    at org.jruby.ext.socket.RubyTCPServer.initialize(org/jruby/ext/socket/RubyTCPServer.java:118)
    at org.jruby.RubyIO.new(org/jruby/RubyIO.java:879)
    at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:50)
    at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
    at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:46)
    at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
    at RUBY.initialize(jar:file:/root/logstash-1.1.13-flatjar.jar!/ftw/server.rb:34)
    at RUBY.run(jar:file:/root/logstash-1.1.13-flatjar.jar!/rack/handler/ftw.rb:94)
    at RUBY.run(jar:file:/root/logstash-1.1.13-flatjar.jar!/rack/handler/ftw.rb:66)
    at RUBY.run(file:/root/logstash-1.1.13-flatjar.jar!/logstash/web/runner.rb:68)
    
by David Neudorfer 15.07.2013 / 20:51

6 answers


I had a similar problem myself tonight. It turned out I had concatenated the configuration files in the conf.d folder while investigating another issue and then forgot about them. When the conf.d/ folder was re-read on restart, Logstash tried to bind the same port twice, causing the EADDRINUSE.
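A quick way to spot this kind of duplication is to list the pipeline directory and look for repeated port settings; the /etc/logstash/conf.d path below is an assumption based on a package install:

# list the config directory and look for leftover or concatenated files
ls -l /etc/logstash/conf.d/
# show every port declared across the configs; a duplicate will try to bind twice
grep -Rn 'port' /etc/logstash/conf.d/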

    
by 15.10.2014 / 02:32

I ran into an "Address already in use" error the second time I installed Logstash. It turned out I had somehow started several Logstash instances. Manually stopping the Logstash processes and then starting Logstash again solved my problem.
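If you suspect stray instances, it may help to list the running Logstash processes and check which of them holds the contested port; 9292 (web UI) and 9300 (embedded ElasticSearch transport) are defaults and may differ in your setup:

# show every running Logstash instance
ps aux | grep -i '[l]ogstash'
# show which process is listening on the usual Logstash ports (adjust as needed)
sudo netstat -tlnp | grep -E ':9292|:9300'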

    
by 17.03.2014 / 18:54

Try stopping the logstash-web service first.

On Ubuntu: sudo service logstash-web stop
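After stopping it, you can check that the web UI port has actually been released (9292 is the default; adjust if you changed it):

# nothing should be listed once logstash-web has stopped
sudo netstat -tlnp | grep ':9292' || echo "port 9292 is free"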

    
by 30.01.2015 / 11:51

I faced the same problem as well: /etc/init.d/logstash could not stop the daemon. I had to kill it manually and then restart the service.

root@vikas027:~# ps -aef | grep  [l]ogstash
logstash  3752     1 37 02:55 pts/0    00:00:34 /usr/bin/java -Djava.io.tmpdir=/var/lib/logstash -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xmx500m -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -Djava.io.tmpdir=/var/lib/logstash -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log
root@vikas027:~# kill -9 3752
root@vikas027:~# /etc/init.d/logstash start
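If you would rather not reach for kill -9 right away, a gentler variant is to send SIGTERM first and only escalate if the process refuses to exit; the pgrep pattern below is just one way to find the PID and assumes the command line shown above:

# find the Logstash agent PID (pattern assumes the runner.rb command line above)
pid=$(pgrep -f 'logstash/runner.rb' | head -n1)
sudo kill "$pid"                                    # SIGTERM first
sleep 5
kill -0 "$pid" 2>/dev/null && sudo kill -9 "$pid"   # SIGKILL only if it is still alive
sudo /etc/init.d/logstash start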
    
by 25.06.2015 / 19:00

I had the same problem, but with yet another cause. I used emacs to create the Logstash conf file, and it also created a backup copy when my ssh connection timed out. As a result, I ended up with two copies of the same .conf file:

Original: 10-logs.conf

Emacs backup: #10-logs.conf#

Logstash was trying to load both .conf files and bind to the same port twice, resulting in an EADDRINUSE error.
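If you suspect this, a quick check is to look for emacs auto-save and backup copies inside the config directory (the path below is an assumption based on a package install):

# emacs auto-saves look like #name# and backups end with ~; Logstash will load them too
find /etc/logstash/conf.d/ \( -name '#*#' -o -name '*~' \)
# delete whatever the command lists, then restart Logstash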

    
by 22.07.2016 / 17:56

Let me share my experience: my logstash.conf.bak was being evaluated as well and broke everything. Check that you don't have any files with similar names.
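Logstash reads every file in the configuration directory, not just *.conf, so listing the directory usually reveals the culprit; on releases that support a glob in path.config you can also restrict what gets loaded (paths below are assumptions):

# list everything Logstash will pick up; remove .bak, ~ and #...# leftovers
ls -la /etc/logstash/conf.d/
# newer releases let you limit loading to *.conf, e.g. in pipelines.yml:
#   path.config: "/etc/logstash/conf.d/*.conf"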

    
by 09.05.2018 / 17:34
