Elasticsearch

ElasticSearch Delayed Indexing

  • January 9, 2016

I currently have the following setup:

syslog-ng servers -> Logstash -> ElasticSearch

The syslog-ng servers are load balanced and write out to a SAN location, and Logstash simply tails the files in that location and ships them to ES. I'm currently receiving roughly 1,300 events/sec of network logs into the syslog cluster. The problem I'm running into is that the point at which logs actually become searchable in ES keeps slipping further behind. When I brought the cluster (4 nodes) up it was dead on, then it fell a few minutes behind, and now, 4 days later, it is about 35 minutes behind. I can confirm the logs are being written to the syslog-ng servers in real time, and I can also confirm that my 4 other indexes, which use the same concept with separate Logstash instances, are staying current. However, they run at much lower volume (~500 events/sec).

It looks like the Logstash instance reading these flat files simply can't keep up. I have already split the files out once and spawned 2 Logstash instances to help, but I'm still falling behind.
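For what it's worth, the split just gives each instance its own config file and its own sincedb file so they don't fight over file offsets; launching them looks roughly like this (the config file names below are illustrative, not my real ones):

# Instance 1 tails /location1/*.log, instance 2 tails /location2/*.log;
# each config points sincedb_path at a separate sincedb file.
/etc/logstash/logstash-2.1.1/bin/logstash -f /etc/logstash/network-a.conf &
/etc/logstash/logstash-2.1.1/bin/logstash -f /etc/logstash/network-b.conf &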

Any help would be greatly appreciated.

Typical input is ASA logs, mostly denies and VPN connections:

Jan  7 00:00:00 firewall1.domain.com Jan 06 2016 23:00:00 firewall1 : %ASA-1-106023: Deny udp src outside:192.168.1.1/22245 dst DMZ_1:10.5.1.1/33434 by access-group "acl_out" [0x0, 0x0]
Jan  7 00:00:00 firewall2.domain.com %ASA-1-106023: Deny udp src console_1:10.1.1.2/28134 dst CUSTOMER_094:2.2.2.2/514 by access-group "acl_2569" [0x0, 0x0]

Here is my Logstash config.

input {
  file {
    type => "network-syslog"
    exclude => ["*.gz"]
    start_position => "end"
    path => [ "/location1/*.log","/location2/*.log","/location2/*.log"]
    sincedb_path => "/etc/logstash/.sincedb-network"
  }
}

filter {
   grok {
     overwrite => [ "message", "host" ]
     patterns_dir => "/etc/logstash/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns"
     match => [
       "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:host} %%{CISCOTAG:ciscotag}: %{GREEDYDATA:message}",
       "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:host} %{GREEDYDATA:message}"
     ]
    }
   grok {
     match => [
       "message", "%{CISCOFW106001}",
       "message", "%{CISCOFW106006_106007_106010}",
       "message", "%{CISCOFW106014}",
       "message", "%{CISCOFW106015}",
       "message", "%{CISCOFW106021}",
       "message", "%{CISCOFW106023}",
       "message", "%{CISCOFW106100}",
       "message", "%{CISCOFW110002}",
       "message", "%{CISCOFW302010}",
       "message", "%{CISCOFW302013_302014_302015_302016}",
       "message", "%{CISCOFW302020_302021}",
       "message", "%{CISCOFW305011}",
       "message", "%{CISCOFW313001_313004_313008}",
       "message", "%{CISCOFW313005}",
       "message", "%{CISCOFW402117}",
       "message", "%{CISCOFW402119}",
       "message", "%{CISCOFW419001}",
       "message", "%{CISCOFW419002}",
       "message", "%{CISCOFW500004}",
       "message", "%{CISCOFW602303_602304}",
       "message", "%{CISCOFW710001_710002_710003_710005_710006}",
       "message", "%{CISCOFW713172}",
       "message", "%{CISCOFW733100}",
       "message", "%{GREEDYDATA}"
     ]
   }
   syslog_pri { }
   date {
     "match" => [ "syslog_timestamp", "MMM  d HH:mm:ss",
                  "MMM dd HH:mm:ss" ]
     target => "@timestamp"
   }
   mutate {
     remove_field => [ "syslog_facility", "syslog_facility_code", "syslog_severity", "syslog_severity_code"]
   }
}

output {
   elasticsearch {
     hosts => ["server1","server2","server3"]
     index => "network-%{+YYYY.MM.dd}"
     template => "/etc/logstash/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/elasticsearch-network.json"
     template_name => "network"
   }
}

You can use the -w N command-line option to tell LS to spin up more workers per instance, where N is a number.

That should increase your event throughput considerably.

I don't know your exact server layout, but starting with half as many workers as you have cores on your LS boxes is probably safe; tune from there based on whatever else those boxes are doing.
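For example, with the 2.1.1 install referenced in your config, something along these lines would start one instance with 8 filter workers (the worker count and config path are placeholders, adjust for your layout):

# -w sets the number of filter worker threads for this Logstash instance
/etc/logstash/logstash-2.1.1/bin/logstash -f /etc/logstash/network.conf -w 8

Grok-heavy filters like yours tend to be CPU-bound, so extra filter workers are usually where the throughput comes back.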

Quoted from: https://serverfault.com/questions/747161