
Using Logstash as the shipper?

  • July 17, 2014

We are shipping logs from our servers, using Logstash itself as the shipper on each server.

So we read the logs from the glob "/root/Desktop/Logstash-Input/**/*_log":

input {
    file {
        path => "/root/Desktop/Logstash-Input/**/*_log"
        start_position => "beginning"
    }
}
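As a quick illustration (using Python's `glob` module, which supports the same recursive `**` syntax), the pattern matches any file ending in `_log` at any depth under `Logstash-Input`. The directory layout below mirrors the path seen in the console output further down.

```python
import glob
import os
import tempfile

# Recreate the directory layout from the console output below,
# under a temporary root instead of /root/Desktop/Logstash-Input.
root = tempfile.mkdtemp()
logdir = os.path.join(root, "Server2", "CronLog", "2014.05.31")
os.makedirs(logdir)
open(os.path.join(logdir, "cron_log"), "w").close()

# "**" with recursive=True descends into every subdirectory,
# just like the file input's glob does.
matches = glob.glob(os.path.join(root, "**", "*_log"), recursive=True)
print(matches)
```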

From this glob we extract fields to add to the event. For example, from the directories in the path we extract server, logtype, and so on. We do this with:

filter {
    grok {
        match => ["path", "/root/Desktop/Logstash-Input/(?<server>[^/]+)/(?<logtype>[^/]+)/(?<logdate>[\d]+.[\d]+.[\d]+)/(?<logfilename>.*)_log"]
    }
}
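The grok named captures behave like ordinary regex named groups. A sketch of the same extraction in Python (the pattern is re-expressed with `\.` for the literal dots in the date, which the original pattern matches loosely with `.`):

```python
import re

# Re-expression of the grok pattern above; grok's (?<name>...) captures
# map to Python's (?P<name>...) named groups.
PATH_PATTERN = re.compile(
    r"/root/Desktop/Logstash-Input/"
    r"(?P<server>[^/]+)/"
    r"(?P<logtype>[^/]+)/"
    r"(?P<logdate>\d+\.\d+\.\d+)/"
    r"(?P<logfilename>.*)_log"
)

path = "/root/Desktop/Logstash-Input/Server2/CronLog/2014.05.31/cron_log"
fields = PATH_PATTERN.match(path).groupdict()
print(fields)
# {'server': 'Server2', 'logtype': 'CronLog',
#  'logdate': '2014.05.31', 'logfilename': 'cron'}
```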

Then we ship these logs to the central Logstash server using the lumberjack output plugin:

output {
    lumberjack {
        hosts => ["xx.xx.xx.xx"]
        port => 4545
        ssl_certificate => "./logstash.pub"
    }
    stdout { codec => rubydebug }
}

The problem is that the logs sent to the central server lose the fields added with grok: for example, server, logtype, etc. do not exist on the central server. The client machine's console shows the added fields, but on the central Logstash server only message, @timestamp and @version are present.

Client (where the logs are shipped from) console:

output received {:event=>{"message"=>"2014-05-26T00:00:01+05:30 host crond[268]: (root) CMD (2014/05/31/server2/cron/log)", "@version"=>"1", "@timestamp"=>"2014-07-16T06:07:21.927Z", "host"=>"host", "path"=>"/root/Desktop/Logstash-Input/Server2/CronLog/2014.05.31/cron_log", "server"=>"Server2", "logtype"=>"CronLog", "logdate"=>"2014.05.31", "logfilename"=>"cron"}, :level=>:debug, :file=>"(eval)", :line=>"37"}
   {
             "message" => "2014-05-26T00:00:01+05:30 bx920as1 crond[268]: (root) CMD (2014/05/31/server2/cron/log)",
            "@version" => "1",
          "@timestamp" => "2014-07-16T06:07:21.927Z",
                "host" => "host",
                "path" => "/root/Desktop/Logstash-Input/Server2/CronLog/2014.05.31/cron_log",
              "server" => "Server2",
             "logtype" => "CronLog",
             "logdate" => "2014.05.31",
         "logfilename" => "cron"
   }

Central server (where the logs are shipped to) console:

{
      "message" => "2014-07-16T05:33:17.073+0000 host 2014-05-26T00:00:01+05:30 bx920as1 crond[288]: (root) CMD (2014/05/31/server2/cron/log)",
     "@version" => "1",
   "@timestamp" => "2014-07-16T05:34:02.370Z"
}

So the grokked fields are dropped in transit. Why does this happen?

How can I keep these fields?

Solved:

I solved it by adding codec => "json" to both my lumberjack output and lumberjack input.

Output:

output {
    lumberjack {
        hosts => ["xx.xx.xx.xx"]
        port => 4545
        ssl_certificate => "./logstash.pub"
        codec => "json"
    }
}

Input:

input {
    lumberjack {
        port => 4545
        ssl_certificate => "/etc/ssl/logstash.pub"
        ssl_key => "/etc/ssl/logstash.key"
        codec => "json"
    }
}
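The rough intuition for why this works (an assumption about the Logstash 1.x lumberjack plugins, not confirmed by the original post): with the default plain codec, only the rendered message line is serialized onto the wire, so the extra fields added by grok never leave the client; with the json codec the whole event hash is serialized and the receiving input parses it back with every field intact. A minimal sketch of that round trip:

```python
import json

# A client-side event after the grok filter has run. The message text
# is truncated here for brevity; field names match the console output.
event = {
    "message": "2014-05-26T00:00:01+05:30 bx920as1 crond[268]: (root) CMD",
    "@timestamp": "2014-07-16T06:07:21.927Z",
    "server": "Server2",
    "logtype": "CronLog",
}

# Plain codec (assumed behavior): only the message string crosses
# the wire, so the grokked fields are lost.
plain_wire = event["message"]

# JSON codec: the full event is serialized and parsed back on the
# central server, preserving every field.
json_wire = json.dumps(event)
received = json.loads(json_wire)
print(received["server"])   # the field survives the trip
```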

Quoted from: https://serverfault.com/questions/612877