Amazon-S3

Getting CloudFront logs into Logstash: Error: "is not a legal argument to this wrapper, cause it doesn't respond to 'read'"

  • February 8, 2020

Logstash version 1.5.0.1

I'm trying to use the Logstash S3 input plugin to download CloudFront logs and the cloudfront codec plugin to filter the stream.

I installed the codec with bin/plugin install logstash-codec-cloudfront.

I get the following error: Error: Object: #Version: 1.0 is not a legal argument to this wrapper, cause it doesn't respond to "read".

Here is the full error message from /var/logs/logstash/logstash.log:

{:timestamp=>"2015-08-05T13:35:20.809000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::S3 bucket=>\"[BUCKETNAME]\", prefix=>\"cloudfront/\", region=>\"us-east-1\", type=>\"cloudfront\", secret_access_key=>\"[SECRETKEY]/1\", access_key_id=>\"[KEYID]\", sincedb_path=>\"/opt/logstash_input/s3/cloudfront/sincedb\", backup_to_dir=>\"/opt/logstash_input/s3/cloudfront/backup\", temporary_directory=>\"/var/lib/logstash/logstash\">\n  Error: Object: #Version: 1.0\n is not a legal argument to this wrapper, cause it doesn't respond to \"read\".", :level=>:error}

My Logstash config file: /etc/logstash/conf.d/cloudfront.conf

input {
 s3 {
   bucket => "[BUCKETNAME]"
   delete => false
   interval => 60 # seconds
   prefix => "cloudfront/"
   region => "us-east-1"
   type => "cloudfront"
   codec => "cloudfront"
   secret_access_key => "[SECRETKEY]"
   access_key_id => "[KEYID]"
   sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
   backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
   use_ssl => true
 }
}

I'm successfully getting my CloudTrail logs into Logstash with a similar S3 input, based on an answer in a Stack Overflow post.
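For reference, a minimal sketch of that kind of CloudTrail input (this assumes the logstash-codec-cloudtrail codec plugin and uses placeholder bucket, prefix, and credential values):

input {
 s3 {
   bucket => "[BUCKETNAME]"
   delete => false
   interval => 60 # seconds
   prefix => "cloudtrail/"
   region => "us-east-1"
   type => "cloudtrail"
   codec => "cloudtrail"
   secret_access_key => "[SECRETKEY]"
   access_key_id => "[KEYID]"
   sincedb_path => "/opt/logstash_input/s3/cloudtrail/sincedb"
   backup_to_dir => "/opt/logstash_input/s3/cloudtrail/backup"
   use_ssl => true
 }
}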

The CloudFront log file from S3 (I've only included the header of the file):

#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type

Based on lines 26-29 of cloudfront_spec.rb in the cloudfront codec plugin's GitHub repo and the official AWS CloudFront Access Logs documentation, the header looks like it's basically in the correct format.

Any ideas? Thanks!

UPDATE 9/9/2015: Based on this post, I tried the gzip_lines codec plugin, installing it with bin/plugin install logstash-codec-gzip_lines and parsing the file with a filter. Unfortunately I got exactly the same error. It looks like the problem is the first character of the log file, #.

For the record, here is the new attempt, including an updated pattern for parsing the CloudFront log file to account for four new fields:

/etc/logstash/conf.d/cloudfront.conf

input {
 s3 {
   bucket => "[BUCKETNAME]"
   delete => false
   interval => 60 # seconds
   prefix => "cloudfront/"
   region => "us-east-1"
   type => "cloudfront"
   codec => "gzip_lines"
   secret_access_key => "[SECRETKEY]"
   access_key_id => "[KEYID]"
   sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
   backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
   use_ssl => true
 }
}
filter {
   grok {
   type => "cloudfront"
   pattern => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_query}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}"
 }

mutate {
   type => "cloudfront"
       add_field => [ "listener_timestamp", "%{date} %{time}" ]
   }

date {
     type => "cloudfront"
     match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
   }

}
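
If the leading # really is the culprit, one way to keep those comment header lines away from grok is to drop them with a conditional; a minimal sketch (not tested against this setup):

filter {
  # Skip the "#Version: ..." and "#Fields: ..." header lines that CloudFront
  # writes at the top of every log file.
  if [type] == "cloudfront" and [message] =~ /^#/ {
    drop { }
  }
}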

I had the same problem. Changing from

   codec => "gzip_lines"

to

   codec => "plain"

in the input fixed it for me. It looks like the S3 input automatically decompresses gzipped files: https://github.com/logstash-plugins/logstash-input-s3/blob/master/lib/logstash/inputs/s3.rb#L13
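
Applied to the configuration above, the change is just the codec line; a sketch of the corrected input:

input {
 s3 {
   bucket => "[BUCKETNAME]"
   delete => false
   interval => 60 # seconds
   prefix => "cloudfront/"
   region => "us-east-1"
   type => "cloudfront"
   codec => "plain" # the s3 input already gunzips the file, so hand plain lines to the filters
   secret_access_key => "[SECRETKEY]"
   access_key_id => "[KEYID]"
   sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
   backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
   use_ssl => true
 }
}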

This was reported as a bug here: https://github.com/logstash-plugins/logstash-codec-cloudfront/issues/2

It has remained unfixed since 2016.

Quoted from: https://serverfault.com/questions/711104