Installing and Deploying ELK to Monitor Azure China NSG and WAF Logs

Posted by 衡子 on 2018-02-13

ELK is an open-source product suite; its official website is https://www.elastic.co/.

ELK mainly comprises three products:

  1. Elasticsearch: a JSON-based distributed search and analytics engine, designed for horizontal scalability, high availability, and easy management.
  2. Logstash: a dynamic data collection pipeline with an extensible plugin ecosystem that integrates tightly with Elasticsearch.
  3. Kibana: presents data as charts and offers an extensible user interface for configuring and managing the stack.

This article walks through the installation and configuration of the three ELK components and shows how to use ELK to monitor Azure China NSG logs. The topology is as follows:

On the far left, the Network Watcher feature is enabled in Azure China, and the various NSG logs are sent to an Azure Storage account.

In the middle are the ELK components: the Logstash instance mentioned above, with the Azure Blob input plugin installed, pulls the NSG log files from the specified Azure Storage account and sends them, suitably formatted, to a cluster of two Elasticsearch nodes. Kibana then reads from Elasticsearch and presents the data graphically.

I. Environment Preparation

1 Install the Java environment

This environment uses the 1.8.0 OpenJDK:

yum install -y java-1.8.0-openjdk-devel

2 Edit the hosts file

echo "10.1.4.4 node1" >> /etc/hosts
echo "10.1.5.4 node2" >> /etc/hosts

3 Adjust iptables and SELinux

iptables -F
setenforce 0
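Note that both commands only last until the next reboot. To keep SELinux in permissive mode permanently, you can additionally update /etc/selinux/config, for example:

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config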

II. Installing and Configuring Elasticsearch

1 Add the YUM repository

Import the GPG key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create the YUM repository file:

vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
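Optionally, refresh the YUM metadata cache so the new repository is picked up:

yum makecache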

2 Install Elasticsearch

yum install elasticsearch -y
systemctl enable elasticsearch

3 Configure Elasticsearch

Edit the configuration file:

vim /etc/elasticsearch/elasticsearch.yml
cat /etc/elasticsearch/elasticsearch.yml | grep -v "#"
cluster.name: es-cluster
node.name: node1
path.data: /var/lib/elasticsearch
path.logs: /var/lib/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
discovery.zen.minimum_master_nodes: 2

On node2, set node.name to node2.

4 Start Elasticsearch

systemctl start elasticsearch
systemctl status elasticsearch

The service should be in the running state.

Use netstat -tunlp to check whether Elasticsearch is listening on ports 9200 and 9300:

View node information with the following commands:

curl -XGET 'http://10.1.4.4:9200/?pretty'
curl -XGET 'http://10.1.4.4:9200/_cat/nodes?v'
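To confirm that both nodes have joined the cluster, you can also query the cluster health API (number_of_nodes should be 2):

curl -XGET 'http://10.1.4.4:9200/_cluster/health?pretty'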

The node marked with an asterisk (*) is the master node.

The same information can also be viewed in a browser:

The logs are written to /var/lib/elasticsearch, as defined in the configuration file; check them to verify that Elasticsearch started correctly.

III. Installing Logstash

Logstash is the most involved part of the ELK installation. The installation and configuration steps are as follows:

1 Install Logstash

Logstash can be installed on a single node or on multiple nodes. In this article it is installed on node1:

yum install -y logstash
systemctl enable logstash
ln -s /usr/share/logstash/bin/logstash /usr/bin/logstash
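You can verify the installation with:

logstash --version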

2 Configure Logstash

vim /etc/logstash/logstash.yml
cat /etc/logstash/logstash.yml | grep -v "#"
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
path.logs: /var/log/logstash

3 Fix permissions on the Logstash files

After installation you will find that some of the Logstash file permissions are set incorrectly; the relevant files and directories need to be fixed:

chown logstash:logstash /var/log/logstash/ -R
chmod 777 /var/log/messages
mkdir -p /usr/share/logstash/config/
ln -s /etc/logstash/* /usr/share/logstash/config
chown -R logstash:logstash /usr/share/logstash/config/
mkdir -p /var/lib/logstash/queue
chown -R logstash:logstash /var/lib/logstash/queue

4 Configure a pipeline

Logstash training materials and documentation are available on the Elastic website. In short, a Logstash pipeline consists of input, filter, and output sections, of which input and output are mandatory.

Take the official tutorial at https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html as an example; the simplest pipeline reads from stdin and writes to stdout:

/usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

The following example uses Filebeat as the input component.

Install Filebeat:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-x86_64.rpm
rpm -vi filebeat-6.2.1-x86_64.rpm

Download the demo log file:

wget https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
gzip -d logstash-tutorial.log.gz
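The Filebeat configuration below reads the file from /var/log, so move the extracted file there first:

mv logstash-tutorial.log /var/log/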

Configure Filebeat:

vim /etc/filebeat/filebeat.yml
cat /etc/filebeat/filebeat.yml | grep -v "#"
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/logstash-tutorial.log
output.logstash:
  hosts: ["localhost:5044"]

Configure the pipeline file:

vim /etc/logstash/conf.d/logstash.conf
input {
  beats { port => "5044" }
}
output {
  file { path => "/var/log/logstash/output.out" }
  stdout { codec => rubydebug }
}

Validate the configuration, then run the pipeline with automatic config reloading:

cd /etc/logstash/conf.d
logstash -f logstash.conf --config.test_and_exit
logstash -f logstash.conf --config.reload.automatic

netstat -tunlp now shows that port 5044 is open, waiting for Filebeat input.

Run Filebeat:

filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"

Note that before running Filebeat a second time, you need to delete the registry file (where Filebeat records how far it has read each file):

cd /var/lib/filebeat/
rm registry

The Logstash console now shows log output, and the records also appear in the defined output file, in the following format:

{
  "@timestamp" => 2018-02-10T02:37:47.166Z,
  "offset" => 24248,
  "@version" => "1",
  "beat" => {
    "name" => "udrtest01",
    "hostname" => "udrtest01",
    "version" => "6.2.1"
  },
  "host" => "udrtest01",
  "prospector" => {
    "type" => "log"
  },
  "source" => "/var/log/logstash-tutorial.log",
  "message" => "86.1.76.62 - - [04/Jan/2015:05:30:37 +0000] \"GET /reset.css HTTP/1.1\" 200 1015 \"http://www.semicomplete.com/projects/xdotool/\" \"Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20140205 Firefox/24.0 Iceweasel/24.3.0\"",
  "tags" => [
    [0] "beats_input_codec_plain_applied"
  ]
}

 

Now change the input to also read from a file:

input {
  file { path => "/var/log/messages" }
  beats { port => "5044" }
}

Newly appended log lines are now printed to the Logstash console and recorded in output.out.

IV. Installing and Configuring Kibana

1 Install Kibana

yum install kibana -y
systemctl enable kibana

2 Configure Kibana

vim /etc/kibana/kibana.yml
cat /etc/kibana/kibana.yml | grep -v "#"
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.1.5.4:9200"

Start Kibana:

systemctl start kibana

3 View Kibana

Query the VM's instance metadata to find its public IP address:

curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01"
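To retrieve only the public IP address, you can query the specific leaf node directly (assuming the public IP sits on the VM's first NIC):

curl -H Metadata:true "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text"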

With the public IP address in hand, open Kibana in a browser:

Create an index pattern; the matching logs then appear under Discover:

Kibana's Dev Tools page can be used to inspect and delete data:
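For example, the following requests (shown here as equivalent curl commands against Elasticsearch; nsg-flow-logs is the index created later in this article) list all indices and delete one:

curl -XGET 'http://10.1.4.4:9200/_cat/indices?v'
curl -XDELETE 'http://10.1.4.4:9200/nsg-flow-logs'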

V. Using Azure Blob as a Logstash Input to View NSG Logs

1 Install the Logstash Azure Blob plugin

For details, see:

https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-input-azureblob

The installation command is:

logstash-plugin install logstash-input-azureblob
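If logstash-plugin is not on your PATH, invoke it by its full path:

/usr/share/logstash/bin/logstash-plugin install logstash-input-azureblob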

2 Configuration

Fill in the relevant settings according to the documentation linked above. In particular, the endpoint setting must point to the Azure China endpoint:

vim /etc/logstash/conf.d/nsg.conf
input {
  # Read NSG flow log blobs from the storage account's container
  azureblob {
    storage_account_name => "xxxx"
    storage_access_key => "xxxx"
    container => "insights-logs-networksecuritygroupflowevent"
    endpoint => "core.chinacloudapi.cn"
    codec => "json"
    # Strip the JSON envelope around the records array
    file_head_bytes => 12
    file_tail_bytes => 2
  }
}
filter {
  # Turn each record / flow / flow tuple into its own event
  split { field => "[records]" }
  split { field => "[records][properties][flows]" }
  split { field => "[records][properties][flows][flows]" }
  split { field => "[records][properties][flows][flows][flowTuples]" }
  mutate {
    # Parse the resource ID into subscription / resource group / NSG name
    split => { "[records][resourceId]" => "/" }
    add_field => {
      "Subscription" => "%{[records][resourceId][2]}"
      "ResourceGroup" => "%{[records][resourceId][4]}"
      "NetworkSecurityGroup" => "%{[records][resourceId][8]}"
    }
    convert => { "Subscription" => "string" }
    convert => { "ResourceGroup" => "string" }
    convert => { "NetworkSecurityGroup" => "string" }
    # Break the comma-separated flow tuple into individual fields
    split => { "[records][properties][flows][flows][flowTuples]" => "," }
    add_field => {
      "unixtimestamp" => "%{[records][properties][flows][flows][flowTuples][0]}"
      "srcIp" => "%{[records][properties][flows][flows][flowTuples][1]}"
      "destIp" => "%{[records][properties][flows][flows][flowTuples][2]}"
      "srcPort" => "%{[records][properties][flows][flows][flowTuples][3]}"
      "destPort" => "%{[records][properties][flows][flows][flowTuples][4]}"
      "protocol" => "%{[records][properties][flows][flows][flowTuples][5]}"
      "trafficflow" => "%{[records][properties][flows][flows][flowTuples][6]}"
      "traffic" => "%{[records][properties][flows][flows][flowTuples][7]}"
    }
    convert => { "unixtimestamp" => "integer" }
    convert => { "srcPort" => "integer" }
    convert => { "destPort" => "integer" }
  }
  # Use the tuple's Unix timestamp as the event timestamp
  date {
    match => ["unixtimestamp", "UNIX"]
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "nsg-flow-logs"
  }
}
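As before, validate the new pipeline and then run it, or restart the logstash service, which loads every file under /etc/logstash/conf.d:

cd /etc/logstash/conf.d
logstash -f nsg.conf --config.test_and_exit
systemctl restart logstash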

The flow records now appear in Kibana:

VI. Using Azure Blob as a Logstash Input to View WAF Logs

Similarly, the WAF logs can be sent to Azure Storage with the following command:

Set-AzureRmDiagnosticSetting -ResourceId /subscriptions/<subscriptionId>/resourceGroups/<resource group name>/providers/Microsoft.Network/applicationGateways/<application gateway name> -StorageAccountId /subscriptions/<subscriptionId>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/<storage account name> -Enabled $true

Once the logs are in the storage account, configure Logstash to read them, forward them to Elasticsearch, and present them in Kibana.

The Logstash configuration for the WAF access logs is:

input {
  azureblob {
    storage_account_name => "xxxxx"
    storage_access_key => "xxxxxx"
    container => "insights-logs-applicationgatewayaccesslog"
    endpoint => "core.chinacloudapi.cn"
    codec => "json"
  }
}
filter {
  date {
    match => ["unixtimestamp", "UNIX"]
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "waf-access-logs"
  }
}

The corresponding records can then be seen in Kibana:

VII. Summary

With the ELK stack, logs from Azure services can be collected and analyzed graphically.
