ELK Installation and Configuration

Posted by 怪咖_OOP on 2018-07-17

Elasticsearch

Download: https://www.elastic.co/cn/downloads/elasticsearch

Configuration (config/elasticsearch.yml):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#Cluster name; defaults to "elasticsearch". Nodes with the same cluster name on the same network discover each other and form one cluster, so several clusters can coexist on one network segment, distinguished by this name.
cluster.name: my-application

#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#Node name
node.name: slave-1
#Whether this node is eligible to be elected master (this only grants eligibility; the node will not necessarily become master). Defaults to true. By default the first eligible machine in the cluster becomes master; if it goes down, a new master is elected.
node.master: true
#Whether this node stores index data. Defaults to true.
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#Where Elasticsearch stores its data
path.data: /path/to/data
#
# Path to log files:
#
#Where Elasticsearch writes its logs
path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# 
#LAN IP of this machine
network.host: 192.168.1.26
#
# Set a custom port for HTTP:
#
http.port: 9200 
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#Seed list of hosts that a newly started node contacts for unicast discovery (mainly needed to connect machines on different network segments)
discovery.zen.ping.unicast.hosts: ["192.168.1.200"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#Minimum number of master-eligible nodes required to elect a master. To prevent split brain, set it to (master-eligible nodes / 2) + 1, e.g. 2 when there are three master-eligible nodes.
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#Allow cross-origin (CORS) HTTP access
http.cors.enabled: true
http.cors.allow-origin: "*"

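The split-brain formula quoted in the Discovery comments above (total number of master-eligible nodes / 2 + 1) is easy to sanity-check; a minimal sketch (the function name is mine):

```python
# Quorum per the split-brain formula from the config comments:
# (number of master-eligible nodes / 2) + 1, using integer division.
def minimum_master_nodes(master_eligible):
    return master_eligible // 2 + 1

print(minimum_master_nodes(1))  # 1 (matches discovery.zen.minimum_master_nodes: 1 above)
print(minimum_master_nodes(3))  # 2
print(minimum_master_nodes(5))  # 3
```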

Error handling:

[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Edit /etc/security/limits.conf (the soft limit must also be raised, since the running process is bounded by it):
* soft nofile 65536
* hard nofile 65536
Edit /etc/security/limits.d/*-nproc.conf:
*          soft    nproc     65536
Edit /etc/sysctl.conf:
Add the line vm.max_map_count=262144, then run sysctl -p to apply it.
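Once the three files above are edited, the new values can be checked from a fresh login shell (limits.conf changes only take effect for new sessions); a quick check, assuming a standard Linux layout:

```shell
# Verify the limits after re-login.
ulimit -Hn                        # should show 65536 once the new limit is active
cat /proc/sys/vm/max_map_count    # should show 262144 after `sysctl -p`
```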

Logstash

Download: https://www.elastic.co/downloads/logstash

Configuration:

logstash.conf
input {
    
    # tcp {
    #     port => 5599
    #     type => ofbank-mining
    # }
    
    # Filebeat input
    beats {
        port => 5599
    }
}

filter {

    grok {
        match => [
        "message","%{TIMESTAMP_ISO8601:time} (?<code_info>[a-zA-Z0-9.]+):%{BASE10NUM:code_line} (?:[a-zA-Z0-9.()*]+) >>> %{WORD:log_lavel} %{BASE10NUM} %{GREEDYDATA:content}"
        ]
        overwrite => ["message"]
    }

    date {
        match => ["time","yyyy-MM-dd HH:mm:ss"]
    }

    mutate {
        remove_field=> ["@version","TYPE","ACTION"]
    }
}

output {

    if [fields][log_topics] == "ofbank_mining_web" {
        elasticsearch {
            hosts => ["192.168.1.160:9200","192.168.1.81:9200"]
            index => "ofbank-mining-log-%{+YYYY-MM-dd}"
            manage_template => false
            template_name => "ofbank"
            template => "/home/elasticsearch/logstash/template/ofbank-template.json"
            template_overwrite => true
        }
    }
    
    
}
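To check the grok pattern offline, a rough Python equivalent of the expression above can be run against a sample line. The sample line and the regex are my approximations (TIMESTAMP_ISO8601, BASE10NUM, WORD and GREEDYDATA rendered as plain regex classes); real application output may differ:

```python
import re
from datetime import datetime

# Hypothetical log line in the shape the grok pattern above expects
line = "2018-07-17 10:30:00 com.example.Miner:42 run() >>> INFO 200 job finished"

# Rough Python translation of the grok expression from logstash.conf
pattern = re.compile(
    r"(?P<time>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2})\s"
    r"(?P<code_info>[a-zA-Z0-9.]+):(?P<code_line>\d+)\s"
    r"[a-zA-Z0-9.()*]+\s>>>\s"
    r"(?P<log_lavel>\w+)\s(?P<num>\d+)\s(?P<content>.*)"
)

m = pattern.match(line)
print(m.groupdict())

# What the date filter then does: parse `time` ("yyyy-MM-dd HH:mm:ss") into a timestamp
ts = datetime.strptime(m.group("time"), "%Y-%m-%d %H:%M:%S")
print(ts.isoformat())  # 2018-07-17T10:30:00
```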
ofbank-template.json (the file referenced by the template setting above):
{
  "crawl": {
    "template": "app-*",
    "settings": {
      "index.number_of_shards": 3,
      "number_of_replicas": 1
    },
    "mappings": {
      "logs": {
        "properties": {
          "time": {
            "type": "string",
            "index": "not_analyzed"
          },
          "code_info": {
            "type": "string",
            "index": "not_analyzed"
          },
          "code_line": {
            "type": "string",
            "index": "not_analyzed"
          },
          "log_lavel": {
            "type": "string",
            "index": "not_analyzed"
          },
          "content": {
            "type": "string"
          }
        }
      }
    }
  }
}
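Two things worth double-checking in this template. First, the mappings use the Elasticsearch 2.x syntax ("string" with "index": "not_analyzed"); on Elasticsearch 5 and later these should become "keyword" (or "text" for analyzed fields). Second, a template only applies to indices whose names match its "template" pattern, and "app-*" does not match the ofbank-mining-log-* indices the Logstash output writes. A quick Python glob check illustrates the mismatch (for plain `*` patterns the semantics line up with Elasticsearch's matching; the date is just an example):

```python
import fnmatch
from datetime import date

# Daily index name produced by the Logstash output setting
#   index => "ofbank-mining-log-%{+YYYY-MM-dd}"
index_name = date(2018, 7, 17).strftime("ofbank-mining-log-%Y-%m-%d")
print(index_name)  # ofbank-mining-log-2018-07-17

# The template's "template" pattern must match the index name to apply:
print(fnmatch.fnmatch(index_name, "app-*"))                # False
print(fnmatch.fnmatch(index_name, "ofbank-mining-log-*"))  # True
```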

Filebeat Configuration

filebeat.prospectors:

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # Path of the log file to read
    - /mnt/logs/mining-10010.log

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    log_topics: ofbank_mining_web



#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3

#================================ Outputs =====================================


output.logstash:
  # The Logstash hosts
  hosts: ["192.168.1.160:5599"]

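The fields.log_topics value set here is what the Logstash output conditional keys on; the hand-off can be sketched in plain Python (names mirror the configs above, the event dict is illustrative):

```python
# Filebeat attaches custom fields to every event it ships; Logstash then
# routes on [fields][log_topics] in its output section.
event = {
    "message": "raw log line shipped by Filebeat",
    "fields": {"log_topics": "ofbank_mining_web"},  # from the Filebeat config above
}

def route(event):
    # Mirrors: if [fields][log_topics] == "ofbank_mining_web" { elasticsearch { ... } }
    if event.get("fields", {}).get("log_topics") == "ofbank_mining_web":
        return "elasticsearch"
    return None  # no other output branch is defined, so the event goes nowhere

print(route(event))  # elasticsearch
```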

Kibana Configuration

server.port: 5601
#Local IP of this machine
server.host: "192.168.1.81"
elasticsearch.url: "http://192.168.1.81:9200"

This is an original article; when reposting, please credit the author @怪咖.

QQ:208275451
