Docker-based Grafana + InfluxDB integration: this article is all you need (the most detailed tutorial of 2020)

By 君哥聊程式設計, published 2020-10-31

This article walks through integrating Docker-based Grafana with InfluxDB monitoring data in four parts.

Docker installation

Uninstalling

If Docker was installed previously and needs to be removed first, use the following commands:

# list the currently installed docker-related packages
$ yum list installed|grep docker
containerd.io.x86_64                 1.3.7-3.1.el7                  @docker-ce-stable
docker-ce.x86_64                     3:19.03.13-3.el7               @docker-ce-stable
docker-ce-cli.x86_64                 1:19.03.13-3.el7               @docker-ce-stable


# remove the corresponding packages
$ yum -y remove containerd.io.x86_64
$ yum -y remove docker-ce.x86_64
$ yum -y remove docker-ce-cli.x86_64 

Installation

Note: Docker requires a 64-bit operating system and a CentOS kernel version of 3.10 or above.

  • Check the kernel version

    $ uname -r
    3.10.0-1062.el7.x86_64
    # mine is 3.10, which meets the requirement
    
  • Make sure yum packages are up to date

    # run as root; update everything to the latest
    $ yum update
    
  • List the installable docker packages

    # list the docker-ce packages available for installation
    $ yum list docker-ce --showduplicates | sort -r
    
  • Install

    • Install a specific version

      # pick a version string from the list, then install it, e.g.:
      $ yum list docker-ce.x86_64  --showduplicates | sort -r
      $ yum install docker-ce-19.03.13 -y
      
    • Or install the latest version directly

      $ yum install docker-ce -y
      
  • Check the installed version

    $ docker version
    
    Client: Docker Engine - Community
     Version:           19.03.13
     API version:       1.40
     Go version:        go1.13.15
     Git commit:        4484c46d9d
     Built:             Wed Sep 16 17:03:45 2020
     OS/Arch:           linux/amd64
     Experimental:      false
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    # the daemon is not running yet; see the next step
    
  • "Cannot connect to the Docker daemon" error

    Right after installation, running a docker command prints:
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    This just means the daemon has not been started yet; (re)start docker:
    
  • Start (or restart) Docker

    $ service docker restart
    
  • Enable start on boot

    $ systemctl enable docker
    

Registry mirror configuration (for users in China)

  1. Edit the daemon.json file under /etc/docker and add the following:

    {
      "registry-mirrors": ["https://9cpn8tt6.mirror.aliyuncs.com"]
    }
    
  2. If the file does not exist, create it yourself, or simply use this command:

    tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://9cpn8tt6.mirror.aliyuncs.com"]
    }
    EOF
    
  3. Restart Docker so the mirror configuration takes effect; on a systemd-based host, use the commands below.
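
    # reload the daemon's configuration files and restart Docker
    $ systemctl daemon-reload
    $ systemctl restart docker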

Integrating InfluxDB with Docker

We have already covered installing Docker and its related commands, so from here on we only cover InfluxDB.

InfluxDB is an open-source time-series database developed by InfluxData. Written in Go, it focuses on high-performance storage and querying of time-series data. InfluxDB is widely used for storing system monitoring metrics, real-time IoT data, and similar workloads.

About InfluxDB

InfluxDB is a time-series database. A typical use case is monitoring statistics: record the machine's memory usage at a fixed interval, then use the collected data to draw a line chart of memory usage in a graphical UI (InfluxDB v1 is usually paired with Grafana).

You can think of it as recording data over time (monitoring metrics, event-tracking statistics, and so on) and then building charts and statistics from it.

Comparison with terms from relational databases

  InfluxDB term    Relational database concept
  database         database
  measurement      table in a database
  points           rows in a table

Concepts unique to InfluxDB

A point is made up of a timestamp (time), values (fields), and tags.

  Point attribute    Meaning
  time               the record's timestamp and the primary index (generated automatically)
  fields             the recorded values, which are not indexed, e.g. temperature, humidity
  tags               indexed attributes, e.g. region, altitude
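
For example, a single point in InfluxDB line protocol combines all three parts: the measurement name, comma-separated tags, then fields, then an optional nanosecond timestamp (the values below are made up for illustration):

# measurement,tag_set field_set [timestamp]
weather,region=us-west,altitude=1000 temperature=25.3,humidity=47 1604102400000000000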

Installing InfluxDB

  • Pull the latest image

    # pull the latest image
    $ docker pull influxdb
    # list local images
    $ docker images
    
  • Create a container from the image

    # create a container from the image
    $ docker run -d -p 8083:8083 -p 8086:8086 --name myinfluxdb influxdb
    	-d run the container in the background
    	-p 8083:8083 map the container's port 8083 to port 8083 on the host
    	--name the container's name; anything you like, but it must be unique
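
    Optionally, since the container stores its data in /var/lib/influxdb, you can mount a host directory so the data survives container removal (the host path below is only an example):

    # same run command, plus a data volume
    $ docker run -d -p 8083:8083 -p 8086:8086 \
        -v /opt/influxdb-data:/var/lib/influxdb \
        --name myinfluxdb influxdb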
    
  • Open the firewall ports

    $ firewall-cmd --zone=public --add-port=8083/tcp --permanent
    $ firewall-cmd --zone=public --add-port=8086/tcp --permanent
    $ firewall-cmd --reload
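
  • Verify connectivity (optional)

    A quick sanity check: InfluxDB's HTTP API exposes a /ping health endpoint that returns HTTP 204 when the service is up (replace <host-ip> with your server's address):

    $ curl -i http://<host-ip>:8086/ping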
    
  • Stop the container

    $ docker stop myinfluxdb
    
  • Remove the container

    # a container must be stopped before it can be removed
    $ docker rm myinfluxdb
    
  • List containers

    # show only running containers
    $ docker ps
    
    # show all containers
    $ docker ps -a
    
  • Enter the container

    # the container must be running before you can enter it
    $ docker exec -it myinfluxdb /bin/bash
    

Configuring InfluxDB

After entering the myinfluxdb container with the command above, let's do a little configuration.

  1. Enter the influxdb interactive shell, similar to the mysql command line

    # just type influx
    $ influx
    Connected to http://localhost:8086 version 1.8.3
    InfluxDB shell version: 1.8.3
    > 
    
    
    # if the above fails, use the full path instead
    $ /usr/bin/influx
    
  2. Create a database

    # list existing databases
    > show databases;
    name: databases
    name
    ----
    _internal
    
    # create a database
    > create database mytest
    
    # list again; you will now see two databases
    > show databases;
    name: databases
    name
    ----
    _internal
    mytest
    
    # switch to the database
    > use mytest
    
    # list users
    > show users;
    user admin
    ---- -----
    
  3. Create a user

    > CREATE USER "master" WITH PASSWORD 'abcd1234' WITH ALL PRIVILEGES
    > exit  # leave the influx shell
    
  4. influxdb does not enforce authentication by default, so edit the influxdb.conf file

    # run this inside the container
    $ vim /etc/influxdb/influxdb.conf
    # you will find vim is not available
    bash: vim: command not found
    
  5. Install vim

    # run inside the container (this step takes a while)
    $ apt-get update
    $ apt-get install vim
    
  6. Edit influxdb.conf again

    # under [http], set the auth-enabled property to true
    [http]
    ...
    auth-enabled = true
    

    Note that some versions ship a very minimal config file with only the following entries:

    [meta]
      dir = "/var/lib/influxdb/meta"
    
    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"
    

    After my changes, the full content of the config file is:

    [meta]
      dir = "/var/lib/influxdb/meta"
    
    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"
    
    [http]
      enabled = true  
      bind-address = ":8086"  
      auth-enabled = true  # off by default; it must be enabled here so the username/password we created above are actually enforced
      log-enabled = true  
      write-tracing = false  
      pprof-enabled = false  
      https-enabled = false 
    

    Exit the container and restart it. Double-check your edits first: if the config file is broken, the container will not start again.

    $ docker restart myinfluxdb
    

    For reference, the most detailed version of the configuration file looks like this:

    ### Welcome to the InfluxDB configuration file.
    
    # Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
    # The data includes a random ID, os, arch, version, the number of series and other
    # usage data. No data from user databases is ever transmitted.
    # Change this option to true to disable reporting.
    reporting-disabled = false
    
    # we'll try to get the hostname automatically, but if the os returns something
    # that isn't resolvable by other servers in the cluster, use this option to
    # manually set the hostname
    # hostname = "localhost"
    
    ###
    ### [meta]
    ###
    ### Controls the parameters for the Raft consensus group that stores metadata
    ### about the InfluxDB cluster.
    ###
    
    [meta]
      # Where the metadata/raft database is stored
      dir = "/var/lib/influxdb/meta"
    
      retention-autocreate = true
    
      # If log messages are printed for the meta service
      logging-enabled = true
      pprof-enabled = false
    
      # The default duration for leases.
      lease-duration = "1m0s"
    
    ###
    ### [data]
    ###
    ### Controls where the actual shard data for InfluxDB lives and how it is
    ### flushed from the WAL. "dir" may need to be changed to a suitable place
    ### for your system, but the WAL settings are an advanced configuration. The
    ### defaults should work for most systems.
    ###
    
    [data]
      # Controls if this node holds time series data shards in the cluster
      enabled = true
    
      dir = "/var/lib/influxdb/data"
    
      # These are the WAL settings for the storage engine >= 0.9.3
      wal-dir = "/var/lib/influxdb/wal"
      wal-logging-enabled = true
      
      # Trace logging provides more verbose output around the tsm engine. Turning 
      # this on can provide more useful output for debugging tsm engine issues.
      # trace-logging-enabled = false
    
      # Whether queries should be logged before execution. Very useful for troubleshooting, but will
      # log any sensitive data contained within a query.
      # query-log-enabled = true
    
      # Settings for the TSM engine
    
      # CacheMaxMemorySize is the maximum size a shard's cache can
      # reach before it starts rejecting writes.
      # cache-max-memory-size = 524288000
    
      # CacheSnapshotMemorySize is the size at which the engine will
      # snapshot the cache and write it to a TSM file, freeing up memory
      # cache-snapshot-memory-size = 26214400
    
      # CacheSnapshotWriteColdDuration is the length of time at
      # which the engine will snapshot the cache and write it to
      # a new TSM file if the shard hasn't received writes or deletes
      # cache-snapshot-write-cold-duration = "1h"
    
      # MinCompactionFileCount is the minimum number of TSM files
      # that need to exist before a compaction cycle will run
      # compact-min-file-count = 3
    
      # CompactFullWriteColdDuration is the duration at which the engine
      # will compact all TSM files in a shard if it hasn't received a
      # write or delete
      # compact-full-write-cold-duration = "24h"
    
      # MaxPointsPerBlock is the maximum number of points in an encoded
      # block in a TSM file. Larger numbers may yield better compression
      # but could incur a performance penalty when querying
      # max-points-per-block = 1000
    
    ###
    ### [coordinator]
    ###
    ### Controls the clustering service configuration.
    ###
    
    [coordinator]
      write-timeout = "10s"
      max-concurrent-queries = 0
      query-timeout = "0"
      log-queries-after = "0"
      max-select-point = 0
      max-select-series = 0
      max-select-buckets = 0
    
    ###
    ### [retention]
    ###
    ### Controls the enforcement of retention policies for evicting old data.
    ###
    
    [retention]
      enabled = true
      check-interval = "30m"
    
    ###
    ### [shard-precreation]
    ###
    ### Controls the precreation of shards, so they are available before data arrives.
    ### Only shards that, after creation, will have both a start- and end-time in the
    ### future, will ever be created. Shards are never precreated that would be wholly
    ### or partially in the past.
    
    [shard-precreation]
      enabled = true
      check-interval = "10m"
      advance-period = "30m"
    
    ###
    ### Controls the system self-monitoring, statistics and diagnostics.
    ###
    ### The internal database for monitoring data is created automatically
    ### if it does not already exist. The target retention within this database
    ### is called 'monitor' and is also created with a retention period of 7 days
    ### and a replication factor of 1, if it does not exist. In all cases
    ### this retention policy is configured as the default for the database.
    
    [monitor]
      store-enabled = true # Whether to record statistics internally.
      store-database = "_internal" # The destination database for recorded statistics
      store-interval = "10s" # The interval at which to record statistics
    
    ###
    ### [admin]
    ###
    ### Controls the availability of the built-in, web-based admin interface. If HTTPS is
    ### enabled for the admin interface, HTTPS must also be enabled on the [http] service.
    ###
    
    [admin]
      enabled = true
      bind-address = ":8083"
      https-enabled = false
      https-certificate = "/etc/ssl/influxdb.pem"
    
    ###
    ### [http]
    ###
    ### Controls how the HTTP endpoints are configured. These are the primary
    ### mechanism for getting data into and out of InfluxDB.
    ###
    
    [http]
      enabled = true
      bind-address = ":8086"
      auth-enabled = true
      log-enabled = true
      write-tracing = false
      pprof-enabled = false
      https-enabled = false
      https-certificate = "/etc/ssl/influxdb.pem"
      ### Use a separate private key location.
      # https-private-key = ""
      max-row-limit = 10000
      realm = "InfluxDB"
    
    ###
    ### [subscriber]
    ###
    ### Controls the subscriptions, which can be used to fork a copy of all data
    ### received by the InfluxDB host.
    ###
    
    [subscriber]
      enabled = true
      http-timeout = "30s"
    
    
    ###
    ### [[graphite]]
    ###
    ### Controls one or many listeners for Graphite data.
    ###
    
    [[graphite]]
      enabled = false
      # database = "graphite"
      # bind-address = ":2003"
      # protocol = "tcp"
      # consistency-level = "one"
    
      # These next lines control how batching works. You should have this enabled
      # otherwise you could get dropped metrics or poor performance. Batching
      # will buffer points in memory if you have many coming in.
    
      # batch-size = 5000 # will flush if this many points get buffered
      # batch-pending = 10 # number of batches that may be pending in memory
      # batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
      # udp-read-buffer = 0 # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    
      ### This string joins multiple matching 'measurement' values providing more control over the final measurement name.
      # separator = "."
    
      ### Default tags that will be added to all metrics.  These can be overridden at the template level
      ### or by tags extracted from metric
      # tags = ["region=us-east", "zone=1c"]
    
      ### Each template line requires a template pattern.  It can have an optional
      ### filter before the template and separated by spaces.  It can also have optional extra
      ### tags following the template.  Multiple tags should be separated by commas and no spaces
      ### similar to the line protocol format.  There can be only one default template.
      # templates = [
      #   "*.app env.service.resource.measurement",
      #   # Default template
      #   "server.*",
      # ]
    
    ###
    ### [collectd]
    ###
    ### Controls one or many listeners for collectd data.
    ###
    
    [[collectd]]
      enabled = false
      # bind-address = ""
      # database = ""
      # typesdb = ""
    
      # These next lines control how batching works. You should have this enabled
      # otherwise you could get dropped metrics or poor performance. Batching
      # will buffer points in memory if you have many coming in.
    
      # batch-size = 1000 # will flush if this many points get buffered
      # batch-pending = 5 # number of batches that may be pending in memory
      # batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
      # read-buffer = 0 # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    
    ###
    ### [opentsdb]
    ###
    ### Controls one or many listeners for OpenTSDB data.
    ###
    
    [[opentsdb]]
      enabled = false
      # bind-address = ":4242"
      # database = "opentsdb"
      # retention-policy = ""
      # consistency-level = "one"
      # tls-enabled = false
      # certificate= ""
      # log-point-errors = true # Log an error for every malformed point.
    
      # These next lines control how batching works. You should have this enabled
      # otherwise you could get dropped metrics or poor performance. Only points
      # metrics received over the telnet protocol undergo batching.
    
      # batch-size = 1000 # will flush if this many points get buffered
      # batch-pending = 5 # number of batches that may be pending in memory
      # batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
    
    ###
    ### [[udp]]
    ###
    ### Controls the listeners for InfluxDB line protocol data via UDP.
    ###
    
    [[udp]]
      enabled = false
      # bind-address = ""
      # database = "udp"
      # retention-policy = ""
    
      # These next lines control how batching works. You should have this enabled
      # otherwise you could get dropped metrics or poor performance. Batching
      # will buffer points in memory if you have many coming in.
    
      # batch-size = 1000 # will flush if this many points get buffered
      # batch-pending = 5 # number of batches that may be pending in memory
      # batch-timeout = "1s" # will flush at least this often even if we haven't hit buffer limit
      # read-buffer = 0 # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
    
      # set the expected UDP payload size; lower values tend to yield better performance, default is max UDP size 65536
      # udp-payload-size = 65536
    
    ###
    ### [continuous_queries]
    ###
    ### Controls how continuous queries are run within InfluxDB.
    ###
    
    [continuous_queries]
      log-enabled = true
      enabled = true
      # run-interval = "1s" # interval for how often continuous queries will be checked if they need to run
    
  7. Exit the container and restart it. Again, be careful with your edits; if the config is broken, the container will not come back up.

    $ docker restart myinfluxdb
    
  8. Enter the container again and run influx commands

    root@5f1bb39363e6:/# influx     
    Connected to http://localhost:8086 version 1.8.3
    InfluxDB shell version: 1.8.3
    > show users
    ERR: unable to parse authentication credentials
    Warning: It is possible this error is due to not setting a database.
    Please set a database with the command "use <database>".
    > 
    

    The message above is an authentication error. Type exit to leave the current influx session (but do not exit the container), then log in again with the username and password:

    root@5f1bb39363e6:/# influx -username 'master' -password 'abcd1234'
    Connected to http://localhost:8086 version 1.8.3
    InfluxDB shell version: 1.8.3
    > show users
    user   admin
    ----   -----
    master true
    

    This time the login succeeds and the show users statement works.
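
By the way, instead of editing influxdb.conf inside the container, the official influxdb image can also take settings from environment variables. A sketch of enabling auth that way (note this recreates the container, so any data not stored on a volume is lost):

# remove the old container and start a fresh one with auth enabled via an env var
$ docker rm -f myinfluxdb
$ docker run -d -p 8083:8083 -p 8086:8086 \
    -e INFLUXDB_HTTP_AUTH_ENABLED=true \
    --name myinfluxdb influxdb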

First, switch to the mytest database we created:

> use mytest

Inserting data

Thanks to InfluxDB's schemaless design, there is no need to create tables in advance; right after use [database] you can write data. For example:

INSERT cpu,host=serverA,region=us_west value=0.64
 
INSERT temperature,machine=unit42,type=assembly external=25,internal=37

Reading data

SELECT "host", "region", "value" FROM "cpu"
SELECT * FROM "temperature"

-- a measurement can also be referenced by a regex; the following reads data from every measurement in the db
SELECT * FROM /.*/
-- with a where condition
SELECT "region", "value" FROM "cpu" where "host" = "server1"


Client tools

Download (Baidu netdisk):
Link: https://pan.baidu.com/s/1FBFRc2fPkmDoHDYjdNgntA
Extraction code: s4ut


Common InfluxQL statements

-- list all databases
show databases;
-- switch to a specific database
use database_name;
-- list all measurements
show measurements;
-- query 10 rows
select * from measurement_name limit 10;
-- the time field is shown as a nanosecond timestamp by default; switch to a readable format
precision rfc3339; -- subsequent queries show time in RFC3339 format
-- or pass the flag directly when connecting
influx -precision rfc3339
-- list all tag keys of a measurement
show tag keys
-- list all field keys of a measurement
show field keys
-- list all retention policies of a database (there can be several; one is marked as default)
show retention policies;

Batch insertion from code

Create a new Java Spring Boot project. GitHub repository:

pom.xml

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.5.RELEASE</version>
    <relativePath/>
</parent>
<groupId>com.it235</groupId>
<artifactId>influxdb</artifactId>
<version>0.0.1-SNAPSHOT</version>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.influxdb</groupId>
            <artifactId>influxdb-java</artifactId>
            <version>2.15</version>
        </dependency>
    </dependencies>

application.yml

server:
  port: 8010
spring:
  influx:
    # Spring Boot's InfluxDB auto-configuration builds the client bean from url/user/password
    url: http://192.168.1.31:8086
    user: master
    password: abcd1234
    # custom keys read via @Value in the code below
    # (camelCase so the @Value placeholders can resolve them)
    database: mytest
    retentionPolicy: default
    retentionPolicyTime: 30d

Java code

import lombok.extern.slf4j.Slf4j;
import org.influxdb.InfluxDB;
import org.influxdb.dto.Point;
import org.influxdb.dto.Query;
import org.influxdb.dto.QueryResult;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import java.util.Map;
import java.util.concurrent.TimeUnit;

/**
 * @Author: it235.com
 * @Date: 2020-10-10
 * @Description: InfluxDB support class
 */
@Slf4j
@Component
public class InfluxDBSupport {
    /**
     * data retention policy name
     */
    @Value("${spring.influx.retentionPolicy:}")
    private String retentionPolicy;
    /**
     * how long data is kept under the retention policy
     */
    @Value("${spring.influx.retentionPolicyTime:}")
    private String retentionPolicyTime;

    @Value("${spring.influx.database:}")
    private String database;

    /**
     * InfluxDB client instance (auto-configured by Spring Boot from spring.influx.*)
     */
    @Autowired
    private InfluxDB influxDB;

    @PostConstruct
    public void init() {
        // fall back to the default "autogen" retention policy; done in @PostConstruct
        // rather than the constructor so it runs after the @Value fields are injected
        this.retentionPolicy = retentionPolicy == null || "".equals(retentionPolicy) ? "autogen" : retentionPolicy;
        this.retentionPolicyTime = retentionPolicyTime == null || "".equals(retentionPolicyTime) ? "30d" : retentionPolicyTime;
    }

    /**
     * Create the retention policy: policy name / database name / keep data for e.g. 30d /
     * replication factor 1 / the trailing DEFAULT makes it the database's default policy
     */
    public void createRetentionPolicy() {
        String command = String.format("CREATE RETENTION POLICY \"%s\" ON \"%s\" DURATION %s REPLICATION %s DEFAULT",
                retentionPolicy, database, retentionPolicyTime, 1);
        this.query(command);
    }

    /**
     * Query
     *
     * @param command the query statement
     * @return the query result
     */
    public QueryResult query(String command) {
        return influxDB.query(new Query(command, database));
    }

    /**
     * Insert
     *
     * @param measurement the measurement (table)
     * @param tags        the tags
     * @param fields      the fields
     */
    public void insert(String measurement, Map<String, String> tags, Map<String, Object> fields) {
        Point.Builder builder = Point.measurement(measurement);
        // with nanosecond timestamps you may hit: partial write: points beyond retention policy dropped=1
        // builder.time(System.nanoTime(), TimeUnit.NANOSECONDS);
        builder.time(System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        builder.tag(tags);
        builder.fields(fields);

        log.info("influxDB insert data:[{}]", builder.build().toString());
        influxDB.write(database, "", builder.build());
    }
}

The startup class:

import lombok.extern.slf4j.Slf4j;
import org.influxdb.dto.QueryResult;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import java.util.HashMap;
import java.util.Map;

/**
 * @Author: it235.com
 * @Date: 2020-10-10
 * @Description: application entry point
 */
@Slf4j
@SpringBootApplication
public class InfluxdbDemoApplication implements CommandLineRunner {

    public static void main(String[] args) {
        SpringApplication.run(InfluxdbDemoApplication.class, args);
    }

    @Autowired
    private InfluxDBSupport influxDBSupport;

    @Override
    public void run(String... args) throws Exception {
        // insertion test
        insertTest();

        // query test
        //queryTest();
    }

    /**
     * Insertion test
     * @throws InterruptedException if the sleep is interrupted
     */
    public void insertTest() throws InterruptedException {
        Map<String, String> tagsMap = new HashMap<>();
        Map<String, Object> fieldsMap = new HashMap<>();
        System.out.println("influxDB start time :" + System.currentTimeMillis());
        int i = 0;
        for (; ; ) {
            Thread.sleep(100);
            tagsMap.put("value", String.valueOf(i % 10));
            tagsMap.put("host", "https://www.it235.com");
            tagsMap.put("region", "west" + (i % 5));
            fieldsMap.put("count", i % 5);
            influxDBSupport.insert("cpu_test", tagsMap, fieldsMap);
            i++;
        }
    }

    /**
     * Query test
     */
    public void queryTest(){
        QueryResult rs = influxDBSupport.query("select * from usage");
        log.info("query result => {}", rs);
        if (!rs.hasError() && !rs.getResults().isEmpty()) {
            rs.getResults().forEach(System.out::println);
        }
    }

}

Start the program and watch the console: you can see data being inserted in a loop. You can also check inside influxdb, as sketched below.
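
A quick way to verify from the host (reusing the container and the credentials created earlier):

$ docker exec -it myinfluxdb influx -username 'master' -password 'abcd1234'
> use mytest
> select * from cpu_test limit 5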


Installing Grafana with Docker and integrating InfluxDB

About Grafana

Grafana is an open-source analytics and visualization platform: it queries data sources such as InfluxDB and renders the results on dashboards.

Installing Grafana

We have already covered installing Docker and its related commands, so here we only cover Grafana itself.

  • Pull the image

    $ docker pull grafana/grafana
    $ docker images
    
  • Create and run the container

    $ docker run -d -p 3000:3000 --name=it35graf grafana/grafana
    $ docker ps -a
    
  • Open the firewall port

    $ firewall-cmd --zone=public --add-port=3000/tcp --permanent
    $ firewall-cmd --reload
    
  • Visit http://ip:3000 in a browser; the default username and password are both admin


  • Grafana is now installed

Configuring the InfluxDB data source

In Grafana, go to Configuration → Data Sources → Add data source and choose InfluxDB. Set the HTTP URL to http://ip:8086 (the host running the myinfluxdb container), and under InfluxDB Details fill in the database mytest, the user master and the password abcd1234, then click Save & Test.

Creating a Dashboard

A dashboard is Grafana's tool for presenting data; we can display the data stored in influxdb on a dashboard. Create a new dashboard, add a panel, pick the InfluxDB data source, and choose the measurement to chart.

Note: the measurement you select must actually contain data, otherwise you will not see anything. A sample panel query is sketched below.
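
For example, a panel query against the cpu_test measurement written by the demo program could look like this ($timeFilter and $__interval are Grafana's built-in macros for the dashboard's time range and automatic grouping interval):

SELECT mean("count") FROM "cpu_test" WHERE $timeFilter GROUP BY time($__interval), "region"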

Integration test

  • Start the batch-insertion program from the previous section

  • Watch the result on the Grafana panel

With that, the Docker-based Grafana + influxdb integration is complete.
