Ops Troubleshooting Notes: Deploying an OpenDaylight Nitrogen Cluster on Ubuntu 14.04
It is recommended to consult the official tutorial: Setting Up Clustering.
I. Experimental Environment
- Host node OS version: Ubuntu 14.04 (64-bit)
odl@mpodl:~$ uname -a
Linux mpodl 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
- Host node hardware: single-core CPU, 4 GB RAM, 50 GB disk
odl@mpodl:~$ sudo lshw
description: Computer
product: Standard PC (i440FX + PIIX, 1996) ()
vendor: QEMU
version: pc-i440fx-trusty
width: 64 bits
capabilities: smbios-2.4 dmi-2.4 vsyscall32
configuration: boot=normal uuid=053D43B6-2E3C-CEA4-4C52-833DDD1749BE
*-core
description: Motherboard
physical id: 0
# CPU information
*-cpu
description: CPU
product: QEMU Virtual CPU version 2.0.0
vendor: Intel Corp.
physical id: 401
bus info: cpu@0
slot: CPU 1
size: 2GHz
capacity: 2GHz
width: 64 bits
# Memory information
*-memory
description: System Memory
physical id: 1000
size: 4GiB
# Disk information
*-disk
description: ATA Disk
product: QEMU HARDDISK
physical id: 0.0.0
bus info: scsi@0:0.0.0
logical name: /dev/sda
version: 0
serial: QM00001
size: 50GiB (53GB)
*-volume:0
description: EXT4 volume
vendor: Linux
physical id: 1
bus info: scsi@0:0.0.0,1
logical name: /dev/sda1
logical name: /
version: 1.0
serial: 65d80188-ddfb-4f08-8018-a0c2e1da8af3
size: 46GiB
capacity: 46GiB
*-volume:1
description: Extended partition
physical id: 2
bus info: scsi@0:0.0.0,2
logical name: /dev/sda2
size: 4093MiB
capacity: 4093MiB
- Cluster environment: 3 host nodes
Cluster_Node1: Ubuntu 14.04 -- [IP_Addr]=192.168.1.124
Cluster_Node2: Ubuntu 14.04 -- [IP_Addr]=192.168.1.125
Cluster_Node3: Ubuntu 14.04 -- [IP_Addr]=192.168.1.104
All host nodes are attached to the same Open vSwitch switch and can ping one another.
Each host node has JDK 1.8 installed and configured; for installation steps see: Installing Java 8 on Ubuntu via a PPA with automatic environment-variable configuration.
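Before moving on, it is worth confirming both prerequisites on every node. A minimal check using only standard tools (the expected version string assumes the JDK 8 installation described above):
# run on each host node
for ip in 192.168.1.124 192.168.1.125 192.168.1.104; do
    ping -c 1 -W 2 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done
# should report a 1.8.x version string
java -version 2>&1 | head -n 1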
II. Deployment Procedure
1. Download OpenDaylight Nitrogen
Run the following command on every host node to download the tar.gz archive into the ODL_N subdirectory of the user's home directory:
odl@mpodl:~/ODL_N$ wget -P . https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/karaf/0.7.2/karaf-0.7.2.tar.gz
Alternatively, download the archive from the official website and upload it from your local machine to the server's OpenDaylight runtime environment with Xftp.
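When the archive arrives via a mirror or an Xftp transfer, it is prudent to verify it before extraction. A sketch, assuming the Nexus repository publishes a .sha1 checksum next to the artifact (skip this step if it does not):
odl@mpodl:~/ODL_N$ wget -q https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/karaf/0.7.2/karaf-0.7.2.tar.gz.sha1
odl@mpodl:~/ODL_N$ echo "$(cat karaf-0.7.2.tar.gz.sha1)  karaf-0.7.2.tar.gz" | sha1sum -c -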
2. Install OpenDaylight Nitrogen
Run the following command on every host node to extract the archive into the ODL_N subdirectory:
odl@mpodl:~/ODL_N$ tar -zxvf karaf-0.7.2.tar.gz
Once the archive has been extracted, OpenDaylight Nitrogen is considered installed.
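As a quick sanity check, confirm that the distribution unpacked correctly and ships the clustering helper script used in the next step; both karaf and configure_cluster.sh should be listed:
odl@mpodl:~/ODL_N$ ls ./karaf-0.7.2/bin/ | grep -E 'karaf|configure_cluster'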
3. Configure the OpenDaylight Nitrogen cluster
Run the following command on each host node to configure the cluster via the helper script:
Command format: sudo bash ./karaf-0.7.2/bin/configure_cluster.sh [index] [seed_node_list]
Here, [index] is a positive integer identifying which host node in [seed_node_list] the script is being run on, and [seed_node_list] is the list of IP addresses of the host nodes that make up the cluster, separated by spaces or commas.
The concrete commands are therefore:
# Cluster_Node1: IP=192.168.1.124
odl@mpodl:~/ODL_N$ sudo bash ./karaf-0.7.2/bin/configure_cluster.sh 1 192.168.1.124 192.168.1.125 192.168.1.104
# Cluster_Node2: IP=192.168.1.125
odl@mpodl:~/ODL_N$ sudo bash ./karaf-0.7.2/bin/configure_cluster.sh 2 192.168.1.124 192.168.1.125 192.168.1.104
# Cluster_Node3: IP=192.168.1.104
odl@mpodl:~/ODL_N$ sudo bash ./karaf-0.7.2/bin/configure_cluster.sh 3 192.168.1.124 192.168.1.125 192.168.1.104
After the command has run, the output looks as follows (shown for Cluster_Node1):
# Cluster_Node1: IP=192.168.1.124
odl@mpodl:~/ODL_N$ sudo bash ./karaf-0.7.2/bin/configure_cluster.sh 1 192.168.1.124 192.168.1.125 192.168.1.104
################################################
## Configure Cluster ##
################################################
Configuring unique name in akka.conf
Configuring hostname in akka.conf
Configuring data and rpc seed nodes in akka.conf
modules = [
{
name = "inventory"
namespace = "urn:opendaylight:inventory"
shard-strategy = "module"
},
{
name = "topology"
namespace = "urn:TBD:params:xml:ns:yang:network-topology"
shard-strategy = "module"
},
{
name = "toaster"
namespace = "http://netconfcentral.org/ns/toaster"
shard-strategy = "module"
}
]
Configuring replication type in module-shards.conf
################################################
## NOTE: Manually restart controller to ##
## apply configuration. ##
################################################
Notes:
(1) Running the command above generates an initial subdirectory under karaf-0.7.2/configuration, as shown below:
odl@mpodl:~/ODL_N$ ls ./karaf-0.7.2/configuration/
context.xml factory initial logback.xml tomcat-logging.properties tomcat-server.xml
odl@mpodl:~/ODL_N$ ls ./karaf-0.7.2/configuration/initial/
akka.conf modules.conf module-shards.conf
As can be seen, the cluster configuration script generated three configuration files in the initial subdirectory: akka.conf, modules.conf, and module-shards.conf.
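The modules.conf file holds the same module-to-shard mapping that the script echoed above (inventory, topology, toaster) and can be inspected the same way:
odl@mpodl:~/ODL_N$ cat ./karaf-0.7.2/configuration/initial/modules.conf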
(2) The akka.conf file on the first Ubuntu host node contains the following:
odl@mpodl:~/ODL_N$ cat ./karaf-0.7.2/configuration/initial/akka.conf
odl-cluster-data {
akka {
remote {
artery {
enabled = off
canonical.hostname = "192.168.1.124" # 本機IP地址
canonical.port = 2550
}
netty.tcp {
hostname = "192.168.1.124" # 本機IP地址
port = 2550
}
# when under load we might trip a false positive on the failure detector
# transport-failure-detector {
# heartbeat-interval = 4 s
# acceptable-heartbeat-pause = 16s
# }
}
cluster {
# Remove ".tcp" when using artery.
# list of cluster seed nodes
seed-nodes = ["akka.tcp://opendaylight-cluster-data@192.168.1.124:2550",
"akka.tcp://opendaylight-cluster-data@192.168.1.125:2550",
"akka.tcp://opendaylight-cluster-data@192.168.1.104:2550"]
roles = ["member-1"]
}
persistence {
# By default the snapshots/journal directories live in KARAF_HOME. You can choose to put it somewhere else by
# modifying the following two properties. The directory location specified may be a relative or absolute path.
# The relative path is always relative to KARAF_HOME.
# snapshot-store.local.dir = "target/snapshots"
# journal.leveldb.dir = "target/journal"
journal {
leveldb {
# Set native = off to use a Java-only implementation of leveldb.
# Note that the Java-only version is not currently considered by Akka to be production quality.
# native = off
}
}
}
}
}
Similarly, the akka.conf files on the second and third Ubuntu host nodes look the same, differing only in the local IP address and the member role (member-2 / member-3).
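A quick way to spot those per-node differences without reading the whole file is to grep for the fields the script customises:
# run on each node; hostname and roles should match that node's IP and member index
odl@mpodl:~/ODL_N$ grep -E 'hostname|roles' ./karaf-0.7.2/configuration/initial/akka.conf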
(3) The module-shards.conf file on the first Ubuntu host node contains the following:
odl@mpodl:~/ODL_N$ cat ./karaf-0.7.2/configuration/initial/module-shards.conf
module-shards = [
{
name = "default"
shards = [
{
name = "default"
replicas = ["member-1",
"member-2",
"member-3"]
}
]
},
{
name = "inventory"
shards = [
{
name="inventory"
replicas = ["member-1",
"member-2",
"member-3"]
}
]
},
{
name = "topology"
shards = [
{
name="topology"
replicas = ["member-1",
"member-2",
"member-3"]
}
]
},
{
name = "toaster"
shards = [
{
name="toaster"
replicas = ["member-1",
"member-2",
"member-3"]
}
]
}
]
The module-shards.conf files on the second and third Ubuntu host nodes are identical to this one.
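To double-check that they really are identical, the file can be diffed against a remote copy; a minimal sketch, assuming SSH access between the nodes with the same odl user and directory layout:
# run from Cluster_Node1; no output means the files match
odl@mpodl:~/ODL_N$ ssh odl@192.168.1.125 cat ODL_N/karaf-0.7.2/configuration/initial/module-shards.conf | diff - ./karaf-0.7.2/configuration/initial/module-shards.conf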
4. Start the OpenDaylight Nitrogen cluster
Run the following command on every host node to start the OpenDaylight instance:
odl@mpodl:~/ODL_N$ ./karaf-0.7.2/bin/karaf
karaf: JAVA_HOME not set; results may vary
Apache Karaf starting up. Press Enter to open the shell now...
100% [========================================================================]
Karaf started in 8s. Bundle stats: 208 active, 209 total
________ ________ .__ .__ .__ __
\_____ \ ______ ____ ____ \______ \ _____ ___.__.| | |__| ____ | |___/ |_
/ | \\____ \_/ __ \ / \ | | \\__ \< | || | | |/ ___\| | \ __\
/ | \ |_> > ___/| | \| ` \/ __ \\___ || |_| / /_/ > Y \ |
\_______ / __/ \___ >___| /_______ (____ / ____||____/__\___ /|___| /__|
\/|__| \/ \/ \/ \/\/ /_____/ \/
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown OpenDaylight.
opendaylight-user@root>
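Note the "karaf: JAVA_HOME not set; results may vary" warning in the banner above. Karaf still starts because java is on the PATH, but it is cleaner to export JAVA_HOME before launching; a sketch in which the JDK path is an assumption and should be adjusted to your actual installation:
# adjust the path to match your JDK 8 install
odl@mpodl:~/ODL_N$ export JAVA_HOME=/usr/lib/jvm/java-8-oracle
odl@mpodl:~/ODL_N$ ./karaf-0.7.2/bin/karaf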
Next, run feature:list -i to check whether odl-mdsal-clustering is already installed. If it is not, install it with feature:install odl-mdsal-clustering.
opendaylight-user@root>feature:list -i
Name | Version | Required | State | Repository | Description
----------------------------------------------------------------------------------------------------------------------------------------------------
odl-mdsal-broker | 1.6.2 | | Started | odl-mdsal-1.6.2 | odl-mdsal-broker
odl-mdsal-clustering | 1.6.2 | x | Started | odl-mdsal-clustering | odl-mdsal-clustering
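The same check can also be scripted from the OS shell through the Karaf client, which is convenient when verifying all three nodes; a sketch assuming the default Karaf shell settings:
odl@mpodl:~/ODL_N$ ./karaf-0.7.2/bin/client 'feature:list -i | grep mdsal-clustering'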
5. Verify that the OpenDaylight Nitrogen cluster has formed
Run the following command in the Karaf console on every host node to determine that node's role (Leader/Follower); ld is the Karaf alias for log:display:
opendaylight-user@root> ld | grep clustering
On host node 1 the output is:
2018-03-21 13:22:45,688 | INFO | d-dispatcher-125 | Shard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | member-1-shard-prefix-configuration-shard-config (Candidate): Starting new election term 21
2018-03-21 13:22:45,741 | INFO | d-dispatcher-125 | Shard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | member-1-shard-prefix-configuration-shard-config (Candidate) :- Switching from behavior Candidate to Leader, election term: 21
2018-03-21 13:22:45,742 | INFO | ult-dispatcher-5 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-1-shard-prefix-configuration-shard-config , received role change from Candidate to Leader
2018-03-21 13:22:45,821 | INFO | d-dispatcher-121 | Shard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | member-1-shard-prefix-configuration-shard-operational (Candidate): Starting new election term 21
2018-03-21 13:22:45,858 | INFO | d-dispatcher-125 | Shard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | member-1-shard-prefix-configuration-shard-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 21
2018-03-21 13:22:45,858 | INFO | ult-dispatcher-5 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-1-shard-prefix-configuration-shard-operational , received role change from Candidate to Leader
2018-03-21 13:22:45,872 | INFO | d-dispatcher-125 | EntityOwnershipShard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | member-1-shard-entity-ownership-operational (Candidate): Starting new election term 21
2018-03-21 13:22:45,912 | INFO | d-dispatcher-145 | EntityOwnershipShard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | member-1-shard-entity-ownership-operational (Candidate) :- Switching from behavior Candidate to Leader, election term: 21
2018-03-21 13:22:45,921 | INFO | lt-dispatcher-21 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-1-shard-entity-ownership-operational , received role change from Candidate to Leader
On host node 2 the output is:
2018-03-21 13:22:36,286 | INFO | ult-dispatcher-4 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-2-shard-prefix-configuration-shard-config , received role change from null to Follower
2018-03-21 13:22:36,287 | INFO | ult-dispatcher-4 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-2-shard-prefix-configuration-shard-operational , received role change from null to Follower
2018-03-21 13:22:36,287 | INFO | ult-dispatcher-4 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-2-shard-prefix-configuration-shard-config , registered listener akka://opendaylight-cluster-data/user/shardmanager-config
2018-03-21 13:22:36,287 | INFO | ult-dispatcher-4 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-2-shard-prefix-configuration-shard-operational , registered listener akka://opendaylight-cluster-data/user/shardmanager-operational
2018-03-21 13:22:36,305 | INFO | rd-dispatcher-32 | EntityOwnershipShard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | Starting recovery for member-2-shard-entity-ownership-operational with journal batch size 1
2018-03-21 13:22:36,313 | INFO | rd-dispatcher-38 | EntityOwnershipShard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | Recovery completed - Switching actor to Follower - Persistence Id = member-2-shard-entity-ownership-operational Last index in log = -1, snapshotIndex = -1, snapshotTerm = -1, journal-size = 0
2018-03-21 13:22:36,317 | INFO | ult-dispatcher-2 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-2-shard-entity-ownership-operational , received role change from null to Follower
On host node 3 the output is:
2018-03-21 13:22:39,418 | INFO | ult-dispatcher-6 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-3-shard-prefix-configuration-shard-operational , received role change from null to Follower
2018-03-21 13:22:39,418 | INFO | ult-dispatcher-6 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-3-shard-prefix-configuration-shard-config , received role change from null to Follower
2018-03-21 13:22:39,418 | INFO | ult-dispatcher-6 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-3-shard-prefix-configuration-shard-operational , registered listener akka://opendaylight-cluster-data/user/shardmanager-operational
2018-03-21 13:22:39,418 | INFO | ult-dispatcher-6 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-3-shard-prefix-configuration-shard-config , registered listener akka://opendaylight-cluster-data/user/shardmanager-config
2018-03-21 13:22:39,466 | INFO | rd-dispatcher-23 | EntityOwnershipShard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | Starting recovery for member-3-shard-entity-ownership-operational with journal batch size 1
2018-03-21 13:22:39,470 | INFO | rd-dispatcher-23 | EntityOwnershipShard | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | Recovery completed - Switching actor to Follower - Persistence Id = member-3-shard-entity-ownership-operational Last index in log = -1, snapshotIndex = -1, snapshotTerm = -1, journal-size = 0
2018-03-21 13:22:39,473 | INFO | lt-dispatcher-31 | RoleChangeNotifier | 120 - org.opendaylight.controller.sal-clustering-commons - 1.6.2 | RoleChangeNotifier for member-3-shard-entity-ownership-operational , received role change from null to Follower
As the logs show, the first Ubuntu host node became the Leader for these shards while the other two Ubuntu host nodes became Followers, so the cluster has been configured successfully. Note, however, that leadership is elected per shard, so different shards may end up with different Leader/Follower assignments.
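Besides grepping the log, shard roles can also be queried over HTTP via Jolokia, the approach described in the official clustering guide; a sketch assuming the odl-jolokia feature is installed and the default admin/admin credentials are unchanged:
# replace member-1 and the IP when querying the other nodes; the JSON reply contains a "RaftState" field (Leader/Follower)
odl@mpodl:~/ODL_N$ curl -u admin:admin http://192.168.1.124:8181/jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=Shards,name=member-1-shard-default-config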
III. Summary
This article has documented how to set up an OpenDaylight Nitrogen cluster; issues encountered later will be added in follow-ups.