Removing Nodes from a TiDB Distributed Database Cluster Deployed Online with TiUP
Check the current status of the cluster nodes:
tiup cluster display hdcluster
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster display hdcluster
Cluster type:       tidb
Cluster name:       hdcluster
Cluster version:    v4.0.8
SSH type:           builtin
Dashboard URL:
ID                    Role          Host            Ports                            OS/Arch       Status     Data Dir                           Deploy Dir
--                    ----          ----            -----                            -------       ------     --------                           ----------
172.16.254.91:9093    alertmanager  172.16.254.91   9093/9094                        linux/x86_64  Up         /tidb/tidb-data/alertmanager-9093  /tidb/tidb-deploy/alertmanager-9093
172.16.254.91:3000    grafana       172.16.254.91   3000                             linux/x86_64  Up         -                                  /tidb/tidb-deploy/grafana-3000
172.16.254.101:2379   pd            172.16.254.101  2379/2380                        linux/x86_64  Up         /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.102:2379   pd            172.16.254.102  2379/2380                        linux/x86_64  Up         /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.92:2379    pd            172.16.254.92   2379/2380                        linux/x86_64  Up|L       /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.93:2379    pd            172.16.254.93   2379/2380                        linux/x86_64  Up|UI      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.94:2379    pd            172.16.254.94   2379/2380                        linux/x86_64  Up         /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.91:9090    prometheus    172.16.254.91   9090                             linux/x86_64  Up         /tidb/tidb-data/prometheus-9090    /tidb/tidb-deploy/prometheus-9090
172.16.254.103:4000   tidb          172.16.254.103  4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.95:4000    tidb          172.16.254.95   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.96:4000    tidb          172.16.254.96   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.97:4000    tidb          172.16.254.97   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.91:9000    tiflash       172.16.254.91   9000/8123/3930/20170/20292/8234  linux/x86_64  Up         /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
172.16.254.100:20160  tikv          172.16.254.100  20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.104:20160  tikv          172.16.254.104  20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.98:20160   tikv          172.16.254.98   20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.99:20160   tikv          172.16.254.99   20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
Total nodes: 17
Remove, one by one, the nodes that were added in the previous article: 2 pd_servers, 1 tidb_server, and 1 tikv_server.
Remove the tikv_server:
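For reference, `tiup cluster scale-in` should also accept several instances in a single run via a comma-separated `--node` list. The batch form below is only a sketch of that; the actual removal in this article is done node by node in the following steps, so verify the flag syntax against your TiUP version before relying on it.

# Hypothetical batch form: remove all four added nodes in one scale-in run.
# --node takes a comma-separated list of <host>:<port> instance IDs.
tiup cluster scale-in hdcluster \
  --node 172.16.254.104:20160,172.16.254.101:2379,172.16.254.102:2379,172.16.254.103:4000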
tiup cluster scale-in hdcluster --node 172.16.254.104:20160
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.104:20160
This operation will delete the 172.16.254.104:20160 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.101
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.104:20160] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
The component `tikv` will become tombstone, maybe exists in several minutes or hours, after that you can use the prune command to clean it
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`''`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
  - Regenerate config pd -> 172.16.254.92:2379 ... Done
  - Regenerate config pd -> 172.16.254.93:2379 ... Done
  - Regenerate config pd -> 172.16.254.94:2379 ... Done
  - Regenerate config pd -> 172.16.254.101:2379 ... Done
  - Regenerate config pd -> 172.16.254.102:2379 ... Done
  - Regenerate config tikv -> 172.16.254.98:20160 ... Done
  - Regenerate config tikv -> 172.16.254.99:20160 ... Done
  - Regenerate config tikv -> 172.16.254.100:20160 ... Done
  - Regenerate config tidb -> 172.16.254.95:4000 ... Done
  - Regenerate config tidb -> 172.16.254.96:4000 ... Done
  - Regenerate config tidb -> 172.16.254.97:4000 ... Done
  - Regenerate config tidb -> 172.16.254.103:4000 ... Done
  - Regenerate config tiflash -> 172.16.254.91:9000 ... Done
  - Regenerate config prometheus -> 172.16.254.91:9090 ... Done
  - Regenerate config grafana -> 172.16.254.91:3000 ... Done
  - Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
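As the output warns, PD must first migrate all regions off the TiKV store, so the node goes through Offline to Tombstone instead of disappearing immediately. A minimal sketch for watching the store drain with pd-ctl, assuming the v4.0.8 ctl component is installed and 172.16.254.92 is a reachable PD endpoint (field names are as printed by pd-ctl in v4.0):

# Dump all TiKV stores; watch region_count on the removed store fall to 0
# and state_name change to Tombstone before running prune.
tiup ctl:v4.0.8 pd -u http://172.16.254.92:2379 store

# Optional: filter just the store being removed (requires jq).
tiup ctl:v4.0.8 pd -u http://172.16.254.92:2379 store \
  | jq '.stores[] | select(.store.address == "172.16.254.104:20160")
        | {state: .store.state_name, regions: .status.region_count}'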
Remove the pd_servers:
tiup cluster scale-in hdcluster --node 172.16.254.101:2379
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.101:2379
This operation will delete the 172.16.254.101:2379 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.101
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.101:2379] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component pd
Stopping instance 172.16.254.101
Stop pd 172.16.254.101:2379 success
Destroying component pd
Destroying instance 172.16.254.101
Destroy 172.16.254.101 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.101
Destroying instance 172.16.254.101
Destroy monitored on 172.16.254.101 success
Delete public key 172.16.254.101
Delete public key 172.16.254.101 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.101:2379'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
  - Regenerate config pd -> 172.16.254.92:2379 ... Done
  - Regenerate config pd -> 172.16.254.93:2379 ... Done
  - Regenerate config pd -> 172.16.254.94:2379 ... Done
  - Regenerate config pd -> 172.16.254.102:2379 ... Done
  - Regenerate config tikv -> 172.16.254.98:20160 ... Done
  - Regenerate config tikv -> 172.16.254.99:20160 ... Done
  - Regenerate config tikv -> 172.16.254.100:20160 ... Done
  - Regenerate config tikv -> 172.16.254.104:20160 ... Done
  - Regenerate config tidb -> 172.16.254.95:4000 ... Done
  - Regenerate config tidb -> 172.16.254.96:4000 ... Done
  - Regenerate config tidb -> 172.16.254.97:4000 ... Done
  - Regenerate config tidb -> 172.16.254.103:4000 ... Done
  - Regenerate config tiflash -> 172.16.254.91:9000 ... Done
  - Regenerate config prometheus -> 172.16.254.91:9090 ... Done
  - Regenerate config grafana -> 172.16.254.91:3000 ... Done
  - Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
tiup cluster scale-in hdcluster --node 172.16.254.102:2379
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.102:2379
This operation will delete the 172.16.254.102:2379 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.102:2379] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component pd
Stopping instance 172.16.254.102
Stop pd 172.16.254.102:2379 success
Destroying component pd
Destroying instance 172.16.254.102
Destroy 172.16.254.102 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.102
Destroying instance 172.16.254.102
Destroy monitored on 172.16.254.102 success
Delete public key 172.16.254.102
Delete public key 172.16.254.102 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.102:2379'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
  - Regenerate config pd -> 172.16.254.92:2379 ... Done
  - Regenerate config pd -> 172.16.254.93:2379 ... Done
  - Regenerate config pd -> 172.16.254.94:2379 ... Done
  - Regenerate config tikv -> 172.16.254.98:20160 ... Done
  - Regenerate config tikv -> 172.16.254.99:20160 ... Done
  - Regenerate config tikv -> 172.16.254.100:20160 ... Done
  - Regenerate config tikv -> 172.16.254.104:20160 ... Done
  - Regenerate config tidb -> 172.16.254.95:4000 ... Done
  - Regenerate config tidb -> 172.16.254.96:4000 ... Done
  - Regenerate config tidb -> 172.16.254.97:4000 ... Done
  - Regenerate config tidb -> 172.16.254.103:4000 ... Done
  - Regenerate config tiflash -> 172.16.254.91:9000 ... Done
  - Regenerate config prometheus -> 172.16.254.91:9090 ... Done
  - Regenerate config grafana -> 172.16.254.91:3000 ... Done
  - Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
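After both PD scale-ins it is worth confirming that only the three original PD members remain and that a leader is elected. A hedged check with pd-ctl, again assuming the v4.0.8 ctl component and that 172.16.254.92 is a reachable PD endpoint:

# List the surviving PD members and show the current leader.
tiup ctl:v4.0.8 pd -u http://172.16.254.92:2379 member
tiup ctl:v4.0.8 pd -u http://172.16.254.92:2379 member leader show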
Remove the tidb_server:
tiup cluster scale-in hdcluster --node 172.16.254.103:4000
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.103:4000
This operation will delete the 172.16.254.103:4000 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.103:4000] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component tidb
Stopping instance 172.16.254.103
Stop tidb 172.16.254.103:4000 success
Destroying component tidb
Destroying instance 172.16.254.103
Destroy 172.16.254.103 success
- Destroy tidb paths: [/tidb/tidb-deploy/tidb-4000 /etc/systemd/system/tidb-4000.service /tidb/tidb-deploy/tidb-4000/log]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.103
Destroying instance 172.16.254.103
Destroy monitored on 172.16.254.103 success
Delete public key 172.16.254.103
Delete public key 172.16.254.103 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.103:4000'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
  - Regenerate config pd -> 172.16.254.92:2379 ... Done
  - Regenerate config pd -> 172.16.254.93:2379 ... Done
  - Regenerate config pd -> 172.16.254.94:2379 ... Done
  - Regenerate config tikv -> 172.16.254.98:20160 ... Done
  - Regenerate config tikv -> 172.16.254.99:20160 ... Done
  - Regenerate config tikv -> 172.16.254.100:20160 ... Done
  - Regenerate config tikv -> 172.16.254.104:20160 ... Done
  - Regenerate config tidb -> 172.16.254.95:4000 ... Done
  - Regenerate config tidb -> 172.16.254.96:4000 ... Done
  - Regenerate config tidb -> 172.16.254.97:4000 ... Done
  - Regenerate config tiflash -> 172.16.254.91:9000 ... Done
  - Regenerate config prometheus -> 172.16.254.91:9090 ... Done
  - Regenerate config grafana -> 172.16.254.91:3000 ... Done
  - Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
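The TiDB layer is stateless, so removing 172.16.254.103 only reduces SQL-layer capacity; sessions routed to the remaining servers (or to a load balancer in front of them) are unaffected. A quick sanity check against one of the surviving TiDB servers (the user and password here are assumptions; substitute your own credentials):

# Connect to a remaining TiDB server with the MySQL client and print its version.
# -p prompts for the password; drop it if the root account has no password set.
mysql -h 172.16.254.95 -P 4000 -u root -p -e "SELECT VERSION();"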
Check the cluster node status again:
tiup cluster display hdcluster
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster display hdcluster
Cluster type:       tidb
Cluster name:       hdcluster
Cluster version:    v4.0.8
SSH type:           builtin
Dashboard URL:
ID                    Role          Host            Ports                            OS/Arch       Status     Data Dir                           Deploy Dir
--                    ----          ----            -----                            -------       ------     --------                           ----------
172.16.254.91:9093    alertmanager  172.16.254.91   9093/9094                        linux/x86_64  Up         /tidb/tidb-data/alertmanager-9093  /tidb/tidb-deploy/alertmanager-9093
172.16.254.91:3000    grafana       172.16.254.91   3000                             linux/x86_64  Up         -                                  /tidb/tidb-deploy/grafana-3000
172.16.254.92:2379    pd            172.16.254.92   2379/2380                        linux/x86_64  Up|L       /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.93:2379    pd            172.16.254.93   2379/2380                        linux/x86_64  Up|UI      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.94:2379    pd            172.16.254.94   2379/2380                        linux/x86_64  Up         /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.91:9090    prometheus    172.16.254.91   9090                             linux/x86_64  Up         /tidb/tidb-data/prometheus-9090    /tidb/tidb-deploy/prometheus-9090
172.16.254.95:4000    tidb          172.16.254.95   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.96:4000    tidb          172.16.254.96   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.97:4000    tidb          172.16.254.97   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.91:9000    tiflash       172.16.254.91   9000/8123/3930/20170/20292/8234  linux/x86_64  Up         /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
172.16.254.100:20160  tikv          172.16.254.100  20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.104:20160  tikv          172.16.254.104  20160/20180                      linux/x86_64  Tombstone  /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.98:20160   tikv          172.16.254.98   20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.99:20160   tikv          172.16.254.99   20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
Total nodes: 14
There are some nodes can be pruned:
  Nodes: [172.16.254.104:20160]
  You can destroy them with the command: `tiup cluster prune hdcluster`
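As the display output notes, the removed TiKV node stays in the topology as Tombstone until it is pruned. Once pd-ctl reports its state as Tombstone, the leftover entry and its directories can be cleaned up with the command TiUP itself suggests:

# Remove tombstoned instances from the cluster metadata and destroy their files.
tiup cluster prune hdcluster

# Confirm that 172.16.254.104:20160 is gone from the topology.
tiup cluster display hdcluster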
The scale-in is complete; once the Tombstone TiKV node has been pruned, the cluster is back to its original topology.
Source: "ITPUB Blog", link: http://blog.itpub.net/30135314/viewspace-2758372/; please credit the source when reprinting.