Removing nodes from a Greenplum cluster, and redistributing the removed nodes' data to the remaining nodes via serial restore
Yesterday, while upgrading a single-host GP database from 3.3.6 to 4.0.4, I kept hitting out-of-memory errors. Since 4.0.4 has higher memory requirements (the VM currently has only 2 GB of RAM), I decided to try removing 4 of the original 8 segment instances to save memory.
I found an article online (original link: ) that is quite good, but its data-restore procedure is cumbersome; for a large database that approach is basically infeasible. Since Greenplum supports serial restore, I wanted to use a serial restore to bring back the data from the removed segments instead.
Step 1: Sample the data distribution before removing the nodes
[gpadmin@mdw-1 data]$ psql -d cust_utf8
psql (8.2.13)
Type "help" for help.
cust_utf8=# select gp_segment_id,count(1) from mi_boss.tb_dim_sys_param group by 1 order by 1;
gp_segment_id | count
---------------+-------
0 | 53737
1 | 53727
2 | 53770
3 | 53654
4 | 53858
5 | 53805
6 | 53690
7 | 53756
Step 2: Check the current segment configuration:
cust_utf8=# select * from gp_configuration;
content | definedprimary | dbid | isprimary | valid | hostname | port | datadir
---------+----------------+------+-----------+-------+----------+-------+-------------------
-1 | t | 1 | t | t | mdw-1 | 5432 | /data/master/gp-1
0 | t | 2 | t | t | sdw1-1 | 50001 | /data/vol1/gp0
1 | t | 3 | t | t | sdw1-2 | 50002 | /data/vol2/gp1
2 | t | 4 | t | t | sdw1-3 | 50003 | /data/vol1/gp2
3 | t | 5 | t | t | sdw1-4 | 50004 | /data/vol2/gp3
4 | t | 6 | t | t | sdw2-1 | 50005 | /data/vol1/gp4
5 | t | 7 | t | t | sdw2-2 | 50006 | /data/vol2/gp5
6 | t | 8 | t | t | sdw2-3 | 50007 | /data/vol1/gp6
7 | t | 9 | t | t | sdw2-4 | 50008 | /data/vol2/gp7
(9 rows)
cust_utf8=# select * from gp_id;
gpname | numsegments | dbid | content
--------------------+-------------+------+---------
Greenplum Database | 8 | 1 | -1
Step 3: Back up the database
gp_dump --gp-d=/data/backup2 cust_utf8
This step had already been run earlier, so no further action is shown here.
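gp_dump writes one dump file per instance into the directory given by --gp-d, with the dbid embedded in the filename (the gp_dump_0_&lt;dbid&gt;_&lt;timestamp&gt; names used in Step 5 below). A minimal sketch, assuming that naming convention, which checks that every expected per-segment dump exists before touching the catalog; the directory and timestamp are just this article's example values:

```shell
# Sketch: verify a per-segment dump file exists for each primary dbid (2-9
# per the gp_configuration output above) before shrinking the cluster.
check_dumps() {
    dir=$1
    ts=$2
    missing=0
    for dbid in 2 3 4 5 6 7 8 9; do
        f="$dir/gp_dump_0_${dbid}_${ts}"
        if [ ! -f "$f" ]; then
            echo "missing dump for dbid $dbid: $f"
            missing=1
        fi
    done
    [ "$missing" -eq 0 ] && echo "all segment dumps present"
}

# Example invocation with the paths from this article:
# check_dumps /data/backup2 20110106102229
```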
Step 4:
4.1: Start the database in maintenance (master-only) mode and edit the system catalogs to remove the segments with dbid 6-9 (content 4-7):
[gpadmin@mdw-1 ~]$ gpstart -m
20110107:09:29:04:gpstart:mdw-1:gpadmin-[INFO]:-Starting gpstart with args: '-m'
20110107:09:29:04:gpstart:mdw-1:gpadmin-[INFO]:-Gathering information and validating the environment...
20110107:09:29:04:gpstart:mdw-1:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 3.3.6.1 build 1'
20110107:09:29:04:gpstart:mdw-1:gpadmin-[INFO]:-Starting Master instance in admin mode
20110107:09:29:06:gpstart:mdw-1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110107:09:29:06:gpstart:mdw-1:gpadmin-[INFO]:-Obtaining Segment details from master...
20110107:09:29:06:gpstart:mdw-1:gpadmin-[INFO]:-Master Started...
[gpadmin@mdw-1 ~]$ psql -d cust_utf8
psql: FATAL: System was started in master-only utility mode - only utility mode connections are allowed
[gpadmin@mdw-1 ~]$ psql -d postgres
psql: FATAL: System was started in master-only utility mode - only utility mode connections are allowed
[gpadmin@mdw-1 ~]$ psql
psql: FATAL: System was started in master-only utility mode - only utility mode connections are allowed
[gpadmin@mdw-1 ~]$ PGOPTIONS=”-c gp_session_role=utility” psql
psql: FATAL: System was started in master-only utility mode - only utility mode connections are allowed
(The attempt above failed because curly quotes were typed instead of straight quotes, so the PGOPTIONS value was never applied. Retrying with plain single quotes works:)
[gpadmin@mdw-1 ~]$ PGOPTIONS='-c gp_session_role=utility' psql -d cust_utf8
psql (8.2.13)
Type "help" for help.
cust_utf8=# select * from gp_configuration;
content | definedprimary | dbid | isprimary | valid | hostname | port | datadir
---------+----------------+------+-----------+-------+----------+-------+-------------------
-1 | t | 1 | t | t | mdw-1 | 5432 | /data/master/gp-1
0 | t | 2 | t | t | sdw1-1 | 50001 | /data/vol1/gp0
1 | t | 3 | t | t | sdw1-2 | 50002 | /data/vol2/gp1
2 | t | 4 | t | t | sdw1-3 | 50003 | /data/vol1/gp2
3 | t | 5 | t | t | sdw1-4 | 50004 | /data/vol2/gp3
4 | t | 6 | t | t | sdw2-1 | 50005 | /data/vol1/gp4
5 | t | 7 | t | t | sdw2-2 | 50006 | /data/vol2/gp5
6 | t | 8 | t | t | sdw2-3 | 50007 | /data/vol1/gp6
7 | t | 9 | t | t | sdw2-4 | 50008 | /data/vol2/gp7
(9 rows)
cust_utf8=# select * from gp_id;
gpname | numsegments | dbid | content
--------------------+-------------+------+---------
Greenplum Database | 8 | 1 | -1
(1 row)
cust_utf8=# delete from gp_configuration where content in(4,5,6,7);
DELETE 4
cust_utf8=# update gp_id set numsegments=4;
UPDATE 1
cust_utf8=# select * from gp_configuration;
content | definedprimary | dbid | isprimary | valid | hostname | port | datadir
---------+----------------+------+-----------+-------+----------+-------+-------------------
-1 | t | 1 | t | t | mdw-1 | 5432 | /data/master/gp-1
0 | t | 2 | t | t | sdw1-1 | 50001 | /data/vol1/gp0
1 | t | 3 | t | t | sdw1-2 | 50002 | /data/vol2/gp1
2 | t | 4 | t | t | sdw1-3 | 50003 | /data/vol1/gp2
3 | t | 5 | t | t | sdw1-4 | 50004 | /data/vol2/gp3
(5 rows)
cust_utf8=# select * from gp_id;
gpname | numsegments | dbid | content
--------------------+-------------+------+---------
Greenplum Database | 4 | 1 | -1
(1 row)
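The two catalog edits above can be wrapped into a repeatable script. This is a sketch only, specific to GP 3.3.x system catalogs, and must be run only against a master started with gpstart -m, since it modifies gp_configuration and gp_id directly:

```shell
# Sketch: remove segments content 4-7 (dbid 6-9) from the catalog and
# shrink numsegments to 4, via a utility-mode connection to the master.
shrink_catalog() {
    db=$1
    PGOPTIONS='-c gp_session_role=utility' psql -d "$db" <<'SQL'
delete from gp_configuration where content in (4,5,6,7);
update gp_id set numsegments = 4;
SQL
}

# Example invocation with this article's database:
# shrink_catalog cust_utf8
```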
4.2: Shut down the database in maintenance mode:
[gpadmin@mdw-1 ~]$ gpstop -m
20110107:09:43:27:gpstop:mdw-1:gpadmin-[INFO]:-Starting gpstop with args: '-m'
20110107:09:43:27:gpstop:mdw-1:gpadmin-[INFO]:-Gathering information and validating the environment...
20110107:09:43:28:gpstop:mdw-1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110107:09:43:28:gpstop:mdw-1:gpadmin-[INFO]:-Obtaining Segment details from master...
20110107:09:43:28:gpstop:mdw-1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 3.3.6.1 build 1'
20110107:09:43:28:gpstop:mdw-1:gpadmin-[INFO]:-There are 0 connections to the database
20110107:09:43:28:gpstop:mdw-1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20110107:09:43:28:gpstop:mdw-1:gpadmin-[INFO]:-Master host=mdw-1
20110107:09:43:28:gpstop:mdw-1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20110107:09:43:28:gpstop:mdw-1:gpadmin-[INFO]:-Master segment instance directory=/data/master/gp-1
4.3: Start the database normally to see whether it comes up:
[gpadmin@mdw-1 ~]$ gpstart
20110107:09:45:58:gpstart:mdw-1:gpadmin-[INFO]:-Starting gpstart with args: ''
20110107:09:45:58:gpstart:mdw-1:gpadmin-[INFO]:-Gathering information and validating the environment...
20110107:09:45:59:gpstart:mdw-1:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 3.3.6.1 build 1'
20110107:09:45:59:gpstart:mdw-1:gpadmin-[INFO]:-Starting Master instance in admin mode
20110107:09:46:00:gpstart:mdw-1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110107:09:46:00:gpstart:mdw-1:gpadmin-[INFO]:-Obtaining Segment details from master...
20110107:09:46:01:gpstart:mdw-1:gpadmin-[INFO]:-Master Started...
20110107:09:46:01:gpstart:mdw-1:gpadmin-[INFO]:-Shutting down master
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:---------------------------
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-Master instance parameters
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:---------------------------
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-Database = template1
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-Master Port = 5432
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-Master directory = /data/master/gp-1
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-Master standby = Off
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:---------------------------------------
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-Segment instances that will be started
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:---------------------------------------
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-Host Datadir Port Status
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-sdw1-1 /data/vol1/gp0 50001 Valid
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-sdw1-2 /data/vol2/gp1 50002 Valid
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-sdw1-3 /data/vol1/gp2 50003 Valid
20110107:09:46:04:gpstart:mdw-1:gpadmin-[INFO]:-sdw1-4 /data/vol2/gp3 50004 Valid
Continue with Greenplum instance startup Yy|Nn (default=N):
> y
20110107:09:46:05:gpstart:mdw-1:gpadmin-[INFO]:-No standby master configured. skipping...
20110107:09:46:06:gpstart:mdw-1:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
.............
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-Process results...
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-----------------------------------------------------
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-Total processes marked as completed = 4
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-Total processes marked as failed = 0
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-----------------------------------------------------
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-Total instances marked invalid and bypassed = 0
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-Successfully started 4 of 4 segment instances
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-----------------------------------------------------
20110107:09:46:19:gpstart:mdw-1:gpadmin-[INFO]:-Starting Master instance mdw-1 directory /data/master/gp-1
20110107:09:46:22:gpstart:mdw-1:gpadmin-[INFO]:-Command pg_ctl reports Master mdw-1 instance active
20110107:09:46:22:gpstart:mdw-1:gpadmin-[INFO]:-Database successfully started with no errors reported
OK, it started successfully.
4.4: Verify the data distribution:
[gpadmin@mdw-1 ~]$ psql -d cust_utf8
psql (8.2.13)
Type "help" for help.
cust_utf8=# select gp_segment_id,count(1) from mi_boss.tb_dim_sys_param group by 1 order by 1;
gp_segment_id | count
---------------+-------
0 | 53737
1 | 53727
2 | 53770
3 | 53654
(4 rows)
Comparing this with the result from Step 1, the data on the last 4 segments is indeed gone. Next, use the existing backup to restore the data from the four removed segments via serial restore.
Step 5: Use serial restore to append the 4 removed segments' data onto the remaining segments
5.1: Run the serial restore:
[gpadmin@mdw-1 backup2]$ psql cust_utf8 -f gp_dump_0_6_20110106102229
SET
SET
SET
SET
SET
SET
[gpadmin@mdw-1 backup2]$ psql cust_utf8 -f gp_dump_0_7_20110106102229
SET
SET
SET
SET
SET
SET
[gpadmin@mdw-1 backup2]$ psql cust_utf8 -f gp_dump_0_8_20110106102229
SET
SET
SET
SET
SET
SET
[gpadmin@mdw-1 backup2]$ psql cust_utf8 -f gp_dump_0_9_20110106102229
SET
SET
SET
SET
SET
SET
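The four restores above can be scripted as a loop. A sketch, assuming the dump filenames shown above; ON_ERROR_STOP=1 makes psql abort the run at the first failed statement instead of silently continuing:

```shell
# Sketch: replay the per-segment dumps of the removed instances (dbid 6-9)
# against the shrunken cluster, one file at a time, stopping on error.
restore_removed_segments() {
    dir=$1
    ts=$2
    db=$3
    for dbid in 6 7 8 9; do
        echo "restoring $dir/gp_dump_0_${dbid}_${ts} into $db"
        psql -v ON_ERROR_STOP=1 -d "$db" -f "$dir/gp_dump_0_${dbid}_${ts}" || return 1
    done
}

# Example invocation with this article's values:
# restore_removed_segments /data/backup2 20110106102229 cust_utf8
```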
5.2: Verify that the data was restored successfully:
[gpadmin@mdw-1 backup2]$ psql -d cust_utf8
psql (8.2.13)
Type "help" for help.
cust_utf8=# select gp_segment_id,count(1) from mi_boss.tb_dim_sys_param group by 1 order by 1;
gp_segment_id | count
---------------+--------
0 | 107595
1 | 107532
2 | 107460
3 | 107410
(4 rows)
53737 + 53727 + 53770 + 53654 + 53858 + 53805 + 53690 + 53756 = 107595 + 107532 + 107460 + 107410 = 429997
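The row-count totals can be checked mechanically; a one-line sanity check of the arithmetic above, using the counts from the two queries:

```shell
# Verify that the 8-segment counts (Step 1) and the 4-segment counts
# (Step 5.2) sum to the same total.
before=$((53737+53727+53770+53654+53858+53805+53690+53756))
after=$((107595+107532+107460+107410))
echo "before=$before after=$after"
[ "$before" -eq "$after" ] && echo "row counts match"
```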
OK, the whole test completed successfully.
From the ITPUB blog. Link: http://blog.itpub.net/28612416/viewspace-1710919/. Please credit the source when reprinting.