PostgreSQL 10.3 Source Installation: Master-Slave Setup with pgpool-II + keepalived High Availability (not yet successful, work in progress)
I. Download the PostgreSQL source package and configure the hosts
Virtual machine environment:
node1 192.168.159.151
node2 192.168.159.152
Operating system: Red Hat Enterprise Linux 6.5
Database: PostgreSQL 10.3
Configure /etc/hosts on both nodes:
vi /etc/hosts
192.168.159.151 node1
192.168.159.152 node2
II. Compile and install
(1) Create the postgres user
useradd -m -r -s /bin/bash -u 5432 postgres
(2) Install the required dependency packages
yum install gettext gcc make perl python perl-ExtUtils-Embed readline-devel zlib-devel openssl-devel libxml2-devel cmake gcc-c++ libxslt-devel openldap-devel pam-devel python-devel cyrus-sasl-devel libgcrypt-devel libgpg-error-devel libstdc++-devel
(3) Configure PostgreSQL
./configure --prefix=/opt/postgresql-10.3 --with-segsize=8 --with-wal-segsize=64 --with-wal-blocksize=16 --with-blocksize=16 --with-libedit-preferred --with-perl --with-python --with-openssl --with-libxml --with-libxslt --enable-thread-safety --enable-nls=zh_CN
If the last few lines of output look like the following, the configuration succeeded; otherwise install further dependencies according to the error messages:
configure: using CPPFLAGS= -D_GNU_SOURCE -I/usr/include/libxml2
configure: using LDFLAGS= -Wl,--as-needed
configure: creating ./config.status
config.status: creating GNUmakefile
config.status: creating src/Makefile.global
config.status: creating src/include/pg_config.h
config.status: creating src/include/pg_config_ext.h
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: linking src/backend/port/tas/dummy.s to src/backend/port/tas.s
config.status: linking src/backend/port/dynloader/linux.c to src/backend/port/dynloader.c
config.status: linking src/backend/port/posix_sema.c to src/backend/port/pg_sema.c
config.status: linking src/backend/port/sysv_shmem.c to src/backend/port/pg_shmem.c
config.status: linking src/backend/port/dynloader/linux.h to src/include/dynloader.h
config.status: linking src/include/port/linux.h to src/include/pg_config_os.h
config.status: linking src/makefiles/Makefile.linux to src/Makefile.port
(4) Compile
make && make install
If the last few lines look like the following, the build and install succeeded:
make[1]: Leaving directory `/opt/postgresql-10.3/src'
make -C config install
make[1]: Entering directory `/opt/postgresql-10.3/config'
/bin/mkdir -p '/opt/postgresql-10.3/lib/pgxs/config'
/usr/bin/install -c -m 755 ./install-sh '/opt/postgresql-10.3/lib/pgxs/config/install-sh'
/usr/bin/install -c -m 755 ./missing '/opt/postgresql-10.3/lib/pgxs/config/missing'
make[1]: Leaving directory `/opt/postgresql-10.3/config'
PostgreSQL installation complete.
(5) Build and install the full distribution (contrib and docs)
make world && make install-world
If the last few lines look like the following, the installation succeeded:
make[1]: Leaving directory `/opt/postgresql-10.3/src'
make -C config install
make[1]: Entering directory `/opt/postgresql-10.3/config'
/bin/mkdir -p '/opt/postgresql-10.3/lib/pgxs/config'
/usr/bin/install -c -m 755 ./install-sh '/opt/postgresql-10.3/lib/pgxs/config/install-sh'
/usr/bin/install -c -m 755 ./missing '/opt/postgresql-10.3/lib/pgxs/config/missing'
make[1]: Leaving directory `/opt/postgresql-10.3/config'
PostgreSQL installation complete.
make: Leaving directory `/opt/postgresql-10.3'
(6) Create the required directories and configure environment variables
mkdir -p /data/pgdata/serverlog
mkdir /data/pg
su - postgres
vi .bash_profile (delete everything that is already there and paste in the following)
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# postgres
# PostgreSQL port
PGPORT=5432
# PostgreSQL data directory
PGDATA=/data/pgdata
export PGPORT PGDATA
# locale
export LANG=zh_CN.utf8
# PostgreSQL installation directory
export PGHOME=/data/pg
# PostgreSQL shared library path
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export DATE=`date +"%Y%m%d%H%M"`
# add the PostgreSQL binaries to PATH
export PATH=$PGHOME/bin:$PATH
# PostgreSQL man pages
export MANPATH=$PGHOME/share/man:$MANPATH
# default PostgreSQL user
export PGUSER=postgres
# default PostgreSQL host
export PGHOST=127.0.0.1
# default database name
export PGDATABASE=postgres
# server log directory
PGLOG="$PGDATA/serverlog"
source .bash_profile
(7) Initialize the database
# run the database initialization script
Log in as root:
chown -R postgres.postgres /data/
su - postgres
$ /opt/postgresql-10.3/bin/initdb --encoding=utf8 -D /data/pg/data
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or by using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
Start the database:
su - postgres
/opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data -l logfile start
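A quick check that the server is actually up (not part of the original article; this assumes the environment variables set in .bash_profile above, so psql connects to 127.0.0.1:5432 as postgres):
/opt/postgresql-10.3/bin/psql -c "select version();"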
(8) Copy the binaries
As root:
mkdir /data/pg/bin
cp /opt/postgresql-10.3/bin/* /data/pg/bin
chown -R postgres.postgres /data/pg/bin
III. PostgreSQL master-slave setup
1. Primary configuration
su - postgres
Edit the primary's pg_hba.conf (vi /data/pg/data/pg_hba.conf) and add an entry for the replication user:
host all replica 192.168.159.0/24 trust
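The step that creates the replication role is not shown in the article; a minimal sketch of what it would look like, assuming the role name and password (replica/replica) referenced later in pgpool.conf, and noting that streaming replication connections also need their own pg_hba.conf entry:
/opt/postgresql-10.3/bin/psql -c "CREATE ROLE replica WITH REPLICATION LOGIN PASSWORD 'replica';"
# in pg_hba.conf, in addition to the line above:
# host replication replica 192.168.159.0/24 trust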
(3) Change the following items in the primary's configuration file; leave the rest unchanged
vi /data/pg/data/postgresql.conf
listen_addresses = '*'
wal_level = hot_standby # hot standby mode
max_wal_senders = 6 # maximum number of streaming replication connections; roughly one per standby
wal_keep_segments = 10240 # important setting
wal_sender_timeout = 60s
max_connections = 512 # the standby's max_connections must be larger than the primary's
archive_mode = on # enable archiving
archive_command = 'cp %p /data/pg/data/archive/%f' # adjust to your environment
checkpoint_timeout = 30min
max_wal_size = 3GB
min_wal_size = 64MB
mkdir /data/pg/data/archive
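The steps that clone the primary onto the standby are not shown in the article; a minimal sketch using pg_basebackup, assuming it is run on node2 as postgres against an empty data directory and using the replica role above (-R writes a basic recovery.conf):
# on node2, with the standby instance stopped
rm -rf /data/pg/data
/opt/postgresql-10.3/bin/pg_basebackup -h 192.168.159.151 -p 5432 -U replica -D /data/pg/data -X stream -P -R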
On the standby, edit its configuration file as well:
vi /data/pg/data/postgresql.conf
listen_addresses = '*'
(5) Start the standby
/opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data/ -l logfile start
cd /data/pg/
When logging in as postgres, the shell prompt showed the following problem:
-bash-4.1$
Run as root:
cp /etc/skel/.bash* /var/lib/pgsql/
Log in again and the prompt becomes:
[postgres@node1 ~]$
4. Manual primary/standby switchover
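The switchover commands themselves are truncated in the article; with the trigger_file configured in the standby's recovery.conf (trigger.kenyon, as seen in the log below), a sketch of the manual promotion, assuming node1 is the old primary and node2 the standby:
# on node1 (old primary): stop it
/opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data -m fast stop
# on node2 (standby): create the trigger file to promote it
touch /data/pg/data/trigger.kenyon
# equivalent alternative: /opt/postgresql-10.3/bin/pg_ctl promote -D /data/pg/data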
Check the logfile; the following messages show the standby has been promoted:
2018-06-04 21:11:01.137 PDT [12818] LOG: trigger file found: /data/pg/data/trigger.kenyon
2018-06-04 21:11:01.148 PDT [12818] LOG: redo done at 0/C02A390
2018-06-04 21:11:01.172 PDT [12818] LOG: selected new timeline ID: 2
2018-06-04 21:11:05.442 PDT [12818] LOG: archive recovery complete
2018-06-04 21:11:05.568 PDT [12817] LOG: database system is ready to accept connections
(4) Modify the former primary's configuration file
vi /data/pg/data/postgresql.conf
max_connections = 1500 # the standby's value must be larger than the primary's
(5) Bring the former primary back as a standby (run on the former primary, 192.168.159.151):
vi /data/pg/data/recovery.conf
vi /data/pg/data/pg_hba.conf
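The contents of this recovery.conf are not given in the article; a minimal sketch for the former primary now following node2, assuming the replica role and the trigger file name used elsewhere in this setup:
standby_mode = 'on'
primary_conninfo = 'host=192.168.159.152 port=5432 user=replica password=replica'
trigger_file = '/data/pg/data/trigger.kenyon'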
Start the former primary, which is now the standby (192.168.159.151):
/opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data/ -l logfile start
Checking the new standby's logfile shows the following errors:
2018-06-05 00:08:00.326 PDT [9729] DETAIL: End of WAL reached on timeline 1 at 0/C02A400.
2018-06-05 00:08:00.327 PDT [9725] LOG: new timeline 2 forked off current database system timeline 1 before current recovery point 0/C02A630
2018-06-05 00:08:05.322 PDT [9729] LOG: restarted WAL streaming at 0/C000000 on timeline 1
2018-06-05 00:08:05.327 PDT [9729] LOG: replication terminated by primary server
2018-06-05 00:08:05.327 PDT [9729] DETAIL: End of WAL reached on timeline 1 at 0/C02A400.
2018-06-05 00:08:05.329 PDT [9725] LOG: new timeline 2 forked off current database system timeline 1 before current recovery point 0/C02A630
2018-06-05 00:08:10.328 PDT [9729] LOG: restarted WAL streaming at 0/C000000 on timeline 1
2018-06-05 00:08:10.332 PDT [9729] LOG: replication terminated by primary server
2018-06-05 00:08:10.332 PDT [9729] DETAIL: End of WAL reached on timeline 1 at 0/C02A400.
2018-06-05 00:08:10.333 PDT [9725] LOG: new timeline 2 forked off current database system timeline 1 before current recovery point 0/C02A630
To fix this, copy the new timeline history file from the new primary over to the new standby and configure the archive/restore commands:
scp /data/pg/data/pg_wal/00000002.history 192.168.159.151:/data/pg/data/pg_wal/
vi /data/pg/data/recovery.conf
restore_command = 'cp /data/pg/data/archive/%f %p'
mkdir /data/pg/data/archive
chown postgres.postgres /data/pg/data/archive
vi /data/pg/data/postgresql.conf
archive_command = 'cp %p /data/pg/data/archive/%f'
IV. Install pgpool-II
(1) Configure passwordless ssh between the two machines
Node 1:
[postgres@node1]$ ssh-keygen -t rsa
Press Enter to accept all defaults.
[postgres@node1]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[postgres@node1]$ chmod go-rwx ~/.ssh/*
[postgres@node1]$ cd ~/.ssh
Node 2:
[postgres@node2 ~]$ ssh-keygen -t rsa
Press Enter to accept all defaults.
[postgres@node2 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[postgres@node2 ~]$ chmod go-rwx ~/.ssh/*
[postgres@node2 ~]$ cd ~/.ssh
Node 1:
[postgres@node1]$ scp id_rsa.pub 192.168.159.152:/home/postgres/.ssh/id_rsa.pub1
Node 2:
[postgres@node2 ~]$ cat id_rsa.pub1 >> authorized_keys
[postgres@node2 ~]$ scp id_rsa.pub 192.168.159.151:/home/postgres/.ssh/id_rsa.pub2
Node 1:
[postgres@node1 ~]$ cat id_rsa.pub2 >> authorized_keys
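Since the failover script later relies on passwordless ssh between the nodes, it is worth verifying the key exchange now (a quick check, not in the original article):
[postgres@node1 ~]$ ssh 192.168.159.152 hostname
[postgres@node2 ~]$ ssh 192.168.159.151 hostname
# each command should print the remote hostname without prompting for a password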
(2) Install pgpool-II
yum -y install libmemcached postgresql-libs.x86_64 openssl098e
(Note: these dependency packages must be installed via yum first, otherwise the pgpool-II RPM simply will not install.)
rpm -ivh pgpool-II-pg10-3.7.2-1pgdg.rhel6.x86_64.rpm
pg_md5 -u postgres -p
Set the password to postgres.
The MD5 hash printed is:
e8a48653851e28c69d0506508fb27fc5
vi /etc/pgpool-II/pcp.conf # add as the last line
postgres:e8a48653851e28c69d0506508fb27fc5
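pgpool.conf below references pool_passwd = 'pool_passwd', but that file is never generated in this article; if md5 authentication through pgpool were needed, a sketch of how it could be created with pg_md5 (assuming user/password postgres/postgres):
pg_md5 --md5auth --username=postgres postgres
# appends an entry for postgres to /etc/pgpool-II/pool_passwd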
mkdir -p /opt/pgpool/oiddir
cp /etc/pgpool-II/pgpool.conf /etc/pgpool-II/pgpool.conf.bak
Check the network interfaces with ifconfig:
[root@node1 pgpool-II]# ifconfig
eth1 Link encap:Ethernet HWaddr 00:0C:29:9E:E8:6D
inet addr:192.168.159.152 Bcast:192.168.159.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe9e:e86d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14557 errors:0 dropped:0 overruns:0 frame:0
TX packets:10820 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1889055 (1.8 MiB) TX bytes:1485329 (1.4 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:5029 errors:0 dropped:0 overruns:0 frame:0
TX packets:5029 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2786891 (2.6 MiB) TX bytes:2786891 (2.6 MiB)
If the interface is configured incorrectly, you will get an error like: arping: unknown iface eth0
Node 1:
vi /etc/pgpool-II/pgpool.conf
listen_addresses = '*'
port = 9999
socket_dir = '/opt/pgpool'
pcp_port = 9898
pcp_socket_dir = '/opt/pgpool'
backend_hostname0 = '192.168.159.151' ## data node node1
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = '192.168.159.152' ## data node node2
backend_port1 = 5432
backend_weight1 = 1
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'syslog'
print_timestamp = on
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = off
log_standby_delay = 'none'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
debug_level = 0
pid_file_name = '/opt/pgpool/pgpool.pid'
logdir = '/tmp'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval'
master_slave_mode = on # master/slave mode
master_slave_sub_mode = 'stream' # streaming replication
sr_check_period = 5
sr_check_user = 'replica'
sr_check_password = 'replica'
delay_threshold = 16000
follow_master_command = ''
parallel_mode = off
pgpool2_hostname = ''
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
health_check_period = 5
health_check_timeout = 20
health_check_user = 'replica'
health_check_password = 'replica'
health_check_max_retries = 3
health_check_retry_delay = 1
failover_command = '/opt/pgpool/failover_stream.sh %d %H /data/pg/data/trigger.kenyon'
failback_command = ''
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = 'nobody'
recovery_password = ''
recovery_1st_stage_command = ''
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = on
trusted_servers = ''
ping_path = '/bin'
wd_hostname = '192.168.159.151'
wd_port = 9000
wd_authkey = ''
delegate_IP = '192.168.159.153'
ifconfig_path = '/sbin'
if_up_cmd = 'ifconfig eth1:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig eth1:0 down'
arping_path = '/usr/sbin' # arping command path
arping_cmd = 'arping -I eth1 -U $_IP_$ -w 1' # -I eth1 specifies the outgoing interface
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = '192.168.159.152' # the peer node's hostname/IP
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth1'
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
other_pgpool_hostname0 = '192.168.159.152' ## the peer pgpool
other_pgpool_port0 = 9999
other_wd_port0 = 9000
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/opt/pgpool/oiddir' # (the oiddir directory must be created under /opt/pgpool in advance)
white_memqcache_table_list = ''
black_memqcache_table_list = ''
Node 2:
vi /etc/pgpool-II/pgpool.conf
listen_addresses = '*'
port = 9999
socket_dir = '/opt/pgpool'
pcp_port = 9898
pcp_socket_dir = '/opt/pgpool'
backend_hostname0 = '192.168.159.151'
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = '192.168.159.152'
backend_port1 = 5432
backend_weight1 = 1
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'syslog'
print_timestamp = on
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = off
log_standby_delay = 'none'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
debug_level = 0
pid_file_name = '/opt/pgpool/pgpool.pid'
logdir = '/tmp'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval'
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 0
sr_check_user = 'replica'
sr_check_password = 'replica'
delay_threshold = 16000
follow_master_command = ''
parallel_mode = off
pgpool2_hostname = ''
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
health_check_period = 0
health_check_timeout = 20
health_check_user = 'nobody'
health_check_password = ''
health_check_max_retries = 0
health_check_retry_delay = 1
failover_command = '/opt/pgpool/failover_stream.sh %d %H /file/data/trigger/file'
failback_command = ''
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = 'nobody'
recovery_password = ''
recovery_1st_stage_command = ''
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = off
trusted_servers = ''
ping_path = '/bin'
wd_hostname = ' '
wd_port = 9000
wd_authkey = ''
delegate_IP = '192.168.159.153'
ifconfig_path = '/sbin'
if_up_cmd = 'ifconfig eth1:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig eth1:0 down'
arping_path = '/usr/sbin' # arping command path
arping_cmd = 'arping -I eth1 -U $_IP_$ -w 1' # -I eth1 specifies the outgoing interface
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = '192.168.159.151'
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth1'
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
other_pgpool_hostname0 = '192.168.159.151' ## the peer pgpool (node1)
other_pgpool_port0 = 9999
other_wd_port0 = 9000
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/opt/pgpool/oiddir'
white_memqcache_table_list = ''
black_memqcache_table_list = ''
vi /opt/pgpool/failover_stream.sh
#! /bin/sh
# Failover command for streaming replication.
# This script assumes that DB node 0 is primary, and 1 is standby.
#
# If standby goes down, do nothing. If primary goes down, create a
# trigger file so that standby takes over primary node.
#
# Arguments: $1: failed node id. $2: new master hostname. $3: path to
# trigger file.
failed_node=$1
new_master=$2
trigger_file=$3
# Do nothing if standby goes down.
#if [ $failed_node = 1 ]; then
# exit 0;
#fi
/usr/bin/ssh -T $new_master /bin/touch $trigger_file
exit 0;
Make the script executable:
chmod u+x /opt/pgpool/failover_stream.sh
scp /opt/pgpool/failover_stream.sh 192.168.159.152:/opt/pgpool/
cp /etc/pgpool-II/pool_hba.conf /etc/pgpool-II/pool_hba.conf.bak
vi /etc/pgpool-II/pool_hba.conf
host all all 192.168.159.151/32 trust
host replication replica 192.168.159.151/32 trust
host postgres postgres 192.168.159.151/32 trust
host all all 192.168.159.152/32 trust
host replication replica 192.168.159.152/32 trust
host postgres postgres 192.168.159.152/32 trust
host all all 192.168.159.153/32 trust
host replication replica 192.168.159.153/32 trust
host postgres postgres 192.168.159.153/32 trust
Note: 192.168.159.153 is the VIP address.
scp /etc/pgpool-II/pool_hba.conf 192.168.159.152:/etc/pgpool-II/
Start pgpool:
pgpool -n &
Stop pgpool:
pgpool -m fast stop
Connect through pgpool:
/data/pg/bin/psql -h 192.168.159.151 -p 9999 -U postgres -d postgres
You can also connect through the VIP:
/data/pg/bin/psql -h 192.168.159.153 -p 9999 -U postgres -d postgres
Check the pgpool nodes:
show pool_nodes;
postgres=# show pool_nodes;
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+-----------------+------+--------+-----------+---------+------------+-------------------+-------------------
0 | 192.168.159.151 | 5432 | up | 0.500000 | primary | 0 | true | 0
1 | 192.168.159.152 | 5432 | down | 0.500000 | standby | 0 | false | 0
(2 rows)
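The watchdog/VIP state can also be inspected through the pcp interface (a sketch; it uses the pcp account registered in pcp.conf above and will prompt for that password):
pcp_watchdog_info -h 192.168.159.151 -p 9898 -U postgres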
V. Install keepalived
tar xvf keepalived-1.4.2.tar.gz
cd keepalived-1.4.2
./configure
make
make install
mkdir /etc/keepalived
cd /etc/keepalived/
Node 1:
vi /etc/keepalived/keepalived.conf
global_defs {
router_id node1
}
vrrp_instance VI_1 {
state BACKUP # both nodes use BACKUP; the node with the higher priority becomes master
interface eth1:0 # monitored network interface
virtual_router_id 51 # must be the same on master and backup
priority 100 # master and backup use different priorities; the higher value wins
advert_int 1 # VRRP multicast advertisement interval, in seconds
authentication {
auth_type PASS # VRRP authentication type; must match on both nodes
auth_pass 1111 # password
}
virtual_ipaddress {
192.168.159.153/24 # VRRP HA virtual address
}
}
Node 2:
vi /etc/keepalived/keepalived.conf
global_defs {
router_id node2
}
vrrp_instance VI_1 {
state BACKUP # both nodes use BACKUP; this node has the lower priority and acts as backup
interface eth1:0 # monitored network interface
virtual_router_id 51 # must be the same on master and backup
priority 90 # master and backup use different priorities; the higher value wins
advert_int 1 # VRRP multicast advertisement interval, in seconds
authentication {
auth_type PASS # VRRP authentication type; must match on both nodes
auth_pass 1111 # password
}
virtual_ipaddress {
192.168.159.153/24 # VRRP HA virtual address
}
}
Start keepalived:
keepalived -D -f /etc/keepalived/keepalived.conf
Check the log:
tail -f /var/log/messages
Check the process:
ps -ef|grep keepalive
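To see which node currently holds the VIP after keepalived starts (a quick check, not in the original article; ip is more reliable than ifconfig for addresses added without a label):
ip addr show eth1 | grep 192.168.159.153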
!!!!! NOTE !!!!! The pgpool high-availability configuration below was tested by me personally; some of the key material was worked out on my own and cannot be found online.
1. Set the required permissions (run on both nodes)
-- set permissions so that ifconfig and arping can be executed; run as root
chmod u+s /sbin/ifconfig
chmod u+s /sbin/ifdown
chmod u+s /sbin/ifup
chmod u+s /usr/sbin/
chmod 755 /opt/pgpool/failover_stream.sh
chown postgres.root /opt/pgpool/failover_stream.sh
2. Configure the pgpool log (run on both nodes); add the following as the last line of rsyslog.conf:
vi /etc/rsyslog.conf
local0.* /var/log/pgpool.log
/etc/init.d/rsyslog restart
3. Configure the key script failover_stream.sh (run on both nodes)
Delete or comment out the original ssh line and modify the script as follows.
When the primary is 192.168.159.151:
vi /opt/pgpool/failover_stream.sh
ifconfig eth1:0 down
/usr/bin/ssh 192.168.159.152 /bin/touch /data/pg/data/trigger.kenyon
/usr/bin/ssh 192.168.159.152 ifconfig eth1:0 up
When the primary is 192.168.159.152:
vi /opt/pgpool/failover_stream.sh
ifconfig eth1:0 down
/usr/bin/ssh 192.168.159.151 /bin/touch /data/pg/data/trigger.kenyon
/usr/bin/ssh 192.168.159.151 ifconfig eth1:0 up
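As an alternative to maintaining two per-node copies, the script could keep using the %H (new master host) argument passed by the original failover_command, so that a single file works on both nodes; a sketch under that assumption:
#! /bin/sh
# assumes failover_command = '/opt/pgpool/failover_stream.sh %d %H /data/pg/data/trigger.kenyon'
failed_node=$1
new_master=$2
trigger_file=$3
# drop the VIP locally, then promote the new master and give it the VIP
ifconfig eth1:0 down
/usr/bin/ssh -T $new_master /bin/touch $trigger_file
/usr/bin/ssh -T $new_master ifconfig eth1:0 up
exit 0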
4. Create an eth1:0 interface configuration file (run on both nodes)
cd /etc/sysconfig/network-scripts/
vi ifcfg-eth1:0
DEVICE="eth1:0"
BOOTPROTO="static"
HWADDR="00:0c:29:0c:7d:4f"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
#UUID="e618ec6a-8bb0-4202-8fe6-54febd0f8c76"
IPADDR=192.168.159.153
NETMASK=255.255.255.0
GATEWAY=192.168.159.1
5. Modify pgpool.conf
vi /etc/pgpool-II/pgpool.conf
failover_command = '/opt/pgpool/failover_stream.sh'
Comment out the original failover_command line and use this form instead.
6. Modify pgpool.conf
vi /etc/pgpool-II/pgpool.conf
heartbeat_device0 = 'eth1:0'
The manual primary/standby switchover procedure is described in section III.4 above.
The VIP still does not fail over automatically, but a manual primary/standby switchover works (with the configuration above, both nodes end up holding the VIP 192.168.159.153, which is odd).
At present, a manual switchover can make the VIP float over automatically, but only if pgpool is stopped as well: for example, when the primary node's PostgreSQL instance is stopped, the primary node's pgpool must also be stopped; after a few minutes the VIP 192.168.159.153 comes up on the standby node.
Note that after the switchover completes, verify it with the commands below, for example by connecting:
/data/pg/bin/psql -h 192.168.159.153 -p 9999 -U postgres -d postgres
show pool_nodes;
select client_addr,sync_state from pg_stat_replication;
Once this information is confirmed to be correct, try creating a test table to check that writes work:
create table test123 (tt int);
Note: the /data/pg/data/gprof directory contains a large number of binary files of unknown purpose that take up a lot of storage space. Advice from anyone who knows what they are would be welcome.
Errors encountered after the PostgreSQL master-slave + pgpool-II setup
1. PostgreSQL could not be logged into
When the master-slave setup was first completed, replication and logins all worked fine.
However, while later installing and configuring pgpool-II for high availability, PostgreSQL suddenly could not be logged into, with the following error:
[postgres@node1 ~]$ psql
psql: symbol lookup error: psql: undefined symbol: PQconnectdbParams
[postgres@node1 ~]$ /opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data -l logfile start
The logfile in the postgres home directory shows the error details:
2018-05-31 23:00:18.703 PDT [12734] FATAL: could not load library "/opt/postgresql-10.3/lib/libpqwalreceiver.so": /opt/postgresql-10.3/lib/libpqwalreceiver.so: undefined symbol: PQescapeIdentifier
2018-05-31 23:00:23.709 PDT [12736] FATAL: could not load library "/opt/postgresql-10.3/lib/libpqwalreceiver.so": /opt/postgresql-10.3/lib/libpqwalreceiver.so: undefined symbol: PQescapeIdentifier
2018-05-31 23:00:28.715 PDT [12737] FATAL: could not load library "/opt/postgresql-10.3/lib/libpqwalreceiver.so": /opt/postgresql-10.3/lib/libpqwalreceiver.so: undefined symbol: PQescapeIdentifier
2018-05-31 23:00:33.721 PDT [12738] FATAL: could not load library "/opt/postgresql-10.3/lib/libpqwalreceiver.so": /opt/postgresql-10.3/lib/libpqwalreceiver.so: undefined symbol: PQescapeIdentifier
2018-05-31 23:00:38.730 PDT [12739] FATAL: could not load library "/opt/postgresql-10.3/lib/libpqwalreceiver.so": /opt/postgresql-10.3/lib/libpqwalreceiver.so: undefined symbol: PQescapeIdentifier
As a temporary fix, run
export LD_LIBRARY_PATH=/opt/postgresql-10.3/lib:$LD_LIBRARY_PATH
so that the correct libpq is found, then restart PostgreSQL and logins work again.
For a permanent fix:
vi ~/.bash_profile
Add as the last line:
export LD_LIBRARY_PATH=/opt/postgresql-10.3/lib:$LD_LIBRARY_PATH
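A quick way to confirm which libpq the binaries actually resolve (a diagnostic sketch; undefined PQconnectdbParams/PQescapeIdentifier usually means an older system libpq, such as the 8.x one pulled in by postgresql-libs, is being found ahead of the 10.3 copy):
ldd /opt/postgresql-10.3/bin/psql | grep libpq
ldd /opt/postgresql-10.3/lib/libpqwalreceiver.so | grep libpq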
2. pgpool fails to start
Starting pgpool with pgpool -n & fails:
[root@node1 ~]# ps -ef|grep pgpool
root 3163 3081 0 19:57 pts/0 00:00:00 pgpool -n
root 3205 3163 0 19:57 pts/0 00:00:00 pgpool: health check process(0)
root 3206 3163 0 19:57 pts/0 00:00:02 pgpool: health check process(1)
root 4505 4455 0 20:37 pts/1 00:00:00 grep pgpool
ps shows leftover pgpool processes; kill them:
kill 3205
kill 3206
After that, pgpool starts successfully.
A successfully started pgpool looks like this:
[root@node1 ~]# ps -ef|grep pool
root 12828 2231 0 19:58 pts/0 00:00:00 pgpool -n
root 12829 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12830 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12831 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12832 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12833 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12834 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12835 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12836 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12837 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12838 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12839 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12840 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12841 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12842 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12843 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12844 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12845 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12846 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12847 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12848 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12849 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12850 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12851 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12852 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12853 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12854 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12855 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12856 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12857 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12858 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12859 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12860 12828 0 19:58 pts/0 00:00:00 pgpool: wait for connection request
root 12861 12828 0 19:58 pts/0 00:00:00 pgpool: PCP: wait for connection request
root 12862 12828 0 19:58 pts/0 00:00:00 pgpool: worker process
root 12863 12828 0 19:58 pts/0 00:00:00 pgpool: health check process(0)
root 12864 12828 0 19:58 pts/0 00:00:00 pgpool: health check process(1)
root 14061 14045 0 20:37 pts/1 00:00:00 grep pool
3. The PostgreSQL database fails to start
[postgres@node2 data]$ /opt/postgresql-10.3/bin/pg_ctl -D /data/pg/data/ -l logfile start
The error:
waiting for server to start.... stopped waiting
pg_ctl: could not start server
Examine the log output.
Check the log as the error message suggests:
tail logfile
2018-05-30 22:40:05.208 PDT [16383] LOG: consistent recovery state reached at 0/8000130
2018-05-30 22:40:05.208 PDT [16382] LOG: database system is ready to accept read only connections
2018-05-30 22:40:05.242 PDT [16387] LOG: started streaming WAL from primary at 0/C000000 on timeline 1
2018-05-30 23:19:59.272 PDT [16382] LOG: received smart shutdown request
2018-05-30 23:19:59.325 PDT [16387] FATAL: terminating walreceiver process due to administrator command
2018-05-30 23:19:59.332 PDT [16384] LOG: shutting down
2018-05-30 23:19:59.426 PDT [16382] LOG: database system is shut down
2018-06-03 23:59:31.974 PDT [15817] FATAL: could not write lock file "postmaster.pid": No space left on device
2018-06-04 00:00:32.287 PDT [15840] FATAL: could not write lock file "postmaster.pid": No space left on device
2018-06-04 00:01:54.556 PDT [15867] FATAL: could not write lock file "postmaster.pid": No space left on device
df -h confirms the disk is indeed full:
[postgres@node2 data]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 18G 17G 18M 100% /
tmpfs 242M 72K 242M 1% /dev/shm
/dev/sda1 291M 39M 238M 14% /boot
[postgres@node2 data]$
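To see what is consuming the space (a sketch; in this setup the archive directory written by archive_command, and pg_wal with wal_keep_segments = 10240 and 64MB segments, are likely candidates to grow very large):
du -sh /data/pg/data/* | sort -rh | head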
4. Standby log output after switchover
After the primary/standby switchover, the standby's logfile shows:
2018-07-01 21:08:41.889 PDT [2644] LOG: listening on IPv4 address "0.0.0.0", port 5432
2018-07-01 21:08:41.889 PDT [2644] LOG: listening on IPv6 address "::", port 5432
2018-07-01 21:08:41.893 PDT [2644] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-07-01 21:08:41.954 PDT [2645] LOG: database system was shut down at 2018-07-01 21:08:41 PDT
2018-07-01 21:08:42.008 PDT [2644] LOG: database system is ready to accept connections
A recovery.conf file must be added to the standby's data directory and configured as follows:
vi /data/pg/data/recovery.conf
-- to be continued
From the "ITPUB Blog". Link: http://blog.itpub.net/28371090/viewspace-2155459/. Please credit the source when reprinting; otherwise legal responsibility will be pursued.