GP Complete Installation Manual
Based on the official GP installation guide plus notes from my own installations, this reorganizes the configuration steps into a more convenient order than the official manual, to keep you from taking wrong turns during installation, especially when configuring many machines at once.
------------------------------------------------------------------------------------
Considerations for the PC servers' built-in storage
------------------------------------------------------------------------------------
Recommended PC server configuration:
2*4 CPU
8 gigabit NICs (the master nodes need two additional NICs to provide external-facing services)
24 x 1TB SAS disks for the database data files; the recommended RAID layout is 4 RAID10 groups,
6 disks per group, with a 128K stripe size and no hot-spare disks
2 x 300GB disks for installing the operating system
------------------------------------------------------------------------------------
Filesystem layout requirements
------------------------------------------------------------------------------------
MASTER NODE
/ 48G EXT3
swap 48G SWAP
/data 200G XFS
SEGMENT NODE
/ 48G EXT3
swap 48G SWAP
/data1 6T XFS
/data2 6T XFS
##############################################################################################
1) Change the I/O scheduler
Because this edits /boot/grub/menu.lst, a small mistake can leave the system unable to boot.
Moreover, /boot/grub/menu.lst often differs slightly even between identical installs, so do not simply push the Master's edited copy to the other hosts.
Configure it individually on each machine: Master, Standby Master, Segment hosts, and ETL hosts.
The hostname usually needs to be reset as well, and that too must be done on each machine individually.
##############################################################################################
------------------------------------------------------------------------------------
1.1) Change the I/O scheduler
------------------------------------------------------------------------------------
vi /boot/grub/menu.lst
root=/dev/VolGroup00/LogVol00 rhgb noapic quiet elevator=deadline
After the next reboot, use this command to verify that the correct scheduler is in use:
cat /sys/block/*/queue/scheduler
Each line of output should show deadline in square brackets, i.e. [deadline], indicating that it is the active scheduler:
noop anticipatory [deadline] cfq
The four I/O schedulers in the 2.6 kernel
In the 2.6 kernel series, there are four interchangeable schedulers, as follows:
cfq- “Completely Fair Queuing” makes a good default for most workloads on general-purpose servers.
as - “Anticipatory Scheduler” is best for workstations and other systems with slow, single-spindle storage.
deadline - “Deadline” is a relatively simple scheduler which tries to minimize I/O latency by re-ordering requests to improve performance.
noop- “NOOP” is the most simple scheduler of all, and is really just a single FIFO queue.
With newer Linux kernels (Red Hat Enterprise Linux v3 Update 3 does not have this feature. It is present in the main Linux tree as of 2.6.15), one can change the scheduler while running.
The I/O scheduler can also be changed at runtime, for example:
echo deadline > /sys/block/sdb/queue/scheduler
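If you want the runtime change applied to every disk rather than a single device, a small loop works. This is a sketch of my own (not from the original manual), assuming the disks all show up as /dev/sd* as in the examples above; it also touches the OS disks, which matches what the global grub setting does anyway:
for q in /sys/block/sd*/queue/scheduler; do
    echo deadline > "$q"    # set the deadline scheduler for this device
done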
------------------------------------------------------------------------------------
1.2) Set the hostname
------------------------------------------------------------------------------------
hostname mdw
vi /etc/sysconfig/network
HOSTNAME=mdw
#vi /etc/hosts
This step can be skipped for now; the complete hosts configuration comes later (section 2.7).
##############################################################################################
2)
The following configuration is performed only on the Master.
Anything that also needs to be configured on the segments is called out explicitly.
##############################################################################################
------------------------------------------------------------------------------------
2.1) Set the global profile
------------------------------------------------------------------------------------
vi /etc/profile
# Set user prompt by xigua
if [ $LOGNAME = "root" ]; then
PS1=`hostname`':$PWD#'
else
PS1=`hostname`':$PWD$'
umask 022
fi
set -o vi
# End set
------------------------------------------------------------------------------------
2.2) Set the default runlevel to 3
------------------------------------------------------------------------------------
vi /etc/inittab
id:3:initdefault:
------------------------------------------------------------------------------------
2.3) Configure kernel parameters and user limits
------------------------------------------------------------------------------------
vi /etc/sysctl.conf
# Set for GreenPlum
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 1
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.sem = 250 64000 100 512
kernel.shmmni = 4096
kernel.shmmax = 500000000
kernel.shmall = 4000000000
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_max_syn_backlog=4096
net.core.netdev_max_backlog=10000
vm.overcommit_memory=2
net.ipv4.conf.all.arp_filter = 1
# End set of GreenPlum
Run the following command to make the settings above take effect:
sysctl -p
vi /etc/security/limits.conf
#Set for GreenPlum
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
#End set of GreenPlum
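An optional sanity check, not in the original manual: confirm that a few key values actually took effect. The limits.conf settings only apply to new login sessions (and "*" does not cover root on every setup), so check them from a fresh non-root login, e.g. as gpadmin once that user exists (section 3.4):
sysctl kernel.shmmax kernel.sem vm.overcommit_memory
su - gpadmin -c 'ulimit -n; ulimit -u'    # expect 65536 and 131072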
------------------------------------------------------------------------------------
2.4) Configure the time synchronization service (NTP)
------------------------------------------------------------------------------------
vi /etc/ntp.conf
server 132.224.200.3 prefer
server mdw
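Once ntpd has been started (section 3.2), a quick optional check that the host is actually synchronizing is ntpq; this is my addition, not part of the original manual:
ntpq -p    # the peer currently selected for sync is marked with a leading '*'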
------------------------------------------------------------------------------------
2.5) Set the read-ahead value in rc.local
------------------------------------------------------------------------------------
The changes to this file set the disk read-ahead buffers larger than the default.
Each disk device file should have a read-ahead value of 16384.
vi /etc/rc.local
#Change the disk read-ahead buffers to larger than normal.
#Change for Dell
blockdev --setra 16384 /dev/sd*
#Change for HP
blockdev --setra 16384 /dev/cciss/c?d?*
After the next reboot, issue this command to verify the setting:
blockdev --getra /dev/sd*
------------------------------------------------------------------------------------
2.6) Disable SELinux
------------------------------------------------------------------------------------
vi /etc/selinux/config
SELINUX=disabled
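SELinux is only fully disabled after the next reboot; an optional check afterwards (my addition):
getenforce    # should print "Disabled" after the reboot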
------------------------------------------------------------------------------------
2.7) Configure /etc/hosts
------------------------------------------------------------------------------------
vi /etc/hosts
#Master node
132.224.36.207 mdw-ext
192.168.1.250 mdw-1 mdw
192.168.2.250 mdw-2
192.168.3.250 mdw-3
192.168.4.250 mdw-4
#Standby master node
132.224.36.208 smdw-ext
192.168.1.251 smdw-1 smdw
192.168.2.251 smdw-2
192.168.3.251 smdw-3
192.168.4.251 smdw-4
#Segment node1
192.168.1.1 sdw1-1 sdw1
192.168.2.1 sdw1-2
192.168.3.1 sdw1-3
192.168.4.1 sdw1-4
#Segment node2
192.168.1.2 sdw2-1 sdw2
192.168.2.2 sdw2-2
192.168.3.2 sdw2-3
192.168.4.2 sdw2-4
#Segment node3
192.168.1.3 sdw3-1 sdw3
192.168.2.3 sdw3-2
192.168.3.3 sdw3-3
192.168.4.3 sdw3-4
#Segment node4
192.168.1.4 sdw4-1 sdw4
192.168.2.4 sdw4-2
192.168.3.4 sdw4-3
192.168.4.4 sdw4-4
#Segment node5
192.168.1.5 sdw5-1 sdw5
192.168.2.5 sdw5-2
192.168.3.5 sdw5-3
192.168.4.5 sdw5-4
#Segment node6
192.168.1.6 sdw6-1 sdw6
192.168.2.6 sdw6-2
192.168.3.6 sdw6-3
192.168.4.6 sdw6-4
#ETL node1
132.224.36.203 etl1-ext1
132.224.36.204 etl1-ext2
132.224.36.203 etl1-1 etl1
132.224.36.204 etl1-2
#ETL node2
132.224.36.205 etl2-ext1
132.224.36.206 etl2-ext2
132.224.36.205 etl2-1 etl2
132.224.36.206 etl2-2
To make the later GP utility commands easier to use, create a host file that contains all nodes:
cat > /root/allhost
mdw-ext
mdw
mdw-1
mdw-2
mdw-3
mdw-4
smdw-ext
smdw
smdw-1
smdw-2
smdw-3
smdw-4
sdw1
sdw1-1
sdw1-2
sdw1-3
sdw1-4
sdw2
sdw2-1
sdw2-2
sdw2-3
sdw2-4
sdw3
sdw3-1
sdw3-2
sdw3-3
sdw3-4
sdw4
sdw4-1
sdw4-2
sdw4-3
sdw4-4
sdw5
sdw5-1
sdw5-2
sdw5-3
sdw5-4
sdw6
sdw6-1
sdw6-2
sdw6-3
sdw6-4
etl1-ext1
etl1-ext2
etl1
etl1-1
etl1-2
etl2-ext1
etl2-ext2
etl2
etl2-1
etl2-2
Create a host file that excludes the ETL nodes:
grep -v etl allhost > /root/allgphost
Create a host file with all Segment host interfaces:
grep sdw[0-9]-[0-9] allhost > /root/allgpseg
Create a host file with all ETL hosts:
grep etl[0-9] allhost |grep -v - > /root/alletl
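A quick sanity check of the generated lists (an optional step, not from the original manual): the counts should match the layout above, e.g. allgpseg should list 24 segment interfaces (6 hosts x 4 NICs) and alletl just the two ETL hostnames:
wc -l /root/allhost /root/allgphost /root/allgpseg /root/alletl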
------------------------------------------------------------------------------------
2.8) Set up SSH key trust from the master node to all segment nodes, for the root user
Key trust greatly simplifies the rest of the configuration; when configuring
hundreds of machines it saves an enormous amount of work
------------------------------------------------------------------------------------
Method 1:
mkdir ~/.ssh
ssh-keygen -t rsa
cd ~/.ssh
cp id_rsa.pub authorized_keys
Distribute the public key to all segment nodes:
for iter in `cat /etc/hosts | grep sdw[0-9]-[0-9.] | awk '{print $2}'`; do
ssh root@${iter} " mkdir ~/.ssh ; chmod 700 ~/.ssh "
scp authorized_keys root@${iter}:~/.ssh
done
Distribute the public key to the standby master node:
for iter in `cat /etc/hosts | grep smdw | awk '{print $2}'`; do
ssh root@${iter} " mkdir ~/.ssh ; chmod 700 ~/.ssh "
scp authorized_keys root@${iter}:~/.ssh
done
Method 2 (recommended):
A simpler way to set up key trust is to use the utility shipped with GP, so first upload and install the GP software on the Master node
mkdir /gpsoft
# upload greenplum-db-4.2.0.0-build-5-RHEL5-x86_64.zip into /gpsoft (e.g. via ftp: put greenplum-db-4.2.0.0-build-5-RHEL5-x86_64.zip)
unzip greenplum-db-4.2.0.0-build-5-RHEL5-x86_64.zip
sh greenplum-db-4.2.0.0-build-5-RHEL5-x86_64.bin
Once the GP software is installed, the gpssh-exkeys utility that ships with it makes setting up key trust simple.
source /usr/local/greenplum-db/greenplum_path.sh
gpssh-exkeys -f /root/allhost
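Whichever method you used, verify that passwordless SSH actually works before continuing; gpssh should return a hostname from every node without prompting for a password (an optional check of my own):
gpssh -f /root/allhost -e 'hostname'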
##############################################################################################
3)
With key trust in place, the remaining configuration work becomes much easier.
The configuration above was done on the Master; it is now pushed, using the key trust and GP's distribution commands, to
every server: Master, Standby Master, Segment hosts, and ETL hosts.
All remaining configuration and installation is likewise driven from the Master with GP's distribution commands.
This is stated here so it is clear which node each operation runs on.
##############################################################################################
------------------------------------------------------------------------------------
3.1) First, distribute the Master's settings above to all hosts: Master, Standby Master, Segment hosts, ETL hosts
------------------------------------------------------------------------------------
Distribute the files with GP's trust-based utilities; note that each file is backed up before being overwritten:
source /usr/local/greenplum-db/greenplum_path.sh
cd /root
gpssh -f /root/allhost -v -e '/bin/cp -f /etc/profile /etc/profile.bak'
gpscp -f allhost /etc/profile =:/etc/profile
gpssh -f /root/allhost -v -e '/bin/cp -f /etc/inittab /etc/inittab.bak'
gpscp -f allhost /etc/inittab =:/etc/inittab
gpssh -f /root/allhost -v -e '/bin/cp -f /etc/sysctl.conf /etc/sysctl.conf.bak'
gpscp -f allhost /etc/sysctl.conf =:/etc/sysctl.conf
gpssh -f /root/allhost -v -e '/bin/cp -f /etc/security/limits.conf /etc/security/limits.conf.bak'
gpscp -f allhost /etc/security/limits.conf =:/etc/security/limits.conf
gpssh -f /root/allhost -v -e '/bin/cp -f /etc/hosts /etc/hosts.bak'
gpscp -f allhost /etc/hosts =:/etc/hosts
gpssh -f /root/allhost -v -e '/bin/cp -f /etc/ntp.conf /etc/ntp.conf.bak'
gpscp -f allhost /etc/ntp.conf =:/etc/ntp.conf
gpssh -f /root/allhost -v -e '/bin/cp -f /etc/rc.local /etc/rc.local.bak'
gpscp -f allhost /etc/rc.local =:/etc/rc.local
gpssh -f /root/allhost -v -e '/bin/cp -f /etc/selinux/config /etc/selinux/config.bak'
gpscp -f allhost /etc/selinux/config =:/etc/selinux/config
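To confirm that every host really received identical copies, compare checksums across the cluster; any host that prints a different md5 did not get the file (an optional check, not in the original manual). Note that the distributed sysctl.conf and limits.conf only take effect after sysctl -p or a re-login on each host; the reboot in section 3.7 covers this.
gpssh -f /root/allhost -v -e 'md5sum /etc/sysctl.conf /etc/hosts /etc/security/limits.conf'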
------------------------------------------------------------------------------------
3.2) Configure services, on Master, Standby Master, Segment hosts, and ETL hosts
------------------------------------------------------------------------------------
gpssh -f /root/allhost -v -e 'service sendmail stop'
gpssh -f /root/allhost -v -e 'chkconfig sendmail off'
gpssh -f /root/allhost -v -e 'service ipmi start'
gpssh -f /root/allhost -v -e 'chkconfig ipmi on'
gpssh -f /root/allhost -v -e 'service iptables stop'
gpssh -f /root/allhost -v -e 'chkconfig iptables off'
gpssh -f /root/allhost -v -e 'service ip6tables stop'
gpssh -f /root/allhost -v -e 'chkconfig ip6tables off'
gpssh -f /root/allhost -v -e 'chkconfig avahi-daemon off'
gpssh -f /root/allhost -v -e 'chkconfig avahi-dnsconfd off'
gpssh -f /root/allhost -v -e 'chkconfig conman off'
gpssh -f /root/allhost -v -e 'chkconfig bluetooth off'
gpssh -f /root/allhost -v -e 'chkconfig cpuspeed off'
gpssh -f /root/allhost -v -e 'chkconfig setroubleshoot off'
gpssh -f /root/allhost -v -e 'chkconfig hidd off'
gpssh -f /root/allhost -v -e 'chkconfig hplip off'
gpssh -f /root/allhost -v -e 'chkconfig isdn off'
gpssh -f /root/allhost -v -e 'chkconfig kudzu off'
gpssh -f /root/allhost -v -e 'chkconfig yum-updatesd off'
gpssh -f /root/allhost -v -e 'chkconfig ntpd on'
gpssh -f /root/allhost -v -e 'service ntpd start'
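Optionally confirm that ntpd is running and synchronizing on every host (my own check, not in the original):
gpssh -f /root/allhost -v -e 'service ntpd status'
gpssh -f /root/allhost -v -e 'ntpq -p'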
------------------------------------------------------------------------------------
3.3) Create the XFS filesystems, covering Master, Standby Master, Segment hosts, and ETL hosts
------------------------------------------------------------------------------------
Upload the installation packages and then distribute them:
kmod-xfs-0.4-2.x86_64.rpm
xfsprogs-2.9.4-4.el5.x86_64.rpm
gpscp -f /root/allhost kmod-xfs-0.4-2.x86_64.rpm =:/root
gpscp -f /root/allhost xfsprogs-2.9.4-4.el5.x86_64.rpm =:/root
Install the RPM packages:
gpssh -f allhost -ev 'rpm -ivh kmod-xfs-0.4-2.x86_64.rpm xfsprogs-2.9.4-4.el5.x86_64.rpm'
Manually load the xfs module:
gpssh -f allhost -ev 'modprobe xfs'
Check the result:
gpssh -f allhost -ev 'lsmod | grep xfs'
Check which disks need to be formatted. In my environment every machine has the same layout, and two disks need formatting: /dev/sdb and /dev/sdd.
gpssh -f allhost -ev 'fdisk -l |grep Disk'
Format the disks:
gpssh -h etl1 -h etl2 -ev 'mkfs.xfs -f /dev/sdb'
gpssh -h etl1 -h etl2 -ev 'mkfs.xfs -f /dev/sdd'
gpssh -f /root/allgpseg -ev 'mkfs.xfs -f /dev/sdb'
gpssh -f /root/allgpseg -ev 'mkfs.xfs -f /dev/sdd'
Create the filesystem mount points:
gpssh -h etl1 -h etl2 -ev 'mkdir -p /data1'
gpssh -h etl1 -h etl2 -ev 'mkdir -p /data2'
gpssh -f /root/allgpseg -ev 'mkdir -p /data1'
gpssh -f /root/allgpseg -ev 'mkdir -p /data2'
gpssh -f /root/allgpseg -ev 'echo "/dev/sdb /data1 xfs noatime,inode64,allocsize=16m 1 1" >> /etc/fstab '
gpssh -f /root/allgpseg -ev 'echo "/dev/sdd /data2 xfs noatime,inode64,allocsize=16m 1 1" >> /etc/fstab '
gpssh -h etl1 -h etl2 -ev 'echo "/dev/sdb /data1 xfs noatime,inode64,allocsize=16m 1 1" >> /etc/fstab '
gpssh -h etl1 -h etl2 -ev 'echo "/dev/sdd /data2 xfs noatime,inode64,allocsize=16m 1 1" >> /etc/fstab '
gpssh -h etl1 -h etl2 -ev 'mount /data1'
gpssh -h etl1 -h etl2 -ev 'mount /data2'
gpssh -f /root/allgpseg -ev 'mount /data1'
gpssh -f /root/allgpseg -ev 'mount /data2'
gpssh -f /root/allhost -ev 'df -h'
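Besides df -h, it is worth checking that the XFS mount options from fstab are actually in effect (an optional check I add here, not in the original manual):
gpssh -f /root/allgpseg -ev 'mount | grep xfs'
gpssh -h etl1 -h etl2 -ev 'mount | grep xfs'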
------------------------------------------------------------------------------------
3.4) Create the groups and users from the master, on Master, Standby Master, Segment hosts, and ETL hosts
------------------------------------------------------------------------------------
gpssh -f /root/allhost -ev 'groupadd -g 3030 gpadmin'
gpssh -f /root/allhost -ev 'groupadd -g 3040 gpmon'
gpssh -f /root/allhost -ev 'useradd -u 3030 -g gpadmin -d /home/gpadmin -s /bin/bash -m gpadmin'
gpssh -f /root/allhost -ev 'useradd -u 3040 -g gpmon -d /home/gpmon -s /bin/bash -m gpmon'
Set the passwords:
gpssh -f /root/allhost -ev 'echo "gpadmin" > /tmp/tmp.txt && passwd gpadmin --stdin < /tmp/tmp.txt && rm -f /tmp/tmp.txt'
gpssh -f /root/allhost -ev 'echo "gpmon" > /tmp/tmp.txt && passwd gpadmin --stdin < /tmp/tmp.txt && rm -f /tmp/tmp.txt'
------------------------------------------------------------------------------------
3.5) Install the GP software, on Master, Standby Master, Segment hosts, and ETL hosts
------------------------------------------------------------------------------------
First install the GP software on the Master; upload the installer greenplum-db-4.2.0.0-build-5-RHEL5-x86_64.zip
mdw#cd /tmp
mdw#unzip greenplum-db-4.2.0.0-build-5-RHEL5-x86_64.zip
mdw#sh greenplum-db-4.2.0.0-build-5-RHEL5-x86_64.bin
mdw#source /usr/local/greenplum-db/greenplum_path.sh
(If you used method 2 in step 2.8, the Master installation is already done.)
Next distribute and install the GP software on the segment nodes; as root you can use the gpseginstall command directly.
First copy the allhost-style files into the gpadmin home directory:
cp /root/all* /home/gpadmin
chown gpadmin:gpadmin /home/gpadmin/all*
chmod 644 /home/gpadmin/all*
There are two ways to distribute the software. Method 1 (recommended):
gpscp -f allgphost -r /usr/local/greenplum-db-4.2.0.0 =:/usr/local/greenplum-db-4.2.0.0
gpssh -f allgphost -ev 'cd /usr/local; rm -rf greenplum-db; ln -fs greenplum-db-4.2.0.0 greenplum-db'
gpssh -f allgphost -ev 'chown -R gpadmin:gpadmin /usr/local/greenplum-db*'
gpscp -h etl1 -h etl2 -r /usr/local/greenplum-db-4.2.0.0 =:/usr/local/greenplum-db-4.2.0.0
gpssh -h etl1 -h etl2 -ev 'cd /usr/local; rm -rf greenplum-db; ln -fs greenplum-db-4.2.0.0 greenplum-db'
gpssh -h etl1 -h etl2 -ev 'chown -R gpadmin:gpadmin /usr/local/greenplum-db*'
Method 2:
gpseginstall -f allgphost -u gpadmin -p gpadmin
Check the installation result on all segment nodes:
gpssh -f allgphost -e "ls -l $GPHOME"
------------------------------------------------------------------------------------
3.6) Configure gpadmin's environment variables, on Master, Standby Master, Segment hosts, and ETL hosts
------------------------------------------------------------------------------------
Create the /gpmaster directory on the Master and Standby Master:
mkdir /gpmaster ; chown -R gpadmin:gpadmin /gpmaster
Switch to the gpadmin user:
su - gpadmin
vi /home/gpadmin/.bash_profile
. /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/gpmaster/gpseg-1
export PGPORT=5432
export PGUSER=gpadmin
export PGDATABASE=gpyxpt
Distribute it to all the other nodes:
su - gpadmin
gpscp -f allhost .bash_profile =:/home/gpadmin
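If this gpscp prompts for passwords, it is because key trust so far was only exchanged for root (section 2.8) and gpadmin needs its own exchange. A sketch, assuming the host files were copied to /home/gpadmin as in section 3.5:
su - gpadmin
source /usr/local/greenplum-db/greenplum_path.sh
gpssh-exkeys -f /home/gpadmin/allhost    # exchanges keys for the gpadmin user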
------------------------------------------------------------------------------------
3.7) Configure the data file directories
------------------------------------------------------------------------------------
su - root
gpssh -f /home/gpadmin/allgpseg -e 'chown -R gpadmin:gpadmin /data1'
gpssh -f /home/gpadmin/allgpseg -e 'chown -R gpadmin:gpadmin /data2'
su - gpadmin
gpssh -f /home/gpadmin/allgpseg -ev 'mkdir -p /data1/primary'
gpssh -f /home/gpadmin/allgpseg -ev 'mkdir -p /data2/primary'
gpssh -f /home/gpadmin/allgpseg -ev 'mkdir -p /data1/mirror'
gpssh -f /home/gpadmin/allgpseg -ev 'mkdir -p /data2/mirror'
Reboot all hosts one final time:
gpssh -f /home/gpadmin/allhost -ev 'init 6'
------------------------------------------------------------------------------------
3.8) Create the GP cluster configuration file
------------------------------------------------------------------------------------
su - gpadmin
mkdir gpconfigs
cd gpconfigs
cat > gpinitsystem_config
ARRAY_NAME="EMC Greenplum DW"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/data1/primary /data1/primary /data2/primary /data2/primary)
MASTER_HOSTNAME=mdw
MASTER_DIRECTORY=/gpmaster
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MIRROR_PORT_BASE=50000
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror /data1/mirror /data2/mirror /data2/mirror)
MASTER_MAX_CONNECT=25
DATABASE_NAME=gpyxpt
MACHINE_LIST_FILE=/home/gpadmin/allgpseg
------------------------------------------------------------------------------------
3.9) Initialize the GP instance
------------------------------------------------------------------------------------
Method 1: create the instance together with the Standby Master
gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -s smdw -S
Method 2: create the Master first, then add the Standby Master separately
gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config
gpinitstandby -s smdw
Check the state after the database is created (48 addresses = 24 primary + 24 mirror segments):
mdw:/home/gpadmin$gpstate -s | grep Address |wc -l
48
$psql
psql (8.2.15)
Type "help" for help.
gpyxpt=# select * from gp_segment_configuration ;
dbid | content | role | preferred_role | mode | status | port | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
1 | -1 | p | p | s | u | 5432 | mdw | mdw | |
2 | 0 | p | p | s | u | 40000 | sdw1 | sdw1-1 | 41000 |
6 | 4 | p | p | s | u | 40000 | sdw2 | sdw2-1 | 41000 |
10 | 8 | p | p | s | u | 40000 | sdw3 | sdw3-1 | 41000 |
14 | 12 | p | p | s | u | 40000 | sdw4 | sdw4-1 | 41000 |
18 | 16 | p | p | s | u | 40000 | sdw5 | sdw5-1 | 41000 |
22 | 20 | p | p | s | u | 40000 | sdw6 | sdw6-1 | 41000 |
3 | 1 | p | p | s | u | 40001 | sdw1 | sdw1-2 | 41001 |
7 | 5 | p | p | s | u | 40001 | sdw2 | sdw2-2 | 41001 |
11 | 9 | p | p | s | u | 40001 | sdw3 | sdw3-2 | 41001 |
15 | 13 | p | p | s | u | 40001 | sdw4 | sdw4-2 | 41001 |
19 | 17 | p | p | s | u | 40001 | sdw5 | sdw5-2 | 41001 |
23 | 21 | p | p | s | u | 40001 | sdw6 | sdw6-2 | 41001 |
4 | 2 | p | p | s | u | 40002 | sdw1 | sdw1-3 | 41002 |
8 | 6 | p | p | s | u | 40002 | sdw2 | sdw2-3 | 41002 |
12 | 10 | p | p | s | u | 40002 | sdw3 | sdw3-3 | 41002 |
16 | 14 | p | p | s | u | 40002 | sdw4 | sdw4-3 | 41002 |
20 | 18 | p | p | s | u | 40002 | sdw5 | sdw5-3 | 41002 |
24 | 22 | p | p | s | u | 40002 | sdw6 | sdw6-3 | 41002 |
5 | 3 | p | p | s | u | 40003 | sdw1 | sdw1-4 | 41003 |
9 | 7 | p | p | s | u | 40003 | sdw2 | sdw2-4 | 41003 |
13 | 11 | p | p | s | u | 40003 | sdw3 | sdw3-4 | 41003 |
17 | 15 | p | p | s | u | 40003 | sdw4 | sdw4-4 | 41003 |
21 | 19 | p | p | s | u | 40003 | sdw5 | sdw5-4 | 41003 |
25 | 23 | p | p | s | u | 40003 | sdw6 | sdw6-4 | 41003 |
46 | 20 | m | m | s | u | 50000 | sdw1 | sdw1-2 | 51000 |
26 | 0 | m | m | s | u | 50000 | sdw2 | sdw2-2 | 51000 |
30 | 4 | m | m | s | u | 50000 | sdw3 | sdw3-2 | 51000 |
34 | 8 | m | m | s | u | 50000 | sdw4 | sdw4-2 | 51000 |
38 | 12 | m | m | s | u | 50000 | sdw5 | sdw5-2 | 51000 |
42 | 16 | m | m | s | u | 50000 | sdw6 | sdw6-2 | 51000 |
43 | 17 | m | m | s | u | 50001 | sdw1 | sdw1-3 | 51001 |
47 | 21 | m | m | s | u | 50001 | sdw2 | sdw2-3 | 51001 |
27 | 1 | m | m | s | u | 50001 | sdw3 | sdw3-3 | 51001 |
31 | 5 | m | m | s | u | 50001 | sdw4 | sdw4-3 | 51001 |
35 | 9 | m | m | s | u | 50001 | sdw5 | sdw5-3 | 51001 |
39 | 13 | m | m | s | u | 50001 | sdw6 | sdw6-3 | 51001 |
40 | 14 | m | m | s | u | 50002 | sdw1 | sdw1-4 | 51002 |
44 | 18 | m | m | s | u | 50002 | sdw2 | sdw2-4 | 51002 |
48 | 22 | m | m | s | u | 50002 | sdw3 | sdw3-4 | 51002 |
28 | 2 | m | m | s | u | 50002 | sdw4 | sdw4-4 | 51002 |
32 | 6 | m | m | s | u | 50002 | sdw5 | sdw5-4 | 51002 |
36 | 10 | m | m | s | u | 50002 | sdw6 | sdw6-4 | 51002 |
37 | 11 | m | m | s | u | 50003 | sdw1 | sdw1-1 | 51003 |
41 | 15 | m | m | s | u | 50003 | sdw2 | sdw2-1 | 51003 |
45 | 19 | m | m | s | u | 50003 | sdw3 | sdw3-1 | 51003 |
49 | 23 | m | m | s | u | 50003 | sdw4 | sdw4-1 | 51003 |
29 | 3 | m | m | s | u | 50003 | sdw5 | sdw5-1 | 51003 |
33 | 7 | m | m | s | u | 50003 | sdw6 | sdw6-1 | 51003 |
50 | -1 | m | m | s | u | 5432 | smdw | smdw | |
(50 rows)