VMware Server 1.0.6 + RAID + LVM (Part 1)
Combining LVM with RAID is a mainstream approach to storage management today. Drawing on some online posts and the official documentation, I first built a RAID1 + LVM stack and verified its basic functionality.
If you are not familiar with LVM, the articles linked below may help:
Overview: all of my tests were run on VMware Server 1.0.6, with CentOS 5.4 as the guest OS.
The build proceeds roughly as follows:
1. Create a RAID1 device with mdadm. If you are not familiar with mdadm, see these articles:
http://space.itpub.net/9240380/viewspace-630880
http://space.itpub.net/9240380/viewspace-630895
[root@localhost dev]# !223
mdadm --create /dev/md8 -l1 -n2 /dev/sdd1 /dev/sdb1
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Feb 14 10:36:43 2015
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Feb 14 10:36:43 2015
Continue creating array? yes
mdadm: array /dev/md8 started.
[root@localhost dev]# mdadm --detail --scan    ---scans the configuration of the RAID array just created
ARRAY /dev/md8 level=raid1 num-devices=2 metadata=0.90 UUID=8a87573f:f0802515:f9d540fc:c4782e7c
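Worth noting here: the array definition should be persisted, or /dev/md8 may not assemble automatically at boot, which would also break the fstab entry added in step 4. On CentOS 5 the usual step (as root) is to append the scan output to /etc/mdadm.conf. A small sketch, using the exact ARRAY line reported above, of how the UUID can be pulled out of that output:

```shell
# As root, the usual persistence step is:
#   mdadm --detail --scan >> /etc/mdadm.conf
# Sketch: extract the UUID field from the scan line shown above.
scan_line='ARRAY /dev/md8 level=raid1 num-devices=2 metadata=0.90 UUID=8a87573f:f0802515:f9d540fc:c4782e7c'
uuid=${scan_line##*UUID=}    # strip everything up to and including "UUID="
echo "$uuid"
```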
[root@localhost ~]# vgremove vg01    ---needed because a VG had been configured on this OS before; the old VG configuration must be removed first
/dev/cdrom: open failed: Read-only file system
Do you really want to remove volume group "vg01" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume lv01? [y/n]: y
Logical volume "lv01" successfully removed
Volume group "vg01" successfully removed
2. Use LVM to build the PV, VG, and LV
[root@localhost ~]# pvcreate /dev/md8    ---create a physical volume on the RAID device
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Physical volume "/dev/md8" successfully created
[root@localhost ~]# vgcreate vg1 /dev/md8
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Volume group "vg1" successfully created
[root@localhost ~]# lvcreate -L 2.5GB -n lv1 vg1
Logical volume "lv1" created
[root@localhost ~]# lvscan
ACTIVE '/dev/vg1/lv1' [2.50 GB] inherit
[root@localhost ~]# pvscan
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
PV /dev/md8 VG vg1 lvm2 [2.99 GB / 504.00 MB free]
Total: 1 [2.99 GB] / in use: 1 [2.99 GB] / in no VG: 0 [0 ]
[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while...
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Found volume group "vg1" using metadata type lvm2
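A quick sanity check of the numbers pvscan printed: LVM carves the PV into physical extents (4 MiB each by default; `vgdisplay` shows this as "PE Size"). Assuming the 2.99 GB PV holds 766 extents, the arithmetic works out exactly to the 504.00 MB free that pvscan reported:

```shell
# Assumption: the 2.99 GB PV holds 766 extents (766 * 4 MiB = 3064 MiB ~= 2.99 GB).
pe_size=4                      # MiB per physical extent (LVM default)
total_pe=766                   # assumed from the 2.99 GB PV size shown above
lv_pe=$((2560 / pe_size))      # 2.50 GiB = 2560 MiB -> 640 extents used by lv1
free_mib=$(( (total_pe - lv_pe) * pe_size ))
echo "$free_mib MiB free"      # matches the pvscan output
```

The leftover 504 MB could later be claimed with `lvextend -l +100%FREE /dev/vg1/lv1` followed by `resize2fs` (run as root).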
3. Run mke2fs to create the ext3 filesystem (the -j option adds the journal), then mount it on CentOS 5.4
[root@localhost ~]# mke2fs -j /dev/vg1/lv1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
327680 inodes, 655360 blocks
32768 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=671088640
20 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
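The mke2fs figures can be cross-checked from the LV size alone: a 2.50 GiB volume with 4 KiB blocks yields exactly the block count reported, and the default bytes-per-inode ratio worked out here to one inode per two blocks. A minimal sketch of the arithmetic:

```shell
# 2.5 GiB expressed in KiB (written as 5/2 GiB to stay in integer arithmetic)
kib=$(( 5 * 1024 * 1024 / 2 ))
blocks=$(( kib / 4 ))            # 4 KiB per block
echo "$blocks blocks"            # 655360, as reported by mke2fs
echo "$(( blocks / 2 )) inodes"  # 327680, as reported by mke2fs
```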
[root@localhost ~]# mkdir -pv /lun
[root@localhost ~]# mount /dev/vg1/lv1 /lun
[root@localhost ~]# df -hk    ---check the mount status of the LVM partition
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 17856888 6497720 10437444 39% /
tmpfs 216476 0 216476 0% /dev/shm
none 216388 104 216284 1% /var/lib/xenstored
/dev/hdc 3906842 3906842 0 100% /media/CentOS_5.4_Final
/dev/mapper/vg1-lv1 2580272 69448 2379752 3% /lun
4. To have the LVM partition mounted automatically on the next reboot, add it to /etc/fstab:
[root@localhost ~]# vi /etc/fstab
LABEL=/ / ext3 defaults 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-hda1 swap swap defaults 0 0
/dev/vg1/lv1 /lun ext3 defaults 2 2
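Before relying on the new fstab line at the next boot, it is worth checking it by hand (note that the fifth field is the dump flag, conventionally 0 or 1, and the sixth is the fsck pass number, where 2 is the usual value for non-root filesystems). A minimal sketch:

```shell
# As root, prove the new line parses by re-mounting everything in fstab:
#   umount /lun && mount -a && df -k /lun
# Quick sanity check that the entry added above has the six expected fields:
line='/dev/vg1/lv1 /lun ext3 defaults 2 2'
fields=$(echo "$line" | awk '{print NF}')
echo "$fields fields"
```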
From the ITPUB blog; original link: http://blog.itpub.net/9240380/viewspace-630922/. Please credit the source when reposting.