Ceph-0.9.0 Source Installation

Posted by Michael_DD on 2015-11-12


Environment:
RHEL 6.4, 64-bit

192.168.9.250  mds&monitor
192.168.9.237  osd 
192.168.9.238  osd

Download the source tarball from the official site.

Avoid downloading from GitHub, or the build will fail with an error discussed below.

Download locations for the missing packages:
1.
2.
3.
The first one works best.


Note: this walkthrough follows an installation guide found online; the missing packages are essentially the same, so just download the ones matching your OS.
The installation was tested with ceph-0.68.tar.gz and ceph-9.1.0.tar.gz; both build without problems, but the initialization step differs slightly.


1. Install the required packages
yum install libtool gcc-c++ libuuid-devel keyutils-libs-devel fuse-devel libedit-devel libatomic_ops-devel libaio-devel boost-devel expat-devel

2. Build the Ceph source
#tar -xvf ceph-9.0.0.tar.gz
#cd ceph-9.0.0
#./autogen.sh
#./configure --without-tcmalloc

If ./configure stops with one of the errors below, the corresponding dependency is missing; install it as shown:
***************************************************************************************************
checking whether -lpthread saves the day... yes
checking for uuid_parse in -luuid... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libuuid not found
See `config.log' for more details.
Install:
#yum install libuuid-devel


***************************************************************************************************
checking for __res_nquery in -lresolv... yes
checking for add_key in -lkeyutils... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libkeyutils not found
See `config.log' for more details.
Install:
#yum install keyutils-libs-devel


***************************************************************************************************
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... no
checking for NSS... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no suitable crypto library found
See `config.log' for more details.
Install (from the downloaded rpm packages):
#rpm -ivh cryptopp-5.6.2-2.el6.x86_64.rpm
#rpm -ivh cryptopp-devel-5.6.2-2.el6.x86_64.rpm


***************************************************************************************************
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... -lcryptopp
checking for NSS... no
configure: using cryptopp for cryptography
checking for FCGX_Init in -lfcgi... no
checking for fuse_main in -lfuse... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no FUSE found (use --without-fuse to disable)
See `config.log' for more details.
Install:
#yum install fuse-devel


***************************************************************************************************
checking for fuse_main in -lfuse... yes
checking for fuse_getgroups... no
checking jni.h usability... no
checking jni.h presence... no
checking for jni.h... no
checking for LIBEDIT... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: No usable version of libedit found.
See `config.log' for more details.
Install:
#yum install libedit-devel


***************************************************************************************************
checking for LIBEDIT... yes
checking atomic_ops.h usability... no
checking atomic_ops.h presence... no
checking for atomic_ops.h... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no libatomic-ops found (use --without-libatomic-ops to disable)
See `config.log' for more details.
Install:
#yum install libatomic_ops-devel 
(Alternatively, follow the hint and disable it with #./configure --without-tcmalloc --without-libatomic-ops)


***************************************************************************************************
checking for LIBEDIT... yes
checking for snappy_compress in -lsnappy... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libsnappy not found
See `config.log' for more details.
Install (from the downloaded rpm packages):
#rpm -ivh snappy-1.0.5-1.el6.x86_64.rpm
#rpm -ivh snappy-devel-1.0.5-1.el6.x86_64.rpm


***************************************************************************************************
checking for snappy_compress in -lsnappy... yes
checking for leveldb_open in -lleveldb... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libleveldb not found
See `config.log' for more details.
Install (from the downloaded rpm packages):
#rpm -ivh leveldb-1.7.0-2.el6.x86_64.rpm
#rpm -ivh leveldb-devel-1.7.0-2.el6.x86_64.rpm


***************************************************************************************************
checking for leveldb_open in -lleveldb... yes
checking for io_submit in -laio... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libaio not found
See `config.log' for more details.
Install:
#yum install libaio-devel


***************************************************************************************************
checking for sys/wait.h that is POSIX.1 compatible... yes
checking boost/spirit/include/classic_core.hpp usability... no
checking boost/spirit/include/classic_core.hpp presence... no
checking for boost/spirit/include/classic_core.hpp... no
checking boost/spirit.hpp usability... no
checking boost/spirit.hpp presence... no
checking for boost/spirit.hpp... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: "Can't find boost spirit headers"
See `config.log' for more details.
Install:
#yum install boost-devel


***************************************************************************************************
checking if more special flags are required for pthreads... no
checking whether to check for GCC pthread/shared inconsistencies... yes
checking whether -pthread is sufficient with -shared... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating scripts/gtest-config
config.status: creating build-aux/config.h
config.status: executing depfiles commands
config.status: executing libtool commands
Output like the above means #./configure --without-tcmalloc succeeded and a Makefile was generated; now start the actual build:


***************************************************************************************************
#make
If make fails with the following error, expat-devel is not installed:
CXX osdmaptool.o
CXXLD osdmaptool
CXX ceph_dencoder-ceph_dencoder.o
test/encoding/ceph_dencoder.cc: In function 'int main(int, const char**)':
test/encoding/ceph_dencoder.cc:196: note: variable tracking size limit exceeded with -fvar-tracking-assignments, retrying without
CXX ceph_dencoder-rgw_dencoder.o
In file included from rgw/rgw_dencoder.cc:6:
rgw/rgw_acl_s3.h:9:19: error: expat.h: No such file or directory
In file included from rgw/rgw_acl_s3.h:12,
from rgw/rgw_dencoder.cc:6:
rgw/rgw_xml.h:62: error: 'XML_Parser' does not name a type
make[3]: *** [ceph_dencoder-rgw_dencoder.o] Error 1
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make: *** [all-recursive] Error 1
Install:
#yum install expat-devel


***************************************************************************************************
/usr/bin/ld: cannot find -ledit
collect2: ld returned 1 exit status
make[3]: *** [libcephfs.la] Error 1
make[3]: Leaving directory `/root/ceph-9.0.0/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/ceph-9.0.0/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/ceph-9.0.0/src'
make: *** [all-recursive] Error 1


Resolving "/usr/bin/ld: cannot find -lxxx"
Problem:
When compiling applications or libraries from source on Linux, errors like the following often appear:
/usr/bin/ld: cannot find -lxxx
The exact message varies with the source being compiled, for example:
/usr/bin/ld: cannot find -lc
/usr/bin/ld: cannot find -lltdl
/usr/bin/ld: cannot find -lXtst
Here xxx is the library name; the examples above refer to libc.so, libltdl.so, and libXtst.so.
The naming convention is: lib + library name (xxx) + .so.
There are three common causes:
1. The corresponding library is not installed.
2. The installed library version is wrong.
3. The symbolic link to the library (.so) file is broken, i.e. it does not point at the actual library file.
Solution:
(1) First check whether the symbolic link for the library under /usr/lib is correct; if not, point it at the right target.
(2) If the symlink is fine, the library is simply missing; install it.
(3) To install a missing library, taking the three errors above as examples:
error 1 is missing libc, error 2 libltdl, error 3 libXtst.
Search for the matching package, then install it, e.g.:
apt-cache search libc-dev
apt-cache search libltdl-dev
apt-cache search libXtst-dev
Example:
While compiling the gcin input method from source, the following error appeared:
/usr/bin/ld: cannot find -lXtst
Inspection showed that the symbolic link for the .so file was wrong.
Fix:
cd /usr/lib
ln -s libXtst.so.6 libXtst.so
For the libedit error above, the equivalent fix was:
[root@tomcat ceph-9.0.0]# ln -s /usr/lib64/libedit.so.0.0.27 /usr/lib/libedit.so
If /usr/lib contains no libXtst.so file at all, the libXtst library is not installed; install it with:
apt-get install libxtst-dev
If the problem is the library search path instead, edit any .conf file under /etc/ld.so.conf.d
(or create your own, for clarity), add the directory containing the library, then run ldconfig to refresh the cache.
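The symlink repair described above can be sketched end to end. This is illustrative only: the scratch directory stands in for /usr/lib, and the library name libfoo is made up for the demo.

```shell
#!/bin/sh
# Illustrative only: reproduce the symlink fix in a scratch directory
# (standing in for /usr/lib) with a made-up library name, libfoo.
set -e
work=$(mktemp -d)
# The versioned library exists, but the unversioned name the linker
# resolves -lfoo to (libfoo.so) does not -- hence "cannot find -lfoo".
touch "$work/libfoo.so.6"
# The fix: create the unversioned symlink next to the versioned file.
ln -s libfoo.so.6 "$work/libfoo.so"
readlink "$work/libfoo.so"   # -> libfoo.so.6
# On a real system, follow up with ldconfig to refresh the loader cache.
rm -rf "$work"
```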




***************************************************************************************************
No rule to make target `erasure-code/jerasure/jerasure/src/cauchy.c'
Building directly from the Ceph src.rpm package succeeds without any errors.
Building the source fetched from GitHub, however, consistently fails with:
No rule to make target `erasure-code/jerasure/jerasure/src/cauchy.c'
Others online have hit the same problem, but no solution was posted.
A closer look reveals the cause:
the Ceph repository on GitHub contains several submodules, for example:
src/erasure-code/jerasure/gf-complete
src/erasure-code/jerasure/jerasure
src/libs3
src/rocksdb
A plain git clone does not fetch submodule code.
The build error above is exactly that: the build needs erasure-code/jerasure/jerasure/src/cauchy.c,
but since the submodules were never fetched, the file cannot be found.


Solution:
fetch the submodule code as well:
git submodule update --init --recursive
The src.rpm builds fine because the required submodule code is already packaged inside it.
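The difference between a plain clone and a submodule-initialized clone can be demonstrated with two throwaway local repositories. The names sub, super, and jerasure are invented for the demo, and `protocol.file.allow=always` is needed because newer git versions block file:// submodules by default.

```shell
#!/bin/sh
# Illustrative only: two throwaway local repos show why a plain clone
# fails to bring in submodule files.
set -e
work=$(mktemp -d)
cd "$work"
git init -q sub
(cd sub && git config user.email a@b.c && git config user.name a \
    && echo code > cauchy.c && git add . && git commit -qm sub)
git init -q super
cd super
git config user.email a@b.c
git config user.name a
# Newer git blocks file:// submodules by default; allow it for the demo.
git -c protocol.file.allow=always submodule add "$work/sub" jerasure
git commit -qm super
cd "$work"
git clone -q super clone
ls clone/jerasure                 # empty: the submodule was not fetched
cd clone
git -c protocol.file.allow=always submodule --quiet update --init --recursive
ls jerasure                       # now contains cauchy.c
cd /
rm -rf "$work"
```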


***************************************************************************************************
Fixing a libmongoclient build error
Building libmongoclient on CentOS with g++ 4.4.6 and boost 1.41 fails:
In file included from /usr/include/boost/thread/future.hpp:12,
                 from /usr/include/boost/thread.hpp:24,
                 from src/mongo/util/file_allocator.cpp:22:
/usr/include/boost/exception_ptr.hpp:43: error: looser throw specifier for 'virtual boost::exception_ptr::~exception_ptr()'
/usr/include/boost/exception/detail/exception_ptr_base.hpp:26: error:   overriding 'virtual boost::exception_detail::exception_ptr_base::~exception_ptr_base() throw ()'
scons: *** [build/mongo/util/file_allocator.o] Error 1
scons: building terminated because of errors.
Edit /usr/include/boost/exception_ptr.hpp as follows to fix it,
adding one line: ~exception_ptr() throw() { }
include/boost/exception_ptr.hpp
OLD NEW
 90  90            {
 91  91            }
 92  92
     93        ~exception_ptr() throw() { }
     94
 93  95        operator unspecified_bool_type() const
 94  96            {
 95  97            return _empty() ? 0 : &exception_ptr::bad_alloc_;




***************************************************************************************************
CXXLD ceph-dencoder
CXXLD cephfs
CXXLD librados-config
CXXLD ceph-fuse
CCLD rbd-fuse
CCLD mount.ceph
CXXLD rbd
CXXLD rados
CXXLD ceph-syn
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making all in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
Output like the above means the build succeeded; now install Ceph:




***************************************************************************************************
#make install
libtool: install: ranlib /usr/local/lib/rados-classes/libcls_kvs.a
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/local/lib/rados-classes
----------------------------------------------------------------------
Libraries have been installed in:
/usr/local/lib/rados-classes


If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'


See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
test -z "/usr/local/lib/ceph" || /bin/mkdir -p "/usr/local/lib/ceph"
/usr/bin/install -c ceph_common.sh '/usr/local/lib/ceph'
make[4]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making install in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[2]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[2]: Nothing to be done for `install-exec-am'.
test -z "/usr/local/share/man/man8" || /bin/mkdir -p "/usr/local/share/man/man8"
/usr/bin/install -c -m 644 ceph-osd.8 ceph-mds.8 ceph-mon.8 mkcephfs.8 ceph-fuse.8 ceph-syn.8 crushtool.8 osdmaptool.8 monmaptool.8 ceph-conf.8 ceph-run.8 ceph.8 mount.ceph.8 radosgw.8 radosgw-admin.8 ceph-authtool.8 rados.8 librados-config.8 rbd.8 ceph-clsinfo.8 ceph-debugpack.8 cephfs.8 ceph-dencoder.8 ceph-rbdnamer.8 rbd-fuse.8 '/usr/local/share/man/man8'
make[2]: Leaving directory`/cwn/ceph/ceph-0.60/man'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
At this point, Ceph has been compiled and installed successfully.




***************************************************************************************************
3. Configure Ceph
Every node except the client needs a ceph.conf configuration file, and it must be identical on all of them. The file normally lives under /etc/ceph;
if prefix was not changed at ./configure time, it will be under /usr/local/etc/ceph.
#cp ./src/sample.* /usr/local/etc/ceph/
#mv /usr/local/etc/ceph/sample.ceph.conf /usr/local/etc/ceph/ceph.conf
#mv /usr/local/etc/ceph/sample.fetch_config /usr/local/etc/ceph/fetch_config
#cp ./src/init-ceph /etc/init.d/ceph
#mkdir /var/log/ceph //holds the logs; Ceph does not create this directory itself yet
Notes:
① On each server, the files to edit are the two under /usr/local/etc/ceph/: ceph.conf (the cluster configuration file) and
fetch_config (a sync script that pushes ceph.conf to all nodes via scp; it did not work well for me, so I later wrote my own script).
② For OSD nodes, besides loading the btrfs kernel module, install btrfs-progs (#yum install btrfs-progs) to get the mkfs.btrfs command.
Each OSD data node also needs a partition or logical volume for Ceph: either a disk partition (e.g. /dev/sda2) or a logical volume (e.g. /dev/mapper/VolGroup-lv_ceph),
matching whatever is written in ceph.conf. See elsewhere for how to create partitions or logical volumes.

Add Ceph's bin directory to the profile file.
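A minimal sketch of that step, assuming the default --prefix=/usr/local. A scratch file stands in for /etc/profile so the demo is side-effect free.

```shell
#!/bin/sh
# Illustrative sketch: add Ceph's bin/sbin directories (assuming the
# default --prefix=/usr/local) to a profile file. A scratch file stands
# in for /etc/profile here.
profile=$(mktemp)
echo 'export PATH=/usr/local/bin:/usr/local/sbin:$PATH' >> "$profile"
# Source the profile, as a new login shell would, and check the result.
. "$profile"
echo "$PATH" | grep -q '/usr/local/sbin' && echo "PATH updated"   # prints "PATH updated"
rm -f "$profile"
```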

[root@ceph_mds ceph]# cat /usr/local/etc/ceph/ceph.conf
;
; Sample ceph ceph.conf file.
;
; This file defines cluster membership, the various locations
; that Ceph stores data, and any other runtime options.

; If a 'host' is defined for a daemon, the init.d start/stop script will
; verify that it matches the hostname (or else ignore it). If it is
; not defined, it is assumed that the daemon is intended to start on
; the current host (e.g., in a setup with a startup.conf on each
; node).

; The variables $type, $id and $name are available to use in paths
; $type = The type of daemon, possible values: mon, mds and osd
; $id = The ID of the daemon, for mon.alpha, $id will be alpha
; $name = $type.$id

; For example:
; osd.0
; $type = osd
; $id = 0
; $name = osd.0

; mon.beta
; $type = mon
; $id = beta
; $name = mon.beta

; global
[global]
; enable secure authentication
; auth supported = cephx

; allow ourselves to open a lot of files
max open files = 131072

; set log file
log file = /var/log/ceph/$name.log
; log_to_syslog = true ; uncomment this line to log to syslog

; set up pid files
pid file = /var/run/ceph/$name.pid
; If you want to run a IPv6 cluster, set this to true. Dual-stack isn't possible
;ms bind ipv6 = true
; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = /data/mon$id
; If you are using for example the RADOS Gateway and want to have your newly created
; pools a higher replication level, you can set a default
;osd pool default size = 3
; You can also specify a CRUSH rule for new pools
; Wiki:
;osd pool default crush rule = 0

; Timing is critical for monitors, but if you want to allow the clocks to drift a
; bit more, you can specify the max drift.
;mon clock drift allowed = 1

; Tell the monitor to backoff from this warning for 30 seconds
;mon clock drift warn backoff = 30

; logging, for debugging monitor crashes, in order of
; their likelihood of being helpful :)
debug ms = 1
;debug mon = 20
;debug paxos = 20
;debug auth = 20

[mon.0]
host = ceph
mon addr = 192.168.9.245:6789


; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps it's secret encryption keys
keyring = /data/keyring.$name
; mds logging to debug issues.
;debug ms = 1
;debug mds = 20

[mds.0]
host = ceph
; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.

[osd]
sudo = true
; This is where the osd expects its data
osd data = /data/osd$id

; Ideally, make the journal a separate disk or partition.
; 1-10GB should be enough; more if you have fast or many
; disks. You can use a file under the osd data dir if need be
; (e.g. /data/$name/journal), but it will be slower than a
; separate disk or partition.
; This is an example of a file-based journal.
osd journal = /data/$name/journal
osd journal size = 1000 ; journal size, in megabytes

; If you want to run the journal on a tmpfs (don't), disable DirectIO
;journal dio = false

; You can change the number of recovery operations to speed up recovery
; or slow it down if your machines can't handle it
; osd recovery max active = 3

; osd logging to debug osd issues, in order of likelihood of being
; helpful
;debug ms = 1
;debug osd = 20
;debug filestore = 20
;debug journal = 20

; ### The below options only apply if you're using mkcephfs
; ### and the devs options
; The filesystem used on the volumes
osd mkfs type = btrfs
; If you want to specify some other mount options, you can do so.
; for other filesystems use 'osd mount options $fstype'
osd mount options btrfs = rw,noatime
; The options used to format the filesystem via mkfs.$fstype
; for other filesystems use 'osd mkfs options $fstype'
; osd mkfs options btrfs =

[osd.0]
host = ceph1
btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.1]
host = ceph2
btrfs devs = /dev/mapper/VolGroup-lv_ceph



***************************************************************************************************
4. Configure the network
① Set each node's hostname so that the nodes can reach each other by hostname.
Reference:
edit /etc/sysconfig/network to set the node's own hostname;
edit /etc/hosts to map the other nodes' hostnames to their IPs;
reboot, then verify with the hostname command.
② Enable password-less ssh between all nodes.
This relies on public-key authentication: to log into another node, hand it your public key first, so it can verify your identity.
Example, run on node A:
#ssh-keygen -d
This generates several files under "~/.ssh"; the one needed is id_dsa.pub, node A's public key. Append its contents to the authorized_keys file
under "~/.ssh/" on node B (create the file if it does not exist); node A can then ssh into B without a password.
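The exchange above can be sketched against scratch directories so it does not touch a real ~/.ssh. Node names and paths here are made up; the article's `ssh-keygen -d` creates a DSA key, but modern OpenSSH releases reject DSA, so this sketch uses RSA.

```shell
#!/bin/sh
# Illustrative sketch of the public-key exchange, using scratch
# directories instead of real ~/.ssh paths.
set -e
a=$(mktemp -d)   # stands in for node A's ~/.ssh
b=$(mktemp -d)   # stands in for node B's ~/.ssh
# On node A: generate a key pair with an empty passphrase.
ssh-keygen -q -t rsa -N "" -f "$a/id_rsa"
# "Deliver" A's public key to node B (normally via scp or ssh-copy-id)
# by appending it to B's authorized_keys, with the permissions sshd wants.
cat "$a/id_rsa.pub" >> "$b/authorized_keys"
chmod 600 "$b/authorized_keys"
wc -l < "$b/authorized_keys"   # number of installed keys
rm -rf "$a" "$b"
```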


***************************************************************************************************
[root@ceph ~]# ceph -s
Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 94, in <module>
    import rados
ImportError: No module named rados




[root@ceph pybind]# cp /root/ceph-9.1.0/src/pybind/* /usr/local/bin/


[root@ceph bin]# ceph -s
Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 919, in <module>
    retval = main()
  File "/usr/local/bin/ceph", line 665, in main
    conffile=conffile)
  File "/usr/local/bin/rados.py", line 259, in validate_func
    return f(*args, **kwargs)
  File "/usr/local/bin/rados.py", line 284, in __init__
    self.librados = CDLL(library_path if library_path is not None else 'librados.so.2')
  File "/usr/lib64/python2.7/ctypes/__init__.py", line 360, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: librados.so.2: cannot open shared object file: No such file or directory


Missing shared library (.so): "cannot open shared object file: No such file or directory"
There are three common fixes:
1. Symlink the needed .so files into one of the default directories, /usr/lib or /lib:
ln -s /where/you/install/lib/*.so /usr/lib
sudo ldconfig
2. Extend LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=/where/you/install/lib:$LD_LIBRARY_PATH
3. Add the directory to /etc/ld.so.conf, then refresh the cache:
vim /etc/ld.so.conf
add /where/you/install/lib
sudo ldconfig


[root@ceph bin]# whereis librados.so.2
librados.so: /usr/local/lib/librados.so.2 /usr/local/lib/librados.so
[root@ceph bin]# ln -s /usr/local/lib/librados.so.2 /usr/lib


***************************************************************************************************
5. Create the filesystem and start the cluster. Run the following commands on the monitor node!
#mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
The following problems came up:
(1) scp: /etc/ceph/ceph.conf: No such file or directory
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
[/usr/local/etc/ceph/fetch_config /tmp/fetched.ceph.conf.2693]
The authenticity of host 'ceph_mds (127.0.0.1)' can't be established.
RSA key fingerprint is a7:c8:b8:2e:86:ea:89:ff:11:93:e9:29:68:b5:7c:11.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph_mds' (RSA) to the list of known hosts.
ceph.conf 100% 4436 4.3KB/s 00:00 
temp dir is /tmp/mkcephfs.tIHQnX8vkw
preparing monmap in /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool: generated fsid f998ee83-9eba-4de2-94e3-14f235ef840c
epoch 0
fsid f998ee83-9eba-4de2-94e3-14f235ef840c
last_changed 2013-05-31 08:22:52.972189
created 2013-05-31 08:22:52.972189
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.tIHQnX8vkw/monmap (1 monitors)
=== osd.0 === 
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.0b3c65941572123eb704d9d614411fc1
scp: /etc/ceph/ceph.conf: No such file or directory




***************************************************************************************************
In Ceph 9.1.0 nodes are no longer initialized this way; see the official documentation for the details.
The installation procedure differs before and after version 0.80.5.





***************************************************************************************************
Solution: write a script that syncs the configuration file to both /etc/ceph and /usr/local/etc/ceph on every node (create the /etc/ceph directory manually first):
[root@ceph_mds ceph]# cat cp_ceph_conf.sh
cp /usr/local/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd2:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd2:/etc/ceph/ceph.conf
(2)
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
temp dir is /tmp/mkcephfs.hz1EcPJjtu
preparing monmap in /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool: generated fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
epoch 0
fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
last_changed 2013-05-31 08:39:48.198656
created 2013-05-31 08:39:48.198656
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.hz1EcPJjtu/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.2e991ed41f1cdca1149725615a96d0be
umount: /data/osd0: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 12:39:04.073438 7f02cd9ac760 -1 filestore(/data/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 12:39:04.362010 7f02cd9ac760 -1 created object store /data/osd0 journal /data/osd.0/journal for osd.0 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 12:39:04.362074 7f02cd9ac760 -1 auth: error reading file: /data/osd0/keyring: can't open /data/osd0/keyring: (2) No such file or directory
2013-05-31 12:39:04.362280 7f02cd9ac760 -1 created new key in keyring /data/osd0/keyring
collecting osd.0 key
=== osd.1 === 
pushing conf and monmap to ceph_osd1:/tmp/mkfs.ceph.9a9f67ff6e7516b415d30f0a89bfe0dd
umount: /data/osd1: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 08:39:13.237718 7ff0a2fe4760 -1 filestore(/data/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 08:39:13.524175 7ff0a2fe4760 -1 created object store /data/osd1 journal /data/osd.1/journal for osd.1 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 08:39:13.524241 7ff0a2fe4760 -1 auth: error reading file: /data/osd1/keyring: can't open /data/osd1/keyring: (2) No such file or directory
2013-05-31 08:39:13.524430 7ff0a2fe4760 -1 created new key in keyring /data/osd1/keyring
collecting osd.1 key
=== osd.2 ===
pushing conf and monmap to ceph_osd2:/tmp/mkfs.ceph.51a8af4b24b311fcc2d47eed2cd714ca
umount: /data/osd2: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 09:01:49.371853 7ff422eb1760 -1 filestore(/data/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 09:01:49.583061 7ff422eb1760 -1 created object store /data/osd2 journal /data/osd.2/journal for osd.2 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 09:01:49.583123 7ff422eb1760 -1 auth: error reading file: /data/osd2/keyring: can't open /data/osd2/keyring: (2) No such file or directory
2013-05-31 09:01:49.583312 7ff422eb1760 -1 created new key in keyring /data/osd2/keyring
collecting osd.2 key
=== mds.alpha === 
creating private key for mds.alpha keyring /data/keyring.mds.alpha
creating /data/keyring.mds.alpha
bufferlist::write_file(/data/keyring.mds.alpha): failed to open file: (2) No such file or directory
could not write /data/keyring.mds.alpha
can't open /data/keyring.mds.alpha: can't open /data/keyring.mds.alpha: (2) No such file or directory
failed: '/usr/local/sbin/mkcephfs -d /tmp/mkcephfs.hz1EcPJjtu --init-daemon mds.alpha'
Fix: create the file manually:
#mkdir /data
#touch /data/keyring.mds.alpha
[Created successfully]
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
temp dir is /tmp/mkcephfs.v9vb0zOmJ5
preparing monmap in /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool: generated fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
epoch 0
fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
last_changed 2013-05-31 08:50:21.797571
created 2013-05-31 08:50:21.797571
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.v9vb0zOmJ5/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.8912ed2e34cfd2477c2549354c03faa3
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 12:49:36.548329 7f67d293e760 -1 journal check: ondisk fsid 919417f1-0a79-4463-903c-3fc9df8ca0f8 doesn't match expected 3b3d2772-4981-46fd-bbcd-b11957c77d47, invalid (someone else's?) journal
2013-05-31 12:49:36.953666 7f67d293e760 -1 filestore(/data/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 12:49:37.244334 7f67d293e760 -1 created object store /data/osd0 journal /data/osd.0/journal for osd.0 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 12:49:37.244397 7f67d293e760 -1 auth: error reading file: /data/osd0/keyring: can't open /data/osd0/keyring: (2) No such file or directory
2013-05-31 12:49:37.244580 7f67d293e760 -1 created new key in keyring /data/osd0/keyring
collecting osd.0 key
=== osd.1 === 
pushing conf and monmap to ceph_osd1:/tmp/mkfs.ceph.69d388555243635efea3c5976d001b64
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 08:49:45.012858 7f82a3d52760 -1 journal check: ondisk fsid 28f23b77-6f77-47b3-b946-7eda652d4488 doesn't match expected 65a75a4f-b639-4eab-91d6-00c985118862, invalid (someone else's?) journal
2013-05-31 08:49:45.407962 7f82a3d52760 -1 filestore(/data/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 08:49:45.696990 7f82a3d52760 -1 created object store /data/osd1 journal /data/osd.1/journal for osd.1 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 08:49:45.697052 7f82a3d52760 -1 auth: error reading file: /data/osd1/keyring: can't open /data/osd1/keyring: (2) No such file or directory
2013-05-31 08:49:45.697238 7f82a3d52760 -1 created new key in keyring /data/osd1/keyring
collecting osd.1 key
=== osd.2 ===
pushing conf and monmap to ceph_osd2:/tmp/mkfs.ceph.686b9d63c840a05a6eed5b5781f10b27
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 09:12:20.708733 7fa54ae8f760 -1 journal check: ondisk fsid dc21285e-3bde-4f53-9424-d059540ab920 doesn't match expected cae83f10-d633-48d1-b324-a64849eca974, invalid (someone else's?) journal
2013-05-31 09:12:21.057154 7fa54ae8f760 -1 filestore(/data/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 09:12:21.253689 7fa54ae8f760 -1 created object store /data/osd2 journal /data/osd.2/journal for osd.2 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 09:12:21.253749 7fa54ae8f760 -1 auth: error reading file: /data/osd2/keyring: can't open /data/osd2/keyring: (2) No such file or directory
2013-05-31 09:12:21.253931 7fa54ae8f760 -1 created new key in keyring /data/osd2/keyring
collecting osd.2 key
=== mds.alpha === 
creating private key for mds.alpha keyring /data/keyring.mds.alpha
creating /data/keyring.mds.alpha
Building generic osdmap from /tmp/mkcephfs.v9vb0zOmJ5/conf
/usr/local/bin/osdmaptool: osdmap file '/tmp/mkcephfs.v9vb0zOmJ5/osdmap'
/usr/local/bin/osdmaptool: writing epoch 1 to /tmp/mkcephfs.v9vb0zOmJ5/osdmap
Generating admin key at /tmp/mkcephfs.v9vb0zOmJ5/keyring.admin
creating /tmp/mkcephfs.v9vb0zOmJ5/keyring.admin
Building initial monitor keyring
added entity mds.alpha auth auth(auid = 18446744073709551615 key=AQCXnKhRiL/QHhAA091/MQGD25V54smKBz959w== with 0 caps)
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQDhK6hROEuRDhAA9uCsjB++Szh8sJy3CUgeoA== with 0 caps)
added entity osd.1 auth auth(auid = 18446744073709551615 key=AQBpnKhR0EKMKRAAzNWvZgkDWrPSuZaSttBdsw== with 0 caps)
added entity osd.2 auth auth(auid = 18446744073709551615 key=AQC1oahReP4fDxAAR0R0HTNfVbs6VMybLIU9qg== with 0 caps)
=== mon.0 === 
/usr/local/bin/ceph-mon: created monfs at /data/mon0 for mon.0
placing client.admin keyring in /etc/ceph/keyring






***************************************************************************************************
[Start]
#/etc/init.d/ceph -a start  //stop the firewall first if necessary (#service iptables stop)
[root@ceph_mds ceph]# /etc/init.d/ceph -a start
=== mon.0===
Starting Ceph mon.0 on ceph_mds...
starting mon.0 rank 0 at 222.31.76.178:6789/0 mon_data /data/mon0 fsid652b09fb-bbbf-424c-bd49-8218d75465ba
=== mds.alpha===
Starting Ceph mds.alpha on ceph_mds...
starting mds.alpha at :/0
=== osd.0===
Mounting Btrfs on ceph_osd0:/data/osd0
Scanning for Btrfs filesystems
Starting Ceph osd.0 on ceph_osd0...
starting osd.0 at :/0 osd_data/data/osd0/data/osd.0/journal
=== osd.1===
Mounting Btrfs on ceph_osd1:/data/osd1
Scanning for Btrfs filesystems
Starting Ceph osd.1 on ceph_osd1...
starting osd.1 at :/0 osd_data /data/osd1 /data/osd.1/journal
=== osd.2===
Mounting Btrfs on ceph_osd2:/data/osd2
Scanning for Btrfs filesystems
Starting Ceph osd.2 on ceph_osd2...
starting osd.2 at :/0 osd_data /data/osd2 /data/osd.2/journal
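Once the init script returns, a quick sanity check on each node confirms the daemons actually stayed up. This is a sketch; it simply greps the process table for the standard ceph-mon/ceph-osd/ceph-mds daemon names:

```shell
# List any running Ceph daemons; an empty result means the daemon
# exited again after the init script reported it as started
ps -e -o comm= | grep -E '^ceph-(mon|osd|mds)'
```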


***************************************************************************************************
【Check the Ceph cluster status】
[root@ceph_mds ceph]# ceph -s
health HEALTH_OK
monmap e1: 1 mons at {0=222.31.76.178:6789/0}, election epoch 2, quorum 0 0
osdmap e7: 3 osds: 3 up, 3 in
pgmap v432: 768 pgs: 768 active+clean; 9518 bytes data, 16876 KB used, 293 GB / 300 GB avail
mdsmap e4: 1/1/1 up {0=alpha=up:active}
[root@ceph_mds ceph]# ceph df
GLOBAL:
    SIZE    AVAIL    RAW USED    %RAW USED
    300M    293M     16876       0


POOLS:
    NAME        ID    USED    %USED    OBJECTS
    data        0     0       0        0
    metadata    1     9518    0        21
    rbd         2     0       0        0
Question: the space statistics look wrong, don't they?! "ceph -s" reports 300 GB, while "ceph df" reports 300M.


***************************************************************************************************
6. Mount on the client
#mkdir /mnt/ceph
#mount -t ceph ceph_mds:/ /mnt/ceph
The following errors were encountered:
(1)
[root@localhost ~]# mount -t ceph ceph_mds:/ /mnt/ceph/
mount: wrong fs type, bad option, bad superblock on ceph_mds:/,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
Check #dmesg:
ceph: Unknown symbol ceph_con_keepalive (err 0)
ceph: Unknown symbol ceph_create_client (err 0)
ceph: Unknown symbol ceph_calc_pg_primary (err 0)
ceph: Unknown symbol ceph_osdc_release_request (err 0)
ceph: Unknown symbol ceph_con_open (err 0)
ceph: Unknown symbol ceph_flags_to_mode (err 0)
ceph: Unknown symbol ceph_msg_last_put (err 0)
ceph: Unknown symbol ceph_caps_for_mode (err 0)
ceph: Unknown symbol ceph_copy_page_vector_to_user (err 0)
ceph: Unknown symbol ceph_msg_new (err 0)
ceph: Unknown symbol ceph_msg_type_name (err 0)
ceph: Unknown symbol ceph_pagelist_truncate (err 0)
ceph: Unknown symbol ceph_release_page_vector (err 0)
ceph: Unknown symbol ceph_check_fsid (err 0)
ceph: Unknown symbol ceph_pagelist_reserve (err 0)
ceph: Unknown symbol ceph_pagelist_append (err 0)
ceph: Unknown symbol ceph_calc_object_layout (err 0)
ceph: Unknown symbol ceph_get_direct_page_vector (err 0)
ceph: Unknown symbol ceph_osdc_wait_request (err 0)
ceph: Unknown symbol ceph_osdc_new_request (err 0)
ceph: Unknown symbol ceph_pagelist_set_cursor (err 0)
ceph: Unknown symbol ceph_calc_file_object_mapping (err 0)
ceph: Unknown symbol ceph_monc_got_mdsmap (err 0)
ceph: Unknown symbol ceph_osdc_readpages (err 0)
ceph: Unknown symbol ceph_con_send (err 0)
ceph: Unknown symbol ceph_zero_page_vector_range (err 0)
ceph: Unknown symbol ceph_osdc_start_request (err 0)
ceph: Unknown symbol ceph_compare_options (err 0)
ceph: Unknown symbol ceph_msg_dump (err 0)
ceph: Unknown symbol ceph_buffer_new (err 0)
ceph: Unknown symbol ceph_put_page_vector (err 0)
ceph: Unknown symbol ceph_pagelist_release (err 0)
ceph: Unknown symbol ceph_osdc_sync (err 0)
ceph: Unknown symbol ceph_destroy_client (err 0)
ceph: Unknown symbol ceph_copy_user_to_page_vector (err 0)
ceph: Unknown symbol __ceph_open_session (err 0)
ceph: Unknown symbol ceph_alloc_page_vector (err 0)
ceph: Unknown symbol ceph_monc_do_statfs (err 0)
ceph: Unknown symbol ceph_monc_validate_auth (err 0)
ceph: Unknown symbol ceph_osdc_writepages (err 0)
ceph: Unknown symbol ceph_parse_options (err 0)
ceph: Unknown symbol ceph_str_hash (err 0)
ceph: Unknown symbol ceph_pr_addr (err 0)
ceph: Unknown symbol ceph_buffer_release (err 0)
ceph: Unknown symbol ceph_con_init (err 0)
ceph: Unknown symbol ceph_destroy_options (err 0)
ceph: Unknown symbol ceph_con_close (err 0)
ceph: Unknown symbol ceph_msgr_flush (err 0)
Key type ceph registered
libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
I found that the client's mount command had no ceph type at all (no "mount.ceph"), while every other node we configured did have mount.ceph, so I compiled the latest ceph-0.60 on the client as well.
(2) After building and installing ceph-0.60, mount still failed with the same error; check dmesg:
#dmesg | tail
Key type ceph unregistered
Key type ceph registered
libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
libceph: no secret set (for auth_x protocol)
libceph: error -22 on auth protocol 2 init
libceph: client4102 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba


The root cause finally turned out to be that mount also needs a user name and secret key; the exact mount command is:
#mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
[root@localhost ~]# mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
parsing options: name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
The name and secret values in the command above come from the monitor's /etc/ceph/keyring file:
[root@ceph_mds ceph]# cat /etc/ceph/keyring
[client.admin]
key = AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
Notes:
1. To mount the Ceph file system you may use the mount command if you know the monitor host IP address(es), or use the mount.ceph utility to resolve the monitor host name(s) into IP address(es) for you.
2. mount options:
    -v, --verbose: Verbose mode.
    -o, --options opts: Options are specified with a -o flag followed by a comma-separated string of options.
3. mount.ceph reference:
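To avoid typing the key on the command line every time (where it also ends up in shell history and `ps` output), mount.ceph accepts a `secretfile=` option in place of `secret=`. A sketch, assuming the monitor's /etc/ceph/keyring shown above, with /etc/ceph/admin.secret as an illustrative path for the extracted key:

```shell
# Pull the bare base64 key out of the keyring into a root-only file
awk '/^[[:space:]]*key = / {print $3}' /etc/ceph/keyring > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret

# Mount referencing the secret file instead of an inline secret
mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secretfile=/etc/ceph/admin.secret
```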


***************************************************************************************************
Check the mount on the client:
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       50G   13G   35G  27% /
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/sda1             477M   48M  405M  11% /boot
/dev/mapper/VolGroup-lv_home
                      405G   71M  385G   1% /home
222.31.76.178:/       300G  6.1G  294G   3% /mnt/ceph


P.S. Posts online say that if you don't want to type the key every time, you can add the following section to ceph.conf (and remember to sync it to the other nodes), but in my tests it still had no effect, so for now I mount with the method above; if anyone knows where I went wrong, please point it out.
[mount /] 
     allow = %everyone
【Solution】
After reading the official documentation, I found that to really disable authentication at mount time you need to add the following under [global] in the config file:
For 0.51 and later:
auth cluster required = none
auth service required = none
auth client required = none
For 0.50 and earlier:
auth supported = none
官方注:If your cluster environment is relatively safe, you can offset the computation expense of running authentication. We do not recommend it. However, it may be easier during setup and/or troubleshooting to temporarily disable authentication.
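Putting that together, the relevant fragment of ceph.conf would look like the sketch below for a 0.51+ cluster (remember to copy the updated file to every node and restart the daemons before it takes effect):

```
[global]
    auth cluster required = none
    auth service required = none
    auth client required = none
```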

***************************************************************************************************

At this point the Ceph installation and configuration is complete, and the Ceph distributed file system can be used under /mnt/ceph on the client.
I will also be running functional verification tests on Ceph soon; stay tuned for the test report!

If you use an el6 system, compiling 9.1.0 runs into many library and toolchain problems; avoid el6 if you can.






From the "ITPUB blog". Link: http://blog.itpub.net/29500582/viewspace-1831377/. Please credit the source when reposting; otherwise legal liability may be pursued.
