NFS setup and common problems

Posted by 餘二五 on 2017-11-21

http://www.cnblogs.com/hackerer/p/5221556.html

A fairly good reference document.


NFS server setup:

Server IP: 10.135.152.241

yum install rpcbind nfs-utils

# cat /etc/exports

/nfs_data 10.104.71.154(rw,no_root_squash,no_all_squash,sync)

#/nfs_data 172.16.1.0/24(rw,sync,all_squash)

# mkdir /nfs_data

# chown -R nfsnobody.nfsnobody /nfs_data

Note: the nfsnobody user is created automatically when the NFS packages are installed.

# /etc/init.d/rpcbind start

# /etc/init.d/nfs start

# /etc/init.d/nfs status/reload/stop/restart
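If /etc/exports is edited later, the export table can also be refreshed without restarting the whole service; a quick sketch using exportfs:

# exportfs -ra    ## re-read /etc/exports and refresh the kernel export table, no service restart needed

# exportfs -v     ## list the current exports with their effective options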

Add the startup commands to /etc/rc.local:

/etc/init.d/rpcbind start

/etc/init.d/nfs start
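On RHEL/CentOS 6 the same can be achieved with chkconfig instead of /etc/rc.local; a sketch, assuming the standard SysV service names:

# chkconfig rpcbind on    ## enable rpcbind in the default runlevels

# chkconfig nfs on        ## enable nfs, which starts after rpcbind

# chkconfig --list nfs    ## verify the runlevel settings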


Problems you may run into when starting NFS:

# /etc/init.d/nfs start

Starting NFS services:                                     [  OK  ]

Starting NFS quotas: Cannot register service: RPC: Unable to receive; errno = Connection refused

rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp).

                                                           [FAILED]

Starting NFS mountd:                                       [FAILED]

Starting NFS daemon: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)

rpc.nfsd: unable to set any sockets for nfsd

                                                           [FAILED]

Cause:

On Red Hat-family systems, starting with version 6.0 the portmap service no longer controls RPC startup. Because NFS and nfslock must register with the RPC service when they start, they fail with the errors above if that service is not running.

Solution: start rpcbind and rpcidmapd first. rpcbind is the default RPC service from 6.0 onwards, so it has to be started before nfs. If rpcidmapd is not started, user and group ID mapping goes wrong and file ownership shows up as raw numeric IDs instead of names.

# /etc/init.d/rpcbind start

Starting rpcbind:                                          [  OK  ]

# /etc/init.d/rpcidmapd start

Starting RPC idmapd:                                       [  OK  ]

# /etc/init.d/nfs start

Starting NFS services:                                     [  OK  ]

Starting NFS quotas:                                       [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]


# cat /var/lib/nfs/etab

/data001/data/sites/imgdsp.100msh.com 10.104.71.154(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,anonuid=65534,anongid=65534,sec=sys,rw,no_root_squash,no_all_squash)

/data001/data/sites/imgdsp.100msh.com 10.104.35.202(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,anonuid=65534,anongid=65534,sec=sys,rw,no_root_squash,no_all_squash)

10.135.152.241 is the NFS server address.

# showmount -e 10.135.152.241    <== before mounting, first check which exports you are allowed to mount

Export list for 10.135.152.241:

/data 10.135.152.241/24    <== the shared /data directory is visible

Mount test on the local machine:

# mount -t nfs 10.135.152.241:/data /mnt    ## mount the shared /data directory onto the local /mnt directory

# df -h

Filesystem          Size  Used Avail Use% Mounted on

/dev/sda3           7.1G  1.5G  5.3G  22% /

tmpfs               279M     0  279M   0% /dev/shm

/dev/sda1           190M   36M  145M  20% /boot

10.135.152.241:/data  7.1G  1.5G  5.3G  22% /mnt




Mounting NFS on the client:

# yum -y install nfs-utils rpcbind  

# /etc/init.d/rpcbind start

Starting rpcbind:                                          [  OK  ]

# mkdir -p /nfs_data    ## create the mount point on the client first

# mount -t nfs 10.135.152.241:/nfs_data /nfs_data

Problems you may run into:

mount: wrong fs type, bad option, bad superblock on 10.135.152.241:/data/img,

       missing codepage or helper program, or other error

       (for several filesystems (e.g. nfs, cifs) you might

       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog – try

       dmesg | tail  or so

       

Cause:

[root@web data]# ll /sbin/mount*

Listing the /sbin/mount.<type> helpers shows that /sbin/mount.nfs is indeed missing; installing nfs-utils provides it.

Fix:

yum install nfs-utils


After the mount succeeds, test reading and writing in the shared directory.
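A minimal read/write check, assuming the share is mounted at /nfs_data as in the mount command above (the test file name is just an example):

# echo "nfs write test" > /nfs_data/nfs_test.txt    ## write a file through the NFS mount

# cat /nfs_data/nfs_test.txt                        ## read it back

# rm -f /nfs_data/nfs_test.txt                      ## clean up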



Mount options for performance tuning

(1) Mount without updating file and directory access timestamps

mount -t nfs -o noatime,nodiratime 10.135.152.241:/data /mnt

(2) Mount with security plus performance options

mount -t nfs -o nosuid,noexec,nodev,noatime,nodiratime,intr,rsize=131072,wsize=131072 10.135.152.241:/nfs_data /mnt

(3) Default mount

mount -t nfs 10.135.152.241:/nfs_data /mnt
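To confirm which options actually took effect on the client (rsize/wsize are negotiated with the server), a quick check; a sketch:

# mount | grep nfs         ## mounted NFS filesystems and the options passed at mount time

# grep nfs /proc/mounts    ## the kernel's view, including the negotiated rsize/wsize

# nfsstat -m               ## per-mount NFS options, from nfs-utils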

NFS kernel tuning

Edit /etc/sysctl.conf:

net.core.wmem_default = 8388608

net.core.rmem_default = 8388608

net.core.rmem_max = 16777216

net.core.wmem_max = 16777216

# sysctl -p
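To confirm the new values are active after sysctl -p, a sketch:

# sysctl net.core.rmem_max net.core.wmem_max    ## print the current values of the tuned keys

# sysctl -a | grep mem_default                  ## check the default buffer sizes as well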



Other issues:

[Linux] On a Red Hat 6 system, rpc.svcgssd still shows as stopped after NFS has started

    [root@mytest Packages]# cat /etc/redhat-release

    Red Hat Enterprise Linux Server release 6.7 (Santiago)

    [root@mytest Packages]# service rpcbind status

    rpcbind (pid 4744) is running…

    [root@mytest Packages]# service nfs status

    rpc.svcgssd is stopped    <== this is the issue in question

    rpc.mountd (pid 5733) is running…

    nfsd (pid 5749 5748 5747 5746 5745 5744 5743 5742) is running…

    rpc.rquotad (pid 5728) is running…

Explanation: this service is only useful, and only started, when NFS is configured to export shares with Kerberos authentication.

The original note reads:

This is expected behaviour. The rpc.svcgssd and rpc.gssd daemons only need to be enabled if NFS is configured to export shares via Kerberos authentication.

By default, NFS is not configured to use Kerberos for its shares:

NFS service by default is not configured to export shares via Kerberos

The Kerberos protocol (this explanation is reproduced from http://www.jb51.net/article/94875.htm):

Kerberos is mainly used for identity authentication in computer networks. Its distinguishing feature is that a user enters credentials only once; the ticket-granting ticket obtained from that single authentication can then be used to access multiple services, i.e. SSO (Single Sign On). Because a shared secret key is established between each client and each service, the protocol is quite secure.
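For reference only, since this setup does not use Kerberos: an export secured with Kerberos would look roughly like the sketch below. The sec=krb5 option and the SECURE_NFS switch are assumptions for RHEL 6, and a working Kerberos realm plus keytab are also required.

## /etc/exports: hypothetical Kerberos-authenticated export

/export *(rw,sync,sec=krb5)

## /etc/sysconfig/nfs on RHEL 6: makes the nfs init script start rpc.svcgssd / rpc.gssd

SECURE_NFS="yes"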



Run on the client:

showmount -e cloud.squirrel.org

clnt_create: RPC: Port mapper failure – Unable to receive: errno 111 (Connection refused)

showmount -e 192.168.205.129

Export list for 192.168.205.129:

/export *

mount -t nfs cloud.squirrel.org:/export/primary /primarymount

mount.nfs: Connection timed out

mount -t nfs 192.168.205.129:/export/primary /primarymount

mount.nfs: access denied by server while mounting 192.168.205.129:/export/primary

On the server, tail -200 /var/log/messages shows:

refused mount request from 192.168.205.1 for /export/primary (/export): illegal port 1024

Almost there. A quick search shows the export on the server needs the insecure option: the mount request came from a non-privileged source port (1024 here), and by default NFS only accepts requests from privileged ports below 1024 (the secure option).

gedit /etc/exports

Change the original line to:

/export *(rw,async,insecure,no_root_squash)

Then:

exportfs -rv

service nfs restart
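To confirm that insecure is now in the effective export table on the server, a quick check; a sketch:

# exportfs -v                        ## each export is listed with its effective option string

# grep insecure /var/lib/nfs/etab    ## the kernel export table entry for /export should now include insecure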

Then mount again on the client:

mount -t nfs 192.168.205.129:/export/primary /primarymount

No error message, so it should have succeeded; confirm with:

mount |grep primary

OK!

This article is reposted from zhuhc1988's 51CTO blog. Original link: http://blog.51cto.com/changeflyhigh/1953692. Please contact the original author before republishing.

