Preface
I recently set up a DevOps environment on a K8S 1.18.2 cluster and ran into all kinds of pitfalls along the way. Every one of those pitfalls has since been resolved, so I'm documenting the whole process here and sharing it with you!
The article and the yml files needed to build the environment are available at https://github.com/sunshinelyz/technology-binghe and https://gitee.com/binghe001/technology-binghe. If they help you, don't forget to give the repos a Star!
Server Plan
IP | Hostname | Role | OS |
---|---|---|---|
192.168.175.101 | binghe101 | K8S Master | CentOS 8.0.1905 |
192.168.175.102 | binghe102 | K8S Worker | CentOS 8.0.1905 |
192.168.175.103 | binghe103 | K8S Worker | CentOS 8.0.1905 |
Software Versions
Software | Version | Notes |
---|---|---|
Docker | 19.03.8 | Container runtime |
docker-compose | 1.25.5 | Defines and runs applications composed of multiple containers |
K8S | 1.18.2 | Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance. |
GitLab | 12.1.6 | Code repository (install either GitLab or SVN) |
Harbor | 1.10.2 | Private image registry |
Jenkins | 2.89.3 | Continuous integration and delivery |
SVN | 1.10.2 | Code repository (install either GitLab or SVN) |
JDK | 1.8.0_212 | Base Java runtime environment |
maven | 3.6.3 | Base build tool for the projects |
Passwordless SSH Between Servers
Run the following commands on every server.
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Copy the id_rsa.pub files from the binghe102 and binghe103 servers to the binghe101 server.
[root@binghe102 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/102
[root@binghe103 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/103
Run the following commands on the binghe101 server.
cat ~/.ssh/102 >> ~/.ssh/authorized_keys
cat ~/.ssh/103 >> ~/.ssh/authorized_keys
Then copy the authorized_keys file to the binghe102 and binghe103 servers.
[root@binghe101 ~]# scp .ssh/authorized_keys binghe102:/root/.ssh/authorized_keys
[root@binghe101 ~]# scp .ssh/authorized_keys binghe103:/root/.ssh/authorized_keys
Delete the 102 and 103 files under ~/.ssh on the binghe101 node.
rm ~/.ssh/102
rm ~/.ssh/103
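To confirm that passwordless login works (assuming the hostnames resolve, e.g. via /etc/hosts), you can SSH from each node to the others; the following commands should print the remote hostname without prompting for a password.
ssh binghe102 hostname
ssh binghe103 hostname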
Install the JDK
The JDK environment needs to be installed on every server. Download the JDK from the Oracle website; the version I use here is 1.8.0_212. After downloading, extract it and configure the system environment variables.
tar -zxvf jdk1.8.0_212.tar.gz
mv jdk1.8.0_212 /usr/local
Next, configure the system environment variables.
vim /etc/profile
The configuration entries are as follows.
JAVA_HOME=/usr/local/jdk1.8.0_212
CLASSPATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
Next, run the following command to make the system environment variables take effect.
source /etc/profile
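You can verify the installation with the following command; if the environment variables are configured correctly, it reports version 1.8.0_212.
java -version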
Install Maven
Download Maven from the Apache website; the version I use here is 3.6.3. After downloading, extract it and configure the system environment variables. Note that I rename the extracted directory to /usr/local/maven-3.6.3, which is the path referenced later when configuring Jenkins.
tar -zxvf apache-maven-3.6.3-bin.tar.gz
mv apache-maven-3.6.3 /usr/local/maven-3.6.3
Next, configure the system environment variables.
vim /etc/profile
The configuration entries are as follows.
JAVA_HOME=/usr/local/jdk1.8.0_212
MAVEN_HOME=/usr/local/maven-3.6.3
CLASSPATH=.:$JAVA_HOME/lib
PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH MAVEN_HOME PATH
Next, run the following command to make the system environment variables take effect.
source /etc/profile
Next, modify Maven's configuration file (conf/settings.xml under the Maven installation directory), as follows.
<localRepository>/home/repository</localRepository>
This stores the Jar packages Maven downloads under the /home/repository directory.
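You can verify the Maven installation, and the JDK it picks up, with the following command.
mvn -v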
Install the Docker Environment
This document sets up the Docker environment based on Docker 19.03.8.
Create an install_docker.sh script on every server with the following content.
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
dnf install yum*
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
systemctl enable docker.service
systemctl start docker.service
docker version
Give the install_docker.sh script executable permission on each server and run it.
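For example (a minimal sketch, assuming the script is in the current directory):
chmod a+x ./install_docker.sh
./install_docker.sh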
Install docker-compose
Note: install docker-compose on every server.
1. Download the docker-compose binary
curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
2. Give the docker-compose binary executable permission
chmod a+x /usr/local/bin/docker-compose
3. Check the docker-compose version
[root@binghe ~]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
Install the K8S Cluster Environment
This document sets up the K8S cluster based on K8S 1.18.2.
Install the K8S Base Environment
Create an install_k8s.sh script file on every server with the following content.
#Configure the Aliyun registry mirror accelerator
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
#Install nfs-utils
yum install -y nfs-utils
yum install -y wget
#Start nfs-server
systemctl start nfs-server
systemctl enable nfs-server
#Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
#Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
#Modify /etc/sysctl.conf
# If a setting already exists, modify it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# These settings may not exist yet; append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# Run this command to apply the settings
sysctl -p
# Configure the K8S yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Remove any old K8S versions
yum remove -y kubelet kubeadm kubectl
# Install kubelet, kubeadm, and kubectl. I install version 1.18.2 here; you could also install 1.17.2
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
# Change the docker cgroup driver to systemd
# # i.e. change this line in /usr/lib/systemd/system/docker.service: ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# # to: ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change, you may hit the following error when adding worker nodes
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
# Configure a docker registry mirror to improve image download speed and stability
# You can skip this step if access to https://hub.docker.io is fast and stable
# curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}
# Restart docker and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet
docker version
Give the install_k8s.sh script executable permission on each server and run it.
Initialize the Master Node
These operations are performed only on the binghe101 server.
1. Initialize the Master node's network environment
Note: the following commands need to be run manually on the command line.
# Run only on the master node
# export takes effect only in the current shell session; if you open a new shell window to continue the installation, rerun these export commands
export MASTER_IP=192.168.175.101
# Replace k8s.master with the dnsName you want
export APISERVER_NAME=k8s.master
# The pod subnet for Kubernetes. It is created by kubernetes during installation and does not need to exist in the physical network beforehand
export POD_SUBNET=172.18.0.1/16
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
2. Initialize the Master node
Create an init_master.sh script file on the binghe101 server with the following content.
#!/bin/bash
# Abort the script on any error
set -e
if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1mMake sure you have set the environment variables POD_SUBNET and APISERVER_NAME\033[0m"
  echo current POD_SUBNET=$POD_SUBNET
  echo current APISERVER_NAME=$APISERVER_NAME
  exit 1
fi
# See https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2 for the full set of configuration options
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF
# kubeadm init
# Depending on your network speed, this takes about 3 - 10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs
# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config
# Install the calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Installing calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml
Give the init_master.sh script executable permission and run it.
3. Check the Master node initialization result
(1) Make sure all pods are in the Running state
# Run the following command and wait 3-10 minutes until all pods are in the Running state
watch kubectl get pod -n kube-system -o wide
The actual run looks like this.
[root@binghe101 ~]# watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide binghe101: Sun May 10 11:01:32 2020
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-5b8b769fcd-5dtlp 1/1 Running 0 118s 172.18.203.66 binghe101 <none> <none>
calico-node-fnv8g 1/1 Running 0 118s 192.168.175.101 binghe101 <none> <none>
coredns-546565776c-27t7h 1/1 Running 0 2m1s 172.18.203.67 binghe101 <none> <none>
coredns-546565776c-hjb8z 1/1 Running 0 2m1s 172.18.203.65 binghe101 <none> <none>
etcd-binghe101 1/1 Running 0 2m7s 192.168.175.101 binghe101 <none> <none>
kube-apiserver-binghe101 1/1 Running 0 2m7s 192.168.175.101 binghe101 <none> <none>
kube-controller-manager-binghe101 1/1 Running 0 2m7s 192.168.175.101 binghe101 <none> <none>
kube-proxy-dvgsr 1/1 Running 0 2m1s 192.168.175.101 binghe101 <none> <none>
kube-scheduler-binghe101 1/1 Running 0 2m7s 192.168.175.101 binghe101 <none> <none>
(2) Check the Master node initialization result
kubectl get nodes -o wide
The actual run looks like this.
[root@binghe101 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
binghe101 Ready master 3m28s v1.18.2 192.168.175.101 <none> CentOS Linux 8 (Core) 4.18.0-80.el8.x86_64 docker://19.3.8
Initialize the Worker Nodes
1. Get the join command parameters
Run the following command on the Master node (binghe101) to get the join command parameters.
kubeadm token create --print-join-command
The actual run looks like this.
[root@binghe101 ~]# kubeadm token create --print-join-command
W0510 11:04:34.828126 56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
The output contains the following line.
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
This line is the join command we need.
Note: the token in the join command is valid for 2 hours; within those 2 hours, you can use it to initialize any number of worker nodes.
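If you are unsure whether a token is still valid, you can list the existing tokens and their expiry times on the Master node with the following command, or simply run kubeadm token create --print-join-command again to get a fresh one.
kubeadm token list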
2. Initialize the Worker nodes
Run the following on all worker nodes, which here means the binghe102 and binghe103 servers.
Run the following commands manually on the command line.
# Run only on the worker nodes
# 192.168.175.101 is the internal IP of the master node
export MASTER_IP=192.168.175.101
# Replace k8s.master with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=k8s.master
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
# Replace this with the join command output by kubeadm token create on the master node
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
The actual run looks like this.
[root@binghe102 ~]# export MASTER_IP=192.168.175.101
[root@binghe102 ~]# export APISERVER_NAME=k8s.master
[root@binghe102 ~]# echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
[root@binghe102 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
W0510 11:08:27.709263 42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The output shows that the Worker node has joined the K8S cluster.
Note: kubeadm join... is the join command output by the kubeadm token create command on the master node.
3. Check the initialization result
Run the following command on the Master node (binghe101) to check the result.
kubectl get nodes -o wide
The actual run looks like this.
[root@binghe101 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
binghe101 Ready master 20m v1.18.2
binghe102 Ready <none> 2m46s v1.18.2
binghe103 Ready <none> 2m46s v1.18.2
Note: adding the -o wide parameter to the kubectl get nodes command outputs more information.
Problems Caused by Restarting the K8S Cluster
1. Worker nodes fail and cannot start
If the Master node's IP address changes, the worker nodes cannot start. You need to reinstall the K8S cluster and make sure all nodes have fixed internal IP addresses.
2. Pods crash or cannot be accessed
After restarting the servers, check the running state of the pods with the following command.
kubectl get pods --all-namespaces
If you find many pods that are not in the Running state, delete the misbehaving pods with the following command.
kubectl delete pod <pod-name> -n <pod-namespace>
Note: if a pod was created by a controller such as a Deployment or StatefulSet, K8S will create a new pod as a replacement, and the restarted pod usually works normally.
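As a quick way to spot the problem pods, the following filter (my own convenience command, not part of the original steps) lists every pod that is not in the Running state.
kubectl get pods --all-namespaces | grep -v Running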
Install ingress-nginx on K8S
Note: run this on the Master node (binghe101).
1. Create the ingress-nginx namespace
Create the ingress-nginx-namespace.yaml file with the following content.
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx
Run the following command to create the ingress-nginx namespace.
kubectl apply -f ingress-nginx-namespace.yaml
2. Install the ingress controller
Create the ingress-nginx-mandatory.yaml file with the following content.
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---
Run the following command to install the ingress controller.
kubectl apply -f ingress-nginx-mandatory.yaml
3. Install the K8S Service: ingress-nginx
This Service is mainly used to expose the pod nginx-ingress-controller.
Create the service-nodeport.yaml file with the following content.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Run the following command to install it.
kubectl apply -f service-nodeport.yaml
4. Access the K8S Service: ingress-nginx
Check the deployments in the ingress-nginx namespace, as shown below.
[root@binghe101 k8s]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
default-http-backend-796ddcd9b-vfmgn 1/1 Running 1 10h
nginx-ingress-controller-58985cc996-87754 1/1 Running 2 10h
Enter the following command on the server's command line to check the port mappings of ingress-nginx.
kubectl get svc -n ingress-nginx
The output looks like this.
[root@binghe101 k8s]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.96.247.2 <none> 80/TCP 7m3s
ingress-nginx NodePort 10.96.40.6 <none> 80:30080/TCP,443:30443/TCP 4m35s
So ingress-nginx can be accessed through the IP address of the Master node (binghe101) and port 30080, as shown below.
[root@binghe101 k8s]# curl 192.168.175.101:30080
default backend - 404
You can also open http://192.168.175.101:30080 in a browser to access ingress-nginx, as shown below.
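After you later create Ingress resources for your own services, you can test host-based routing through this NodePort by passing a Host header; for example (some.domain.com stands for a hypothetical host configured in one of your Ingress rules):
curl -H "Host: some.domain.com" http://192.168.175.101:30080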
Install the GitLab Code Repository on K8S
Note: run this on the Master node (binghe101).
1. Create the k8s-ops namespace
Create the k8s-ops-namespace.yaml file with the following content.
apiVersion: v1
kind: Namespace
metadata:
  name: k8s-ops
  labels:
    name: k8s-ops
Run the following command to create the namespace.
kubectl apply -f k8s-ops-namespace.yaml
2. Install gitlab-redis
Create the gitlab-redis.yaml file with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
        - name: redis
          image: sameersbn/redis
          imagePullPolicy: IfNotPresent
          ports:
            - name: redis
              containerPort: 6379
          volumeMounts:
            - mountPath: /var/lib/redis
              name: data
          livenessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: data
          hostPath:
            path: /data1/docker/xinsrv/redis
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis
First, run the following command on the command line to create the /data1/docker/xinsrv/redis directory.
mkdir -p /data1/docker/xinsrv/redis
Run the following command to install gitlab-redis.
kubectl apply -f gitlab-redis.yaml
3. Install gitlab-postgresql
Create gitlab-postgresql.yaml with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  selector:
    matchLabels:
      name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
        - name: postgresql
          image: sameersbn/postgresql
          imagePullPolicy: IfNotPresent
          env:
            - name: DB_USER
              value: gitlab
            - name: DB_PASS
              value: passw0rd
            - name: DB_NAME
              value: gitlab_production
            - name: DB_EXTENSION
              value: pg_trgm
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: data
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -h
                - localhost
                - -U
                - postgres
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - pg_isready
                - -h
                - localhost
                - -U
                - postgres
            initialDelaySeconds: 5
            timeoutSeconds: 1
      volumes:
        - name: data
          hostPath:
            path: /data1/docker/xinsrv/postgresql
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql
First, run the following command to create the /data1/docker/xinsrv/postgresql directory.
mkdir -p /data1/docker/xinsrv/postgresql
Next, install gitlab-postgresql, as shown below.
kubectl apply -f gitlab-postgresql.yaml
4. Install gitlab
(1) Configure the username and password
First, use base64 on the command line to encode the username and password. In this example, the username is admin and the password is admin.1231.
The encoding looks like this.
[root@binghe101 k8s]# echo -n 'admin' | base64
YWRtaW4=
[root@binghe101 k8s]# echo -n 'admin.1231' | base64
YWRtaW4uMTIzMQ==
The encoded username is YWRtaW4= and the encoded password is YWRtaW4uMTIzMQ==.
You can also decode a base64-encoded string; for example, decoding the password string looks like this.
[root@binghe101 k8s]# echo 'YWRtaW4uMTIzMQ==' | base64 --decode
admin.1231
Next, create the secret-gitlab.yaml file, which is used to configure GitLab's username and password. The file content is as follows.
apiVersion: v1
kind: Secret
metadata:
  namespace: k8s-ops
  name: git-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: YWRtaW4uMTIzMQ==
Apply the configuration file, as shown below.
kubectl create -f ./secret-gitlab.yaml
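To confirm that the Secret exists, you can run the following check.
kubectl get secret git-user-pass -n k8s-ops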
(2) Install GitLab
Create the gitlab.yaml file with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  selector:
    matchLabels:
      name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
        - name: gitlab
          image: sameersbn/gitlab:12.1.6
          imagePullPolicy: IfNotPresent
          env:
            - name: TZ
              value: Asia/Shanghai
            - name: GITLAB_TIMEZONE
              value: Beijing
            - name: GITLAB_SECRETS_DB_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_SECRETS_SECRET_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_SECRETS_OTP_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: git-user-pass
                  key: password
            - name: GITLAB_ROOT_EMAIL
              value: 12345678@qq.com
            - name: GITLAB_HOST
              value: gitlab.binghe.com
            - name: GITLAB_PORT
              value: "80"
            - name: GITLAB_SSH_PORT
              value: "30022"
            - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
              value: "true"
            - name: GITLAB_NOTIFY_PUSHER
              value: "false"
            - name: GITLAB_BACKUP_SCHEDULE
              value: daily
            - name: GITLAB_BACKUP_TIME
              value: "01:00"
            - name: DB_TYPE
              value: postgres
            - name: DB_HOST
              value: postgresql
            - name: DB_PORT
              value: "5432"
            - name: DB_USER
              value: gitlab
            - name: DB_PASS
              value: passw0rd
            - name: DB_NAME
              value: gitlab_production
            - name: REDIS_HOST
              value: redis
            - name: REDIS_PORT
              value: "6379"
          ports:
            - name: http
              containerPort: 80
            - name: ssh
              containerPort: 22
          volumeMounts:
            - mountPath: /home/git/data
              name: data
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 180
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
      volumes:
        - name: data
          hostPath:
            path: /data1/docker/xinsrv/gitlab
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30088
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 30022
  type: NodePort
  selector:
    name: gitlab
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  namespace: k8s-ops
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: gitlab.binghe.com
      http:
        paths:
          - backend:
              serviceName: gitlab
              servicePort: http
Note: when configuring GitLab, the listening host cannot be an IP address; it must be a hostname or domain name. In the configuration above, I use the hostname gitlab.binghe.com.
Run the following command on the command line to create the /data1/docker/xinsrv/gitlab directory.
mkdir -p /data1/docker/xinsrv/gitlab
Install GitLab, as shown below.
kubectl apply -f gitlab.yaml
5. Installation complete
Check the deployments in the k8s-ops namespace, as shown below.
[root@binghe101 k8s]# kubectl get pod -n k8s-ops
NAME READY STATUS RESTARTS AGE
gitlab-7b459db47c-5vk6t 0/1 Running 0 11s
postgresql-79567459d7-x52vx 1/1 Running 0 30m
redis-67f4cdc96c-h5ckz 1/1 Running 1 10h
You can also check with the following command.
[root@binghe101 k8s]# kubectl get pod --namespace=k8s-ops
NAME READY STATUS RESTARTS AGE
gitlab-7b459db47c-5vk6t 0/1 Running 0 36s
postgresql-79567459d7-x52vx 1/1 Running 0 30m
redis-67f4cdc96c-h5ckz 1/1 Running 1 10h
Both commands give the same result.
Next, check GitLab's port mapping, as shown below.
[root@binghe101 k8s]# kubectl get svc -n k8s-ops
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gitlab NodePort 10.96.153.100 <none> 80:30088/TCP,22:30022/TCP 2m42s
postgresql ClusterIP 10.96.203.119 <none> 5432/TCP 32m
redis ClusterIP 10.96.107.150 <none> 6379/TCP 10h
As you can see, GitLab can now be accessed through the hostname gitlab.binghe.com of the Master node (binghe101) and port 30088. Since I use virtual machines for this environment, accessing gitlab.binghe.com from the local machine requires configuring the local hosts file by adding the following entry.
192.168.175.101 gitlab.binghe.com
Note: on Windows, the hosts file is in the following directory.
C:\Windows\System32\drivers\etc
Next, you can access GitLab in a browser via the link http://gitlab.binghe.com:30088, as shown below.
You can now log in to GitLab with the username root and the password admin.1231.
Note: the username is root rather than admin, because root is GitLab's default superuser.
The interface after logging in looks like this.
At this point, installing GitLab on K8S is complete.
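As a quick smoke test, you can create an empty project in the GitLab UI and clone it from the local machine; for example (some-project stands for a hypothetical project name):
git clone http://gitlab.binghe.com:30088/root/some-project.git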
Install the Harbor Private Registry
Note: here the Harbor private registry is installed on the Master node (binghe101). In a real production environment, it is recommended to install it on a separate server.
1. Download the Harbor offline installer
wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz
2. Extract the Harbor installation package
tar -zxvf harbor-offline-installer-v1.10.2.tgz
After extraction succeeds, a harbor directory is created in the server's current directory.
3. Configure Harbor
Note: here I change Harbor's port to 1180. If you don't change it, the default port is 80.
(1) Modify the harbor.yml file
cd harbor
vim harbor.yml
The modified configuration entries are as follows.
hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123
###Also comment out https, otherwise installation fails with the error: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
#  port: 443
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
(2) Modify the daemon.json file
Modify the /etc/docker/daemon.json file (create it if it doesn't exist) and add the following content to it.
[root@binghe~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.175.101:1180"]
}
You can also run the ip addr command on the server to list all of the machine's IP segments and add them to the /etc/docker/daemon.json file. Here, my configured file content looks like this.
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.175.0/16","172.17.0.0/16", "172.18.0.0/16", "172.16.29.0/16", "192.168.175.101:1180"]
}
4. Install and start Harbor
After the configuration is done, enter the following command to install and start Harbor.
[root@binghe harbor]# ./install.sh
5. Log in to Harbor and add an account
After a successful installation, open the link http://192.168.175.101:1180 in the browser's address bar, as shown below.
Enter the username admin and the password binghe123 to log in to the system, as shown below.
Next, go to user management and add an administrator account in preparation for building and pushing Docker images later. The steps for adding an account are shown below.
The password entered here is Binghe123.
After clicking OK, it looks like this.
At this point the binghe account is not yet an administrator; select the binghe account and click "Set as administrator".
Now the binghe account is set as an administrator. With this, the Harbor installation is complete.
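To verify that the new account can push to the registry, you can tag any local image into a Harbor project and push it; for example (test is a project that must already exist in Harbor, and some-image stands for any image you have locally):
docker login 192.168.175.101:1180 -u binghe -p Binghe123
docker tag some-image:latest 192.168.175.101:1180/test/some-image:latest
docker push 192.168.175.101:1180/test/some-image:latest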
6. Change Harbor's port
If you need to change Harbor's port after installation, follow the steps below. Here I take changing port 80 to port 1180 as an example.
(1) Modify the harbor.yml file
cd harbor
vim harbor.yml
The modified configuration entries are as follows.
hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123
###Also comment out https, otherwise installation fails with the error: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
#  port: 443
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
(2) Modify the docker-compose.yml file
vim docker-compose.yml
The modified configuration entries are as follows.
ports:
  - 1180:80
(3) Modify the config.yml file
cd common/config/registry
vim config.yml
The modified configuration entries are as follows.
realm: http://192.168.175.101:1180/service/token
(4) Restart Docker
systemctl daemon-reload
systemctl restart docker.service
(5) Restart Harbor
[root@binghe harbor]# docker-compose down
Stopping harbor-log ... done
Removing nginx ... done
Removing harbor-portal ... done
Removing harbor-jobservice ... done
Removing harbor-core ... done
Removing redis ... done
Removing registry ... done
Removing registryctl ... done
Removing harbor-db ... done
Removing harbor-log ... done
Removing network harbor_harbor
[root@binghe harbor]# ./prepare
prepare base dir is set to /mnt/harbor
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
[root@binghe harbor]# docker-compose up -d
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db ... done
Creating redis ... done
Creating registry ... done
Creating registryctl ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating harbor-portal ... done
Creating nginx ... done
[root@binghe harbor]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
Install Jenkins (the general approach)
1. Install nfs (skip this step if you installed it earlier)
The biggest problem when using nfs is write permissions. You can use kubernetes' securityContext/runAsUser to specify the uid of the user that runs jenkins inside the jenkins container, and thereby control the permissions on the nfs directory so the jenkins container can write to it; you can also leave it unrestricted so that all users can write. For simplicity, I let all users write here.
If you already installed nfs earlier, this step can be skipped. Find a host and install nfs on it; here, I take installing nfs on the Master node (binghe101) as an example.
Enter the following commands on the command line to install and start nfs.
yum install nfs-utils -y
systemctl start nfs-server
systemctl enable nfs-server
2. Create the nfs shared directory
Create the /opt/nfs/jenkins-data directory on the Master node (binghe101) as the nfs shared directory, as shown below.
mkdir -p /opt/nfs/jenkins-data
Next, edit the /etc/exports file, as shown below.
vim /etc/exports
Add the following line to the /etc/exports file.
/opt/nfs/jenkins-data 192.168.175.0/24(rw,all_squash)
The ip range here is the ip range of the kubernetes nodes. The all_squash option maps all accessing users to the nfsnobody user: no matter what user accesses the share, it is squashed to nfsnobody. So as long as the owner of /opt/nfs/jenkins-data is changed to nfsnobody, every accessing user has write permission.
This option is very effective when, because of non-standard uids across many machines, processes are started by different users that all need write permission to a single shared directory.
Next, grant permissions on the /opt/nfs/jenkins-data directory and reload nfs, as shown below.
chown -R 1000 /opt/nfs/jenkins-data/
systemctl reload nfs-server
Verify on any node in the K8S cluster with the following command:
showmount -e NFS_IP
If you can see /opt/nfs/jenkins-data, everything is OK.
The actual run looks like this.
[root@binghe101 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24
[root@binghe102 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24
3. Create the PV
Jenkins can actually pick up its previous data as long as the corresponding directory is mounted, but because a Deployment cannot template per-replica storage the way a StatefulSet's volumeClaimTemplates can, we use a StatefulSet here.
First, create the pv. The pv is for the StatefulSet to use: every time the StatefulSet starts, it creates a pvc from the volumeClaimTemplates template, so there must be a pv for the pvc to bind to.
Create the jenkins-pv.yaml file with the following content.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  nfs:
    path: /opt/nfs/jenkins-data
    server: 192.168.175.101
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 1Ti
I allocate 1Ti of storage here; adjust it to your actual configuration.
Run the following command to create the pv.
kubectl apply -f jenkins-pv.yaml
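You can check that the pv was created and is in the Available state with the following command.
kubectl get pv jenkins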
4. Create the serviceAccount
Create a service account, because jenkins needs to be able to create slaves dynamically later and must therefore have certain permissions.
Create the jenkins-service-account.yaml file with the following content.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
The configuration above creates a ServiceAccount, a Role, and a RoleBinding, and binds the Role's permissions to the ServiceAccount. The jenkins container must therefore run with this ServiceAccount, otherwise it will not have the permissions granted by the RoleBinding.
The Role's permissions are easy to understand: jenkins needs to create and delete slaves, hence the permissions above. As for the secrets permission, that is for https certificates.
Run the following command to create the serviceAccount.
kubectl apply -f jenkins-service-account.yaml
5. Install Jenkins
Create the jenkins-statefulset.yaml file with the following content.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: docker.io/jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 32100
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: 4
              memory: 4Gi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
  # pvc template, matching the pv created earlier
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Ti
When deploying jenkins, pay attention to its replica count: you need as many pvs as you have replicas, and storage consumption multiplies accordingly. I use only one replica here, so I only created one pv earlier.
Install Jenkins with the following command.
kubectl apply -f jenkins-statefulset.yaml
6. Create the Service
Create the jenkins-service.yaml file with the following content.
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  # type: LoadBalancer
  selector:
    name: jenkins
  # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  #externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      nodePort: 31888
      targetPort: 8080
      protocol: TCP
    - name: jenkins-agent
      port: 32100
      nodePort: 32100
      targetPort: 32100
      protocol: TCP
  type: NodePort
Install the Service with the following command.
kubectl apply -f jenkins-service.yaml
7. Install the ingress
Jenkins's web interface needs to be accessed from outside the cluster, and here we choose to use an ingress. Create the jenkins-ingress.yaml file with the following content.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 31888
      host: jekins.binghe.com
Note that host must be configured as a domain name or hostname here, otherwise you will get an error like the following.
The Ingress "jenkins" is invalid: spec.rules[0].host: Invalid value: "192.168.175.101": must be a DNS name, not an IP address
Install the ingress with the following command.
kubectl apply -f jenkins-ingress.yaml
Finally, since I use virtual machines for this environment, accessing jekins.binghe.com from the local machine requires configuring the local hosts file by adding the following entry.
192.168.175.101 jekins.binghe.com
Note: on Windows, the hosts file is in the following directory.
C:\Windows\System32\drivers\etc
Next, you can access Jenkins in a browser via the link http://jekins.binghe.com:31888.
Install SVN on a Physical Machine
Here, I take installing SVN on the Master node (binghe101) as an example.
1. Install SVN with yum
Run the following command on the command line to install SVN.
yum -y install subversion
2. Create the SVN repository
Run the following commands in order.
#Create /data/svn
mkdir -p /data/svn
#Initialize svn
svnserve -d -r /data/svn
#Create the code repository
svnadmin create /data/svn/test
3. Configure SVN
mkdir /data/svn/conf
cp /data/svn/test/conf/* /data/svn/conf/
cd /data/svn/conf/
[root@binghe101 conf]# ll
total 20
-rw-r--r-- 1 root root 1080 May 12 02:17 authz
-rw-r--r-- 1 root root  885 May 12 02:17 hooks-env.tmpl
-rw-r--r-- 1 root root  309 May 12 02:17 passwd
-rw-r--r-- 1 root root 4375 May 12 02:17 svnserve.conf
- Configure the authz file
vim authz
The configured content is as follows.
[aliases]
# joe = /C=XZ/ST=Dessert/L=Snake City/O=Snake Oil, Ltd./OU=Research Institute/CN=Joe Average
[groups]
# harry_and_sally = harry,sally
# harry_sally_and_joe = harry,sally,&joe
SuperAdmin = admin
binghe = admin,binghe
# [/foo/bar]
# harry = rw
# &joe = r
# * =
# [repository:/baz/fuz]
# @harry_and_sally = rw
# * = r
[test:/]
@SuperAdmin=rw
@binghe=rw
- Configure the passwd file
vim passwd
The configured content is as follows.
[users]
# harry = harryssecret
# sally = sallyssecret
admin = admin123
binghe = binghe123
- Configure svnserve.conf
vim svnserve.conf
The configured file is as follows.
### This file controls the configuration of the svnserve daemon, if you
### use it to allow access to this repository. (If you only allow
### access through http: and/or file: URLs, then this file is
### irrelevant.)
### Visit http://subversion.apache.org/ for more information.
[general]
### The anon-access and auth-access options control access to the
### repository for unauthenticated (a.k.a. anonymous) users and
### authenticated users, respectively.
### Valid values are "write", "read", and "none".
### Setting the value to "none" prohibits both reading and writing;
### "read" allows read-only access, and "write" allows complete
### read/write access to the repository.
### The sample settings below are the defaults and specify that anonymous
### users have read-only access to the repository, while authenticated
### users have read and write access to the repository.
anon-access = none
auth-access = write
### The password-db option controls the location of the password
### database file. Unless you specify a path starting with a /,
### the file's location is relative to the directory containing
### this configuration file.
### If SASL is enabled (see below), this file will NOT be used.
### Uncomment the line below to use the default password file.
password-db = /data/svn/conf/passwd
### The authz-db option controls the location of the authorization
### rules for path-based access control. Unless you specify a path
### starting with a /, the file's location is relative to the
### directory containing this file. The specified path may be a
### repository relative URL (^/) or an absolute file:// URL to a text
### file in a Subversion repository. If you don't specify an authz-db,
### no path-based access control is done.
### Uncomment the line below to use the default authorization file.
authz-db = /data/svn/conf/authz
### The groups-db option controls the location of the file with the
### group definitions and allows maintaining groups separately from the
### authorization rules. The groups-db file is of the same format as the
### authz-db file and should contain a single [groups] section with the
### group definitions. If the option is enabled, the authz-db file cannot
### contain a [groups] section. Unless you specify a path starting with
### a /, the file's location is relative to the directory containing this
### file. The specified path may be a repository relative URL (^/) or an
### absolute file:// URL to a text file in a Subversion repository.
### This option is not being used by default.
# groups-db = groups
### This option specifies the authentication realm of the repository.
### If two repositories have the same authentication realm, they should
### have the same password database, and vice versa. The default realm
### is repository's uuid.
realm = svn
### The force-username-case option causes svnserve to case-normalize
### usernames before comparing them against the authorization rules in the
### authz-db file configured above. Valid values are "upper" (to upper-
### case the usernames), "lower" (to lowercase the usernames), and
### "none" (to compare usernames as-is without case conversion, which
### is the default behavior).
# force-username-case = none
### The hooks-env options specifies a path to the hook script environment
### configuration file. This option overrides the per-repository default
### and can be used to configure the hook script environment for multiple
### repositories in a single file, if an absolute path is specified.
### Unless you specify an absolute path, the file's location is relative
### to the directory containing this file.
# hooks-env = hooks-env
[sasl]
### This option specifies whether you want to use the Cyrus SASL
### library for authentication. Default is false.
### Enabling this option requires svnserve to have been built with Cyrus
### SASL support; to check, run 'svnserve --version' and look for a line
### reading 'Cyrus SASL authentication is available.'
# use-sasl = true
### These options specify the desired strength of the security layer
### that you want SASL to provide. 0 means no encryption, 1 means
### integrity-checking only, values larger than 1 are correlated
### to the effective key length for encryption (e.g. 128 means 128-bit
### encryption). The values below are the defaults.
# min-encryption = 0
# max-encryption = 256
Next, copy the svnserve.conf file from the /data/svn/conf directory to the /data/svn/test/conf/ directory, as shown below.
[root@binghe101 conf]# cp /data/svn/conf/svnserve.conf /data/svn/test/conf/
cp: overwrite '/data/svn/test/conf/svnserve.conf'? y
4. Start the SVN service
(1) Create the svnserve.service service
Create the svnserve.service file.
vim /usr/lib/systemd/system/svnserve.service
The file content is as follows.
[Unit]
Description=Subversion protocol daemon
After=syslog.target network.target
Documentation=man:svnserve(8)
[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/svnserve
#ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
ExecStart=/usr/bin/svnserve --daemon $OPTIONS
PrivateTmp=yes
[Install]
WantedBy=multi-user.target
Next, run the following command to make the configuration take effect.
systemctl daemon-reload
After the command succeeds, modify the /etc/sysconfig/svnserve file.
vim /etc/sysconfig/svnserve
The modified file content is as follows.
# OPTIONS is used to pass command-line arguments to svnserve.
#
# Specify the repository location in -r parameter:
OPTIONS="-r /data/svn"
(2) Start SVN
First, check the SVN status, as shown below.
[root@itence10 conf]# systemctl status svnserve.service
● svnserve.service - Subversion protocol daemon
Loaded: loaded (/usr/lib/systemd/system/svnserve.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:svnserve(8)
As you can see, SVN has not been started yet. Next, start SVN.
systemctl start svnserve.service
Set the SVN service to start on boot.
systemctl enable svnserve.service
Next, you can download and install TortoiseSVN, enter the URL svn://192.168.175.101/test, and connect to SVN with the username binghe and the password binghe123.
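The same connection can also be verified from the command line; for example, the following command checks out the test repository into a local working copy.
svn checkout svn://192.168.175.101/test --username binghe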
Install Jenkins on a Physical Machine
Note: the JDK and Maven must be installed before installing Jenkins. I again install Jenkins on the Master node (binghe101) here.
1. Enable the Jenkins repository
Run the following commands to download the repo file and import the GPG key:
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
2. Install Jenkins
Run the following command to install Jenkins.
yum install jenkins
Next, modify Jenkins's default port, as shown below.
vim /etc/sysconfig/jenkins
The two modified configuration entries are as follows.
JENKINS_JAVA_CMD="/usr/local/jdk1.8.0_212/bin/java"
JENKINS_PORT="18080"
This changes Jenkins's port from 8080 to 18080.
3. Start Jenkins
Enter the following command on the command line to start Jenkins.
systemctl start jenkins
Configure Jenkins to start on boot.
systemctl enable jenkins
Check Jenkins's running status.
[root@itence10 ~]# systemctl status jenkins
● jenkins.service - LSB: Jenkins Automation Server
Loaded: loaded (/etc/rc.d/init.d/jenkins; generated)
Active: active (running) since Tue 2020-05-12 04:33:40 EDT; 28s ago
Docs: man:systemd-sysv-generator(8)
Tasks: 71 (limit: 26213)
Memory: 550.8M
This shows that Jenkins started successfully.
Configure the Jenkins Runtime Environment
1. Log in to Jenkins
After the first installation, you need to configure Jenkins's runtime environment. First, visit the link http://192.168.175.101:18080 in the browser's address bar to open the Jenkins interface.
As prompted, use the following command on the server to find the password value, as shown below.
[root@binghe101 ~]# cat /var/lib/jenkins/secrets/initialAdminPassword
71af861c2ab948a1b6efc9f7dde90776
Copy the password 71af861c2ab948a1b6efc9f7dde90776 into the text box and click Continue. You will be taken to the Customize Jenkins page, as shown below.
Here you can simply choose "Install suggested plugins". You are then taken to a plugin installation page, as shown below.
Some downloads may fail during this step; you can simply ignore them.
2. Install plugins
The plugins that need to be installed:
- Kubernetes Cli Plugin: lets you operate kubernetes directly from the command line in Jenkins.
- Kubernetes plugin: required in order to use kubernetes.
- Kubernetes Continuous Deploy Plugin: a kubernetes deployment plugin; use it as needed.
There are many more plugins to choose from. Click Manage Jenkins -> Manage Plugins to manage and add plugins, and install the corresponding Docker, SSH, and Maven plugins. Other plugins can be installed as needed, as shown in the figure below.
3. Configure Jenkins
(1) Configure the JDK and Maven
Configure the JDK and Maven in Global Tool Configuration. Open the Global Tool Configuration interface, as shown below.
Now configure the JDK and Maven.
Since I installed Maven in the /usr/local/maven-3.6.3 directory on the server, it needs to be configured in "Maven Configuration", as shown in the figure below.
Next, configure the JDK, as shown below.
Note: do not check "Install automatically".
Next, configure Maven, as shown below.
Note: do not check "Install automatically".
(2) Configure SSH
Go to Jenkins's Configure System interface to configure SSH, as shown below.
Find SSH remote hosts and configure it.
After the configuration is complete, click the Check connection button and it will display Successfull connection, as shown below.
At this point, Jenkins's basic configuration is complete.
Publish a Docker Project to the K8S Cluster with Jenkins
1. Adjust the SpringBoot project's configuration
First, the pom.xml of the module containing the SpringBoot project's startup class needs the configuration for packaging into a Docker image, as shown below.
<properties>
    <docker.repostory>192.168.175.101:1180</docker.repostory>
    <docker.registry.name>test</docker.registry.name>
    <docker.image.tag>1.0.0</docker.image.tag>
    <docker.maven.plugin.version>1.4.10</docker.maven.plugin.version>
</properties>
<build>
    <finalName>test-starter</finalName>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <!-- Docker maven plugin, see: https://github.com/spotify/docker-maven-plugin -->
        <!-- Dockerfile maven plugin -->
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>${docker.maven.plugin.version}</version>
            <executions>
                <execution>
                    <id>default</id>
                    <goals>
                        <!-- Comment out this goal if you don't want Docker packaging during package -->
                        <goal>build</goal>
                        <goal>push</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <contextDirectory>${project.basedir}</contextDirectory>
                <!-- Harbor registry username and password -->
                <useMavenSettingsForAuth>true</useMavenSettingsForAuth>
                <repository>${docker.repostory}/${docker.registry.name}/${project.artifactId}</repository>
                <tag>${docker.image.tag}</tag>
                <buildArgs>
                    <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                </buildArgs>
            </configuration>
        </plugin>
    </plugins>
    <resources>
        <!-- Treat all files and folders under src/main/resources as resources -->
        <resource>
            <directory>src/main/resources</directory>
            <targetPath>${project.build.directory}/classes</targetPath>
            <includes>
                <include>**/*</include>
            </includes>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>
Next, create a Dockerfile in the root directory of the module containing the SpringBoot startup class. An example of its content is shown below.
#Base environment. The prerequisite is that the Java 8 Docker image has been pulled from the official registry and pushed to your own Harbor private registry
FROM 192.168.175.101:1180/library/java:8
#Specify the image author
MAINTAINER binghe
#Runtime directory
VOLUME /tmp
#Copy the local file into the container
ADD target/*jar app.jar
#Command run automatically after the container starts
ENTRYPOINT [ "java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar" ]
Adapt this to your own situation.
Note: the prerequisite for FROM 192.168.175.101:1180/library/java:8 is that the following commands have been run beforehand.
docker pull java:8
docker tag java:8 192.168.175.101:1180/library/java:8
docker login 192.168.175.101:1180
docker push 192.168.175.101:1180/library/java:8
Create a yaml file, for example named test.yaml, in the root directory of the module containing the SpringBoot startup class, with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-starter
  template:
    metadata:
      labels:
        app: test-starter
    spec:
      containers:
        - name: test-starter
          image: 192.168.175.101:1180/test/test-starter:1.0.0
          ports:
            - containerPort: 8088
      nodeSelector:
        clustertype: node12
---
apiVersion: v1
kind: Service
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  ports:
    - name: http
      port: 8088
      nodePort: 30001
  type: NodePort
  selector:
    app: test-starter
2. Configure the project release in Jenkins
Upload the project to the SVN code repository, for example at svn://192.168.175.101/test
Next, configure automatic publishing in Jenkins. The steps are as follows.
Click New Item.
Enter the description in the description text box, as shown below.
Next, configure the SVN information.
Note: the steps for configuring GitLab are the same as for SVN and are not repeated here.
Go to Jenkins's "Build" section and use Execute Shell to build and publish the project to the K8S cluster.
The commands executed are, in order, as follows.
#Delete the existing local image. This does not affect the image in the Harbor registry
docker rmi 192.168.175.101:1180/test/test-starter:1.0.0
#Compile and build the Docker image with Maven. After this finishes, the image is rebuilt in the local Docker repository
/usr/local/maven-3.6.3/bin/mvn -f ./pom.xml clean install -Dmaven.test.skip=true
#Log in to the Harbor registry
docker login 192.168.175.101:1180 -u binghe -p Binghe123
#Push the image to the Harbor registry
docker push 192.168.175.101:1180/test/test-starter:1.0.0
#Stop and delete what is currently running in the K8S cluster
/usr/bin/kubectl delete -f test.yaml
#Republish the Docker image to the K8S cluster
/usr/bin/kubectl apply -f test.yaml
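One caveat: on the very first build, the docker rmi and kubectl delete steps fail because neither the image nor the deployment exists yet, which aborts the Jenkins build. A small hedge (my own adjustment, not part of the original steps) is to tolerate failures on just those two commands:
docker rmi 192.168.175.101:1180/test/test-starter:1.0.0 || true
/usr/bin/kubectl delete -f test.yaml || true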
Well, that's all for today. I'm Binghe (冰河), and I'll see you in the next installment!