Environment Overview
Host Information
Note: due to limited resources, the installation is done on three VMware virtual machines.
Hostname | IP | OS | Configuration |
---|---|---|---|
k8s-master | 192.168.199.101 | CentOS 7.9 | 2 CPUs, 4 GB RAM, 100 GB disk |
k8s-node01 | 192.168.199.102 | CentOS 7.9 | 2 CPUs, 4 GB RAM, 100 GB disk |
k8s-node02 | 192.168.199.103 | CentOS 7.9 | 2 CPUs, 4 GB RAM, 100 GB disk |
Software Versions
Software | Version |
---|---|
containerd | v1.7.14 |
k8s | v1.28.0 |
flannel | v0.25.1 |
traefik | v2.11 |
Environment Initialization
Note: run the initialization steps on all hosts.
Configure the yum repositories
cd /etc/yum.repos.d/
mkdir bak ; mv *.repo bak/
curl https://mirrors.aliyun.com/repo/Centos-7.repo -o Centos-7.repo
curl https://mirrors.aliyun.com/repo/epel-7.repo -o epel-7.repo
sed -i '/aliyuncs/d' Centos-7.repo
# Add the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
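Optionally, rebuild the yum metadata cache so the new repositories take effect right away:
yum clean all
yum makecache fast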
Set the hostname
hostnamectl set-hostname k8s-master
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.199.101 k8s-master
192.168.199.102 k8s-node01
192.168.199.103 k8s-node02
# copy to the two node hosts
root@k8s-master(192.168.199.101)~>for i in 102 103; do scp /etc/hosts 192.168.199.$i:/etc/ ; done
Configure the NTP service
yum install chrony ntpdate -y
sed "s/^server/#server/g" /etc/chrony.conf
echo 'server tiger.sina.com.cn iburst' >> /etc/chrony.conf
echo 'server ntp1.aliyun.com iburst' >> /etc/chrony.conf
systemctl enable chronyd ; systemctl start chronyd
ntpdate tiger.sina.com.cn
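To confirm that time synchronization is working, you can check chrony's sources (optional):
chronyc sources -v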
Disable SELinux and firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld; systemctl stop firewalld
After these changes, it is recommended to reboot the host.
reboot
Disable swap
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
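A quick check that swap is really off (optional):
free -h           # the Swap line should read 0
cat /proc/swaps   # should list no swap devices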
Load kernel modules
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
Configure kernel parameters
cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
sysctl --system
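Optionally verify that the key parameters took effect:
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables   # both should print 1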
Enable IPVS support
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum install -y ipset ipvsadm
Deploy containerd
Note: install containerd on all hosts.
nerdctl download URL: https://github.com/containerd/nerdctl/releases/download/v1.7.5/nerdctl-full-1.7.5-linux-amd64.tar.gz
tar xf nerdctl-full-1.7.5-linux-amd64.tar.gz -C /usr/local/
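The "full" tarball bundles containerd, runc, the CNI plugins, BuildKit and nerdctl (plus their systemd unit files) under /usr/local, which is why containerd and buildkit can be enabled as services later. A quick sanity check:
nerdctl version
containerd --version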
Generate the containerd configuration file
mkdir -p /etc/containerd/
cd /etc/containerd/
containerd config default > config.toml
vim config.toml
...
SystemdCgroup = false          # change this to true
...
Then modify the following in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri"]
...
# sandbox_image = "k8s.gcr.io/pause:3.6"
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9" #這裡一定要注意,要根據下載到本地 pause映象的版本來進行修改,否則初始化會過不去。
Start the services
systemctl enable --now containerd buildkit
Check the version
ctr version
Client:
Version: v1.7.14
Revision: dcf2847247e18caba8dce86522029642f60fe96b
Go version: go1.21.8
Server:
Version: v1.7.14
Revision: dcf2847247e18caba8dce86522029642f60fe96b
UUID: 426750f8-14ca-4490-8cca-3ded2cc2a21c
Installation on k8s-master
Deploy Kubernetes with kubeadm
Note: run this section only on the k8s-master node.
Install the packages
yum install -y kubeadm-1.28.0 kubelet-1.28.0 kubectl-1.28.0
Generate the default configuration file
kubeadm completion bash > /etc/bash_completion.d/kubeadm
kubectl completion bash > /etc/bash_completion.d/kubectl
source /etc/bash_completion.d/kubectl /etc/bash_completion.d/kubeadm
kubeadm config print init-defaults > kubeadm-init.yml
Modify the configuration file
vim kubeadm-init.yml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.2.3.4
bindPort: 6443
nodeRegistration:
criSocket: unix:///var/run/containerd/containerd.sock
imagePullPolicy: IfNotPresent
name: node
taints: null
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
dnsDomain: cluster.local
serviceSubnet: 10.96.0.0/12
scheduler: {}
-------------------- Modified version --------------------
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 0s # set the token to never expire
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.199.101 # set to the k8s-master node IP
bindPort: 6443
nodeRegistration:
criSocket: unix:///var/run/containerd/containerd.sock
imagePullPolicy: IfNotPresent
name: k8s-master # set to the hostname
taints: null
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # switch to a domestic mirror registry
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
dnsDomain: cluster.local
serviceSubnet: 10.96.0.0/12
podSubnet: 10.244.0.0/16 # specify the pod network CIDR
---
# declare systemd as the cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
# enable ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
List and pull the images
# list the images
kubeadm config images list --config=kubeadm-init.yml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1
# pull the images
kubeadm config images pull --config=kubeadm-init.yml
Enable kubelet at boot
# without this, cluster initialization prints a warning
systemctl enable kubelet.service
Initialize the Kubernetes cluster
kubeadm init --config=kubeadm-init.yml | tee kubeadm-init.log
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.199.101:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:94805e71436365f20bca9e1e4a63509578bdc39c2428302c915b0c01fc111430
Set up access to the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
View the nodes
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 105s v1.28.0
Install the flannel network plugin
Note: run this section only on the k8s-master node.
Download the manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Apply it
kubectl apply -f kube-flannel.yml
View the Kubernetes namespaces
kubectl get ns
NAME STATUS AGE
default Active 3m44s
kube-flannel Active 23s
kube-node-lease Active 3m44s
kube-public Active 3m44s
kube-system Active 3m44s
kubectl get po -n kube-flannel
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-465rx 1/1 Running 0 29s
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 3m57s v1.28.0
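Since KubeProxyConfiguration was set to ipvs mode above, you can optionally confirm that kube-proxy picked it up:
kubectl -n kube-system get cm kube-proxy -o yaml | grep -w mode   # expect: mode: ipvs
ipvsadm -Ln | head                                                # IPVS virtual servers should be listed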
Operations on the k8s-node nodes
Install the packages
yum install -y kubeadm-1.28.0 kubelet-1.28.0 kubectl-1.28.0
Enable kubelet at boot
# without this, kubeadm prints a warning when the node joins the cluster
systemctl enable kubelet.service
Join the cluster
kubeadm join 192.168.199.101:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:94805e71436365f20bca9e1e4a63509578bdc39c2428302c915b0c01fc111430
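If the join command is ever lost (the token itself does not expire because ttl was set to 0s), a fresh one can be printed on k8s-master:
kubeadm token create --print-join-command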
Using the cluster
View the cluster nodes
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 23m v1.28.0
k8s-node01 Ready <none> 62s v1.28.0
Create a pod
kubectl run ngx --image=nginx:alpine --port=80 --restart=Always
View the pod
kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ngx 1/1 Running 0 16s 10.244.1.2 k8s-node01 <none> <none>
Create a service
kubectl expose pod ngx --port=80 --target-port=80 --name=ngx
View the service
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32m
ngx ClusterIP 10.110.223.232 <none> 80/TCP 22s
Within the cluster, the pod's service can be reached directly via its cluster IP:
curl -I 10.110.223.232
HTTP/1.1 200 OK
Server: nginx/1.25.4
Date: Tue, 16 Apr 2024 03:51:18 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 14 Feb 2024 16:20:36 GMT
Connection: keep-alive
ETag: "65cce854-267"
Accept-Ranges: bytes
ingress-controller
Note: run these steps on k8s-master.
We had been using traefik as the ingress-controller; for this project we are switching to APISIX as the ingress-controller.
APISIX official documentation: https://apisix.apache.org/docs/ingress-controller/getting-started/
Traefik vs APISIX: https://apisix.incubator.apache.org/zh/blog/2022/12/19/apisix-ingress-better-than-traefik/
After combing through the official documentation and other online material for a long time without getting the result I wanted, this installation and debugging process is recorded here in detail. The official recommendation is to install with helm.
Install helm
wget https://get.helm.sh/helm-v3.14.4-linux-amd64.tar.gz
tar xf helm-v3.14.4-linux-amd64.tar.gz
cp -a linux-amd64/helm /usr/local/bin/
helm version
version.BuildInfo{Version:"v3.14.4", GitCommit:"81c902a123462fd4052bc5e9aa9c513c4c8fc142", GitTreeState:"clean", GoVersion:"go1.21.9"}
Download apisix
helm repo add apisix https://charts.apiseven.com
helm repo update
helm pull apisix/apisix
tar xf apisix-2.6.0.tgz
cd apisix
Common apisix configuration
Anyone who has used helm knows that a chart has to be customized for your own needs, which makes this step especially important.
The official helm installation guide for apisix (https://apisix.apache.org/docs/helm-chart/apisix/) only gives a generic example; here we need to adapt it to our own environment.
etcd cluster
First, apisix creates a three-node etcd cluster. For availability, keep the following in mind:
- The three etcd members must land on three different physical nodes.
- etcd data must be persisted, which requires a storageclass.
A storageclass therefore has to be configured. In this environment (one master and two nodes, with no dedicated storage backend), the fallback approach is:
- Create a fixed directory on every node, point a PV at that directory, and bind the PV to its PVC through the storageclass.
Create the directory on every host:
# this directory holds the persisted etcd data
mkdir -p /data/k8s/etcd-data
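A hypothetical convenience loop for creating the directory on all three hosts from the master, assuming passwordless root SSH is already set up:
for h in k8s-master k8s-node01 k8s-node02; do ssh root@$h 'mkdir -p /data/k8s/etcd-data'; done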
Create the PVs
vim pv-local.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-local-1 # note the name
spec:
capacity:
storage: 20Gi # capacity
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage # referenced when creating the StorageClass
local:
path: /data/k8s/etcd-data # local persistence directory
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-master # the node this PV is pinned to
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-local-2 # note the name
spec:
capacity:
storage: 20Gi # capacity
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage # referenced when creating the StorageClass
local:
path: /data/k8s/etcd-data # local persistence directory
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node01 # the node this PV is pinned to
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-local-3 # note the name
spec:
capacity:
storage: 20Gi # capacity
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage # referenced when creating the StorageClass
local:
path: /data/k8s/etcd-data # local persistence directory
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node02 # the node this PV is pinned to
Apply the manifest:
kubectl apply -f pv-local.yaml
Create the storageclass
vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Apply the manifest:
kubectl apply -f storageclass.yaml
View the result
kubectl get pv,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-local-1 20Gi RWO Delete Available local-storage 3m9s
persistentvolume/pv-local-2 20Gi RWO Delete Available local-storage 3m8s
persistentvolume/pv-local-3 20Gi RWO Delete Available local-storage 3m8s
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 21s
Next, configure the etcd helm chart:
cd apisix/charts/etcd/
vim values.yaml
18 storageClass: "local-storage" # line 18: set the storageClass to local-storage
One more thing to consider: this is a three-node cluster, but the master does not take part in pod scheduling, so a three-member etcd cluster cannot be formed as is. Pods therefore need to be allowed onto the master node, with the following configuration:
# tolerate every taint key, i.e. allow scheduling onto the master node
vim values.yaml
452 tolerations:
453 - operator: "Exists"
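To see why the toleration is needed, you can inspect the control-plane taint that kubeadm applies to the master (output may vary slightly by version):
kubectl describe node k8s-master | grep -i taints
# Taints: node-role.kubernetes.io/control-plane:NoSchedule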
Run apisix as a DaemonSet
By default the apisix pods use a Deployment controller; change this to a DaemonSet so that the ingress controller is reachable from every physical node.
cd apisix/
vim values.yaml
# set to true to use the DaemonSet controller
useDaemonSet: true
...
# tolerate all taints so the pods can also be scheduled onto the master node
tolerations:
- operator: "Exists"
...
# enable the dashboard
dashboard:
enabled: true
...
# enable Kubernetes-based service discovery
...
envs:
- KUBERNETES_SERVICE_HOST: "kubernetes.default.svc.cluster.local"
- KUBERNETES_SERVICE_PORT: "443"
...
rbac:
create: true
...
discovery:
enabled: true
registry:
kubernetes:
service:
schema: https
host: ${KUBERNETES_SERVICE_HOST}
port: ${KUBERNETES_SERVICE_PORT}
# is this token needed?
client:
token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
namespace_selector:
equal: default
shared_size: 1m
watch_endpoint_slices: false
# configure the ingress-controller
ingress-controller:
enabled: true
config:
kubernetes:
enableGatewayAPI: true
apisix:
adminAPIVersion: "v3"
serviceNamespace: ingress-apisix
Run the helm install
helm install apisix . --namespace ingress-apisix --create-namespace -f values.yaml
NAME: apisix
LAST DEPLOYED: Wed Apr 24 11:21:11 2024
NAMESPACE: ingress-apisix
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace ingress-apisix -o jsonpath="{.spec.ports[0].nodePort}" services apisix-gateway)
export NODE_IP=$(kubectl get nodes --namespace ingress-apisix -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
View the pods and services
kubectl get po -n ingress-apisix -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
apisix-dashboard-9f6696d8f-z5f9x 1/1 Running 4 (3m36s ago) 4m47s 10.244.1.4 k8s-node02 <none> <none>
apisix-wbx79 1/1 Running 0 20s 10.244.0.8 k8s-master <none> <none>
apisix-7nt8t 1/1 Running 0 4m47s 10.244.2.3 k8s-node01 <none> <none>
apisix-jgqfn 1/1 Running 0 72s 10.244.1.8 k8s-node02 <none> <none>
apisix-etcd-1 1/1 Running 0 39s 10.244.0.7 k8s-master <none> <none>
apisix-etcd-0 1/1 Running 0 4m47s 10.244.2.4 k8s-node01 <none> <none>
apisix-etcd-2 1/1 Running 0 101s 10.244.1.7 k8s-node02 <none> <none>
apisix-ingress-controller-7dd4cd4f5-9pbn6 1/1 Running 0 102s 10.244.2.5 k8s-node01 <none> <none>
The output above has been manually reordered; as you can see, both etcd and apisix now run three pods, one on each node, as required.
Services
kubectl get svc -n ingress-apisix
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apisix-admin ClusterIP 10.104.110.134 <none> 9180/TCP 8m30s
apisix-dashboard ClusterIP 10.104.148.32 <none> 80/TCP 8m30s
apisix-etcd ClusterIP 10.103.56.180 <none> 2379/TCP,2380/TCP 8m30s
apisix-etcd-headless ClusterIP None <none> 2379/TCP,2380/TCP 8m30s
apisix-gateway NodePort 10.110.254.20 <none> 80:30952/TCP 8m30s
apisix-ingress-controller ClusterIP 10.101.74.8 <none> 80/TCP 5m26s
apisix-ingress-controller-apisix-gateway NodePort 10.106.101.32 <none> 80:32029/TCP,443:30677/TCP 5m26s
Make the apisix gateway listen on port 80
Without a load balancer, you generally want the gateway to listen on port 80 or 443, which requires the changes below.
It is best not to edit the controller objects directly; instead, modify the chart and then run an upgrade.
vim apisix/templates/deployment.yaml
...
ports:
- name: http
containerPort: {{ .Values.service.http.containerPort }}
hostPort: {{ .Values.service.http.hostPort }} # map the port directly via the pod's hostPort
protocol: TCP
{{- range .Values.service.http.additionalContainerPorts }}
- name: http-{{ .port | toString }}
containerPort: {{ .port }}
protocol: TCP
{{- end }}
- name: tls
containerPort: {{ .Values.apisix.ssl.containerPort }}
hostPort: {{ .Values.apisix.ssl.hostPort }} # map the port directly via the pod's hostPort
protocol: TCP
...
Then define the values in values.yaml:
vim apisix/values.yaml
http:
enabled: true
servicePort: 80
hostPort: 80
containerPort: 9080
# -- Support multiple http ports, See [Configuration](https://github.com/apache/apisix/blob/0bc65ea9acd726f79f80ae0abd8f50b7eb172e3d/conf/config-default.yaml#L24)
additionalContainerPorts: []
# - port: 9081
# enable_http2: true # If not set, the default value is `false`.
# - ip: 127.0.0.2 # Specific IP, If not set, the default value is `0.0.0.0`.
# port: 9082
# enable_http2: true
# -- Apache APISIX service settings for tls
tls:
servicePort: 443
hostPort: 443
After the changes above, upgrade the chart:
cd apisix/
helm upgrade apisix . --namespace ingress-apisix --create-namespace -f values.yaml
Release "apisix" has been upgraded. Happy Helming!
NAME: apisix
LAST DEPLOYED: Thu Apr 25 16:15:51 2024
NAMESPACE: ingress-apisix
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace ingress-apisix -o jsonpath="{.spec.ports[0].nodePort}" services apisix-gateway)
export NODE_IP=$(kubectl get nodes --namespace ingress-apisix -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
Access port 80 in a browser
At this point, the gateway is listening on port 80.
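A quick check from outside the cluster; with no routes configured yet, APISIX typically answers with a 404 (the response shown is illustrative):
curl -i http://192.168.199.101/
# HTTP/1.1 404 Not Found
# ...
# {"error_msg":"404 Route Not Found"}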
Change the apisix dashboard to listen on port 9000
To make the dashboard listen on port 9000, use the same approach as above.
vim apisix/charts/apisix-dashboard/templates/deployment.yaml
...
ports:
- name: http
containerPort: {{ .Values.config.conf.listen.port }}
hostPort: {{ .Values.config.conf.listen.hostPort }}
...
Modify values.yaml:
vim apisix/charts/apisix-dashboard/values.yaml
...
config:
conf:
listen:
# -- The address on which the Manager API should listen.
# The default value is 0.0.0.0, if want to specify, please enable it.
# This value accepts IPv4, IPv6, and hostname.
host: 0.0.0.0
# -- The port on which the Manager API should listen.
port: 9000
hostPort: 9000
...
After the changes above, upgrade the chart:
cd apisix/
helm upgrade apisix . --namespace ingress-apisix --create-namespace -f values.yaml
Release "apisix" has been upgraded. Happy Helming!
NAME: apisix
LAST DEPLOYED: Thu Apr 25 16:25:27 2024
NAMESPACE: ingress-apisix
STATUS: deployed
REVISION: 3
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace ingress-apisix -o jsonpath="{.spec.ports[0].nodePort}" services apisix-gateway)
export NODE_IP=$(kubectl get nodes --namespace ingress-apisix -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
To access port 9000 in a browser, first determine which physical node the dashboard pod is running on:
kubectl get po -n ingress-apisix -o wide | egrep dashboard
apisix-dashboard-fd4d9fdc8-wrdnv 1/1 Running 0 69s 10.244.2.7 k8s-node02 <none> <none>
It is running on k8s-node02, whose IP is 192.168.199.103.
Open that address in a browser.
Default username: admin, password: admin.
Routing rules can now be configured directly through the dashboard.
Configure routing rules through the dashboard
Create test pods
kubectl create deployment ngx --image nginx:alpine --replicas 2 --port 80
kubectl expose deployment ngx --port 80 --target-port 80 --name ngx
Configure the rule in the dashboard
Click Next.
Then just keep clicking Next and finally Submit.
The route works; you can modify the page served by each pod and refresh to verify that requests are balanced across them.
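As a declarative alternative to clicking through the dashboard, roughly the same route can be created with an ApisixRoute resource (a sketch assuming the CRDs installed by the ingress controller; the hostname ngx.example.com is hypothetical):
cat <<EOF | kubectl apply -f -
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: ngx-route
spec:
  http:
  - name: ngx
    match:
      hosts:
      - ngx.example.com        # hypothetical host used for illustration
      paths:
      - /*
    backends:
    - serviceName: ngx
      servicePort: 80
EOF
# test from any node (the gateway listens on hostPort 80):
curl -I -H 'Host: ngx.example.com' http://192.168.199.101/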