Environment this article depends on: a Kubernetes cluster deployed on CentOS 7, with the skyDNS service deployed on top of that cluster.
In this example we will create one redis-master, two redis-slaves, and three frontends. The slaves replicate the master's data in real time; the frontends write data to the master and then read it back from the slaves. All inter-service calls (a slave locating the master to sync data, a frontend locating the master to write, a frontend locating a slave to read, and so on) are resolved through DNS.
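The DNS-based discovery described above can be sketched as follows. This is a minimal illustration, assuming the conventional Cluster DNS naming scheme with zone `cluster.local` and namespace `default`; neither value is taken from this cluster's actual configuration:

```python
def service_fqdn(service, namespace="default", zone="cluster.local"):
    """Build the DNS record that Cluster DNS registers for a Service.

    Pods in the same namespace can use the bare Service name; the
    resolver's search path expands it to this fully qualified form.
    """
    return f"{service}.{namespace}.svc.{zone}"

# The three Services created in this walkthrough:
for svc in ("redis-master", "redis-slave", "frontend"):
    print(service_fqdn(svc))
```

So when a frontend pod connects to the host `redis-master`, the lookup ultimately resolves `redis-master.default.svc.cluster.local` to the Service's cluster IP.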
1. Preparation
1.1 Image preparation
This example depends on the following images; please prepare them in advance:
docker.io/redis:latest                                        1a8a9ee54eb7
registry.access.redhat.com/rhel7/pod-infrastructure:latest    34d3450d733b
gcr.io/google_samples/gb-frontend:v3                          c038466384ab
gcr.io/google_samples/gb-redisslave:v1                        5f026ddffa27
1.2 Environment preparation
A running Kubernetes environment with Cluster DNS is required, as shown below:
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     7d
k8s-node-2   Ready     7d

[root@k8s-master ~]# kubectl get deployment --all-namespaces
NAMESPACE     NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns                      1         1         1            1           5d
kube-system   kubernetes-dashboard-latest   1         1         1            1           6d
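When scripting this kind of readiness check, the plain-text table from `kubectl get nodes` can be parsed directly. A minimal sketch, with the sample text copied from the transcript above:

```python
def ready_nodes(kubectl_output):
    """Return the names of nodes whose STATUS column is exactly 'Ready'.

    Expects the plain-text table printed by `kubectl get nodes`:
    a header line followed by one whitespace-separated row per node.
    """
    rows = kubectl_output.strip().splitlines()[1:]  # skip the header line
    return [cols[0] for cols in (r.split() for r in rows)
            if len(cols) >= 2 and cols[1] == "Ready"]

sample = """\
NAME         STATUS    AGE
k8s-node-1   Ready     7d
k8s-node-2   Ready     7d
"""
print(ready_nodes(sample))  # ['k8s-node-1', 'k8s-node-2']
```

Note that `kubectl` also supports structured output (`-o json`), which is more robust than column parsing for anything beyond a quick check.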
2. Running redis-master
2.1 YAML files
1)redis-master-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: redis
        ports:
        - containerPort: 6379
2)redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
  # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master
2.2 Creating the RC and Service
Run on the master:
[root@k8s-master yaml]# kubectl create -f redis-master-controller.yaml
replicationcontroller "redis-master" created
[root@k8s-master yaml]# kubectl create -f redis-master-service.yaml
service "redis-master" created
[root@k8s-master yaml]# kubectl get rc
NAME           DESIRED   CURRENT   READY     AGE
redis-master   1         1         1         1d
[root@k8s-master yaml]# kubectl get pod
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-5wyku   1/1       Running   0          1d
3. Running redis-slave
3.1 YAML files
1)redis-slave-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: worker
        image: gcr.io/google_samples/gb-redisslave:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 6379
2)redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
  - port: 6379
  selector:
    name: redis-slave
3.2 Creating the RC and Service
Run on the master:
[root@k8s-master yaml]# kubectl create -f redis-slave-controller.yaml
replicationcontroller "redis-slave" created
[root@k8s-master yaml]# kubectl create -f redis-slave-service.yaml
service "redis-slave" created
[root@k8s-master yaml]# kubectl get rc
NAME           DESIRED   CURRENT   READY     AGE
redis-master   1         1         1         1d
redis-slave    2         2         2         44m
[root@k8s-master yaml]# kubectl get pod
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-5wyku   1/1       Running   0          1d
redis-slave-7h295    1/1       Running   0          44m
redis-slave-r355y    1/1       Running   0          44m
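The `GET_HOSTS_FROM=dns` setting in redis-slave-controller.yaml tells the gb-redisslave image to reach the master by its Service name rather than by injected environment variables. A sketch of that decision in Python, as an illustration only: the real image implements it in its startup script, and `REDIS_MASTER_SERVICE_HOST` is the variable Kubernetes injects into pods for the redis-master Service:

```python
def master_host(env):
    """Pick the redis-master address the way the guestbook images do.

    With GET_HOSTS_FROM=dns, the Service name 'redis-master' is used
    directly and Cluster DNS resolves it. Otherwise fall back to the
    Service environment variable Kubernetes injects into every pod.
    """
    if env.get("GET_HOSTS_FROM") == "dns":
        return "redis-master"
    return env.get("REDIS_MASTER_SERVICE_HOST", "localhost")

print(master_host({"GET_HOSTS_FROM": "dns"}))  # redis-master
# Fallback path, using the redis-master Service IP from this cluster:
print(master_host({"REDIS_MASTER_SERVICE_HOST": "10.254.132.210"}))
```

The DNS path is preferred because Service environment variables are only injected into pods created after the Service exists, while DNS works regardless of creation order.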
4. Running frontend
4.1 YAML files
1)frontend-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: gcr.io/google_samples/gb-frontend:v3
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
2)frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: frontend
4.2 Creating the RC and Service
Run on the master:
[root@k8s-master yaml]# kubectl create -f frontend-controller.yaml
replicationcontroller "frontend" created
[root@k8s-master yaml]# kubectl create -f frontend-service.yaml
service "frontend" created
[root@k8s-master yaml]# kubectl get rc
NAME           DESIRED   CURRENT   READY     AGE
frontend       3         3         3         28m
redis-master   1         1         1         1d
redis-slave    2         2         2         44m
[root@k8s-master yaml]# kubectl get pod
NAME                 READY     STATUS    RESTARTS   AGE
frontend-ax654       1/1       Running   0          29m
frontend-k8caj       1/1       Running   0          29m
frontend-x6bhl       1/1       Running   0          29m
redis-master-5wyku   1/1       Running   0          1d
redis-slave-7h295    1/1       Running   0          44m
redis-slave-r355y    1/1       Running   0          44m
[root@k8s-master yaml]# kubectl get service
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
frontend       10.254.93.91     <nodes>       80/TCP    47m
kubernetes     10.254.0.1       <none>        443/TCP   7d
redis-master   10.254.132.210   <none>        6379/TCP  1d
redis-slave    10.254.104.23    <none>        6379/TCP  1h
4.3 Verifying in the browser
At this point, Guestbook is running inside Kubernetes, but it cannot be reached from outside the cluster through the frontend Service's IP 10.254.93.91. A Service's virtual IP belongs to Kubernetes' internal virtual network and is not routable from external networks, so a layer of external-to-internal forwarding is needed. This example uses NodePort: when frontend-service was created, nodePort: 30001 was set, so Kubernetes opens that port (the NodePort) on every node, and the actual service can be reached from outside through any node's IP on that port.
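The NodePort mechanics can be summed up in a small helper. A sketch only: the node IP in the example is hypothetical, and 30000-32767 is kube-apiserver's default `--service-node-port-range`, which a cluster operator may have changed:

```python
def frontend_url(node_ip, node_port=30001):
    """Build the externally reachable URL for the frontend Service.

    node_ip may be any cluster node's address, since kube-proxy opens
    the NodePort on every node; 30001 is the nodePort set above.
    """
    # Default NodePort range enforced by kube-apiserver.
    if not 30000 <= node_port <= 32767:
        raise ValueError("nodePort outside the default 30000-32767 range")
    return f"http://{node_ip}:{node_port}"

# Hypothetical node address, for illustration only:
print(frontend_url("192.168.1.10"))  # http://192.168.1.10:30001
```

Opening that URL in a browser against a real node's IP should show the Guestbook page.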