Microservices in Practice: From Code to Deployment (Part 1)

Published by kevwan on 2021-01-10

Microservices have become the mainstream architecture for server-side development, and Go is increasingly popular with developers thanks to its ease of learning, built-in concurrency, fast compilation, and small memory footprint. This hands-on microservices series approaches the topic from a practical angle: using a "blog system" as the running example, we will build a complete microservice system step by step, from the basics onward.

This is the first article in the series. We will build a continuous integration and automated build/release system for microservices based on go-zero + gitlab + jenkins + k8s. First, a brief introduction to each component:

  • go-zero is a web and rpc framework that bundles a wide range of engineering practices. Its resilient design keeps highly concurrent services stable, and it has been thoroughly battle-tested
  • gitlab is a fully integrated, Git-based software development platform; it also provides a wiki, online editing, issue tracking, CI/CD, and more
  • jenkins is a Java-based continuous integration tool for monitoring continuous, repetitive work; it aims to provide an open, easy-to-use software platform that makes continuous integration possible
  • kubernetes, commonly abbreviated K8s, is an open-source system for automatically deploying, scaling, and managing containerized applications. It was designed by Google and donated to the Cloud Native Computing Foundation (now part of the Linux Foundation). It aims to provide "a platform for automating deployment, scaling, and operation of application containers across clusters of hosts"

devops1

The walkthrough consists of five steps, each of which is explained in detail below:

  1. Environment setup. I use two ubuntu16.04 servers, with gitlab and jenkins installed on them respectively, plus an elastic k8s cluster from xxx cloud
  2. Generate the project. I use the goctl tool that ships with go-zero to generate the project quickly, then make small changes to it for easier testing
  3. Generate the Dockerfile and k8s deployment files. k8s deployment files are tedious to write and error-prone; goctl can generate both the Dockerfile and the k8s deployment files, which is very convenient
  4. Build the Jenkins Pipeline with declarative syntax: create a Jenkinsfile and version it in gitlab
  5. Finally, test the project and verify that the service works
experiment_step

Environment Setup

First we set up the experiment environment. I use two ubuntu16.04 servers, one with gitlab and one with jenkins. gitlab is installed directly with apt-get; after installation, start the service and check its status. When every component shows run, the service is up. The port is 9090 in this setup, which can be accessed directly in a browser.
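
For reference, a rough sketch of the install steps, assuming the GitLab CE omnibus package and its official apt repository (adapt the repository setup and external_url to your own environment):

# add the GitLab CE package repository (one possible way), then install
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo apt-get install -y gitlab-ce

# set external_url in /etc/gitlab/gitlab.rb (e.g. http://<server-ip>:9090), then apply it
sudo gitlab-ctl reconfigure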

gitlab-ctl start  # start the service

gitlab-ctl status # check the service status

run: alertmanager: (pid 1591) 15442s; run: log: (pid 2087) 439266s
run: gitaly: (pid 1615) 15442s; run: log: (pid 2076) 439266s
run: gitlab-exporter: (pid 1645) 15442s; run: log: (pid 2084) 439266s
run: gitlab-workhorse: (pid 1657) 15441s; run: log: (pid 2083) 439266s
run: grafana: (pid 1670) 15441s; run: log: (pid 2082) 439266s
run: logrotate: (pid 5873) 1040s; run: log: (pid 2081) 439266s
run: nginx: (pid 1694) 15440s; run: log: (pid 2080) 439266s
run: node-exporter: (pid 1701) 15439s; run: log: (pid 2088) 439266s
run: postgres-exporter: (pid 1708) 15439s; run: log: (pid 2079) 439266s
run: postgresql: (pid 1791) 15439s; run: log: (pid 2075) 439266s
run: prometheus: (pid 10763) 12s; run: log: (pid 2077) 439266s
run: puma: (pid 1816) 15438s; run: log: (pid 2078) 439266s
run: redis: (pid 1821) 15437s; run: log: (pid 2086) 439266s
run: redis-exporter: (pid 1826) 15437s; run: log: (pid 2089) 439266s
run: sidekiq: (pid 1835) 15436s; run: log: (pid 2104) 439266s

jenkins is also installed directly with apt-get. Note that java has to be installed before jenkins; the process is straightforward. jenkins listens on port 8080 by default, the default account is admin, and the initial password is stored at /var/lib/jenkins/secrets/initialAdminPassword. During initialization just install the recommended plugins; other plugins can be added later as needed.
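
For reference, a rough sketch of one way to do the install, assuming the distro OpenJDK package and the Jenkins Debian package repository (check the Jenkins documentation for the current repository key and package name):

# jenkins requires a JDK
sudo apt-get install -y openjdk-8-jdk

# add the Jenkins package repository, then install and start the service
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
echo "deb https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt-get update && sudo apt-get install -y jenkins

# read the initial admin password
sudo cat /var/lib/jenkins/secrets/initialAdminPassword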

Setting up a k8s cluster from scratch is fairly involved. Tools such as kubeadm can stand a cluster up quickly, but the result is still some way from a real production-grade cluster, and our services ultimately need to run in production, so here I chose xxx cloud's elastic k8s cluster, version 1.16.9. The advantage of an elastic cluster is pay-as-you-go with no extra charges: once the experiment is finished, the resources can be released immediately with kubectl delete, incurring only a small fee. xxx cloud's k8s cluster also provides friendly monitoring pages where various statistics can be viewed. After the cluster is created, we need to create cluster access credentials before we can access it

  • If the current client has no cluster access credentials configured yet, i.e. ~/.kube/config is empty, simply paste the credential content into ~/.kube/config

  • If the current client already has credentials for other clusters configured, merge the credentials with the following commands

    KUBECONFIG=~/.kube/config:~/Downloads/k8s-cluster-config kubectl config view --merge --flatten > ~/.kube/config
    export KUBECONFIG=~/.kube/config
    

With access configured, the current cluster can be checked with the following command

kubectl config current-context

Check the cluster version; the output is as follows

kubectl version

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.9", GitCommit:"a17149e1a189050796ced469dbd78d380f2ed5ef", GitTreeState:"clean", BuildDate:"2020-04-16T11:44:51Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.9-eks.2", GitCommit:"f999b99a13f40233fc5f875f0607448a759fc613", GitTreeState:"clean", BuildDate:"2020-10-09T12:54:13Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

At this point the experiment environment is fully set up. github can also be used for version management here instead of gitlab.

Generating the Project

The whole project uses a monorepo; the directory structure is shown below. The top-level project is named blog, and the app directory holds the microservices split by business domain. The user service, for example, is further divided into an api service and an rpc service: the api service is the aggregation gateway that exposes restful interfaces, while the rpc service handles internal communication and provides high-performance operations such as data caching.

   ├── blog
   │   ├── app
   │   │   ├── user
   │   │   │   ├── api
   │   │   │   └── rpc
   │   │   ├── article
   │   │   │   ├── api
   │   │   │   └── rpc
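
The layout can be created in one go, for example (assuming a bash shell):

mkdir -p blog/app/{user,article}/{api,rpc}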

Once the project directories are created, go into the api directory and create a user.api file with the following content. It defines the service port as 2233 and a /user/info endpoint

type UserInfoRequest struct {
	Uid int64 `form:"uid"`
}

type UserInfoResponse struct {
	Uid   int64  `json:"uid"`
	Name  string `json:"name"`
	Level int    `json:"level"`
}

@server(
	port: 2233
)
service user-api {
	@doc(
		summary: get user info
	)
	@server(
		handler:  UserInfo
	)
	get /user/info(UserInfoRequest) returns(UserInfoResponse)
}

With the api file in place, run the following command to generate the api service code. One-command generation is a real productivity boost

goctl api go -api user.api -dir .
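
For reference, the generated layout of the api service looks roughly like this (a sketch based on goctl's default output; file names may differ slightly between goctl versions):

   api
   ├── etc
   │   └── user-api.yaml           # service configuration
   ├── internal
   │   ├── config
   │   │   └── config.go           # config struct loaded from the yaml file
   │   ├── handler
   │   │   ├── routes.go           # route registration
   │   │   └── userinfohandler.go  # http handler for /user/info
   │   ├── logic
   │   │   └── userinfologic.go    # business logic, the file modified below
   │   ├── svc
   │   │   └── servicecontext.go   # shared service dependencies
   │   └── types
   │       └── types.go            # request/response types from user.api
   └── user.go                     # entry point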

After the code is generated, we tweak it slightly so the service is easy to verify once deployed: the modified code returns the IP address of the machine it runs on

func (ul *UserInfoLogic) UserInfo(req types.UserInfoRequest) (*types.UserInfoResponse, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	var name string
	// pick a non-loopback IPv4 address of the host (the last match wins)
	for _, addr := range addrs {
		if ipnet, ok := addr.(*net.IPNet); ok && !ipnet.IP.IsLoopback() && ipnet.IP.To4() != nil {
			name = ipnet.IP.String()
		}
	}

	return &types.UserInfoResponse{
		Uid:   req.Uid,
		Name:  name,
		Level: 666,
	}, nil
}
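
Since the image built later starts the service with -f etc/user-api.yaml, here is roughly what that generated config file contains (a sketch of goctl's default output for this api definition):

Name: user-api
Host: 0.0.0.0
Port: 2233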

That completes the service generation part. Because this article focuses on setting up the basic skeleton, we only added some test code; the project code will be fleshed out in later articles

Generating the Image and Deployment Files

Common images such as mysql and memcache can be pulled directly from an image registry, but our own service image needs to be customized. There are several ways to build a custom image, and a Dockerfile is the most widely used. Writing a Dockerfile is not hard, but it is easy to get wrong, so here we lean on tooling again. goctl deserves another shout-out here: it can also generate a Dockerfile with one command. Run the following in the api directory

goctl docker -go user.go

The generated file is adjusted slightly to match our directory structure; the content is shown below. It uses a two-stage build: the first stage builds the executable so the build does not depend on the host, and the second stage copies the result of the first stage, producing a minimal final image

FROM golang:alpine AS builder

LABEL stage=gobuilder

ENV CGO_ENABLED 0
ENV GOOS linux
ENV GOPROXY https://goproxy.cn,direct

WORKDIR /build/zero

RUN go mod init blog/app/user/api
RUN go mod download
COPY . .
COPY /etc /app/etc
RUN go build -ldflags="-s -w" -o /app/user user.go


FROM alpine

RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
ENV TZ Asia/Shanghai

WORKDIR /app
COPY --from=builder /app/user /app/user
COPY --from=builder /app/etc /app/etc

CMD ["./user", "-f", "etc/user-api.yaml"]

Then build the image with the following command

docker build -t user:v1 app/user/api/

Running docker images now shows that the user image has been created, with tag v1

REPOSITORY                            TAG                 IMAGE ID            CREATED             SIZE
user                                  v1                  1c1f64579b40        4 days ago          17.2MB

Similarly, k8s deployment files are complex and error-prone to write by hand, so we generate them with goctl as well. Run the following in the api directory

goctl kube deploy -name user-api -namespace blog -image user:v1 -o user.yaml -port 2233

The generated yaml file is as follows

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
  namespace: blog
  labels:
    app: user-api
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      containers:
      - name: user-api
        image: user:v1
        lifecycle:
          preStop:
            exec:
              command: ["sh","-c","sleep 5"]
        ports:
        - containerPort: 2233
        readinessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 15
          periodSeconds: 10
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1024Mi

That completes the image and k8s deployment-file generation. The steps above are mainly for demonstration; in a real production environment images are built automatically by the continuous integration tooling

Jenkins Pipeline

jenkins is a commonly used continuous integration tool. It offers several ways to define builds, and the pipeline is one of the most popular. Pipelines support two flavors of syntax, declarative and scripted. Scripted syntax is flexible and extensible, but that also makes it more complex and requires learning Groovy, which raises the learning curve; that is why declarative syntax was introduced. Declarative syntax is simpler and more structured, and it is what we will use from here on.
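
The skeleton of a declarative pipeline looks roughly like this (a minimal sketch; the complete Jenkinsfile for this project appears later in this section):

pipeline {
    agent any                 // run on any available executor
    stages {
        stage('example') {    // each stage groups a set of steps
            steps {
                echo 'hello from a declarative pipeline'
            }
        }
    }
}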

A quick note on the Jenkinsfile: a Jenkinsfile is simply a plain text file, the concrete form that the deployment-pipeline concept takes in Jenkins, much like a Dockerfile is to Docker. All of the pipeline logic can be defined in the Jenkinsfile. Note that Jenkins does not support Jenkinsfiles out of the box; the Pipeline plugin must be installed. The install flow is Manage Jenkins -> Manage Plugins, then search for the plugin and install it. After that we can build pipelines

pipeline_build

We could type the build script directly into the pipeline UI, but then it would not be version-controlled, so this is not recommended except for throwaway tests. The more common approach is to let jenkins pull the Jenkinsfile from a git repository and run it

First install the Git plugin. The code is pulled via ssh clone, so the git private key has to be added to jenkins; only then does jenkins have permission to pull code from the git repository

To add the git private key to jenkins: Manage Jenkins -> Manage credentials -> add credentials, choose the type SSH Username with private key, and follow the prompts, as shown below

gitlab_tokens

Then create a new project in our gitlab that contains only a Jenkinsfile

gitlab_jenkinsfile

In the user-api project, set the pipeline definition to Pipeline script from SCM, and fill in the gitlab ssh address and the corresponding credential, as shown below

jenkinsfile_responsitory

Now we can write the Jenkinsfile following the hands-on steps above

  • Pull the code from gitlab: pull the code from our gitlab repository and use the commit_id to distinguish between versions

    stage('pull code from gitlab') {
    	steps {
    		echo 'pull code from gitlab'
    		git credentialsId: 'xxxxxxxx', url: 'http://xxx.xxx.xxx.xxx:xxx/blog/blog.git'
    		script {
    		    commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    		}
    	}
    }
    
  • Build the docker image: build the image using the Dockerfile generated by goctl

    stage('build image') {
        steps {
            echo 'build image'
            sh "docker build -t user:${commit_id} app/user/api/"
        }
    }
    
  • Push the image to the image registry: push the freshly built image to the registry

    stage('push image to registry') {
        steps {
            echo "push image to registry"
            sh "docker login -u xxx -p xxxxxxx"
            sh "docker tag user:${commit_id} xxx/user:${commit_id}"
            sh "docker push xxx/user:${commit_id}"
        }
    }
    
  • Deploy to k8s: substitute the version tag in the deployment file, pull the image from the registry, and deploy with kubectl apply. Note that for the sed substitution below to take effect, the image field in user.yaml should reference the registry image with the placeholder, i.e. image: xxx/user:<COMMIT_ID_TAG>, instead of the fixed user:v1 tag generated earlier

    stage('deploy to k8s') {
        steps {
            echo "deploy to k8s"
            sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
            sh "cp app/user/api/user.yaml ."
            sh "kubectl apply -f user.yaml"
        }
    }
    

The complete Jenkinsfile is as follows

    pipeline {
    	agent any
    
    	stages {
    		stage('pull code from gitlab') {
    			steps {
    				echo 'pull code from gitlab'
    				git credentialsId: 'xxxxxx', url: 'http://xxx.xxx.xxx.xxx:9090/blog/blog.git'
    				script {
    				    commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    				}
    			}
    		}
    		stage('build image') {
    		    steps {
    		        echo 'build image'
    		        sh "docker build -t user:${commit_id} app/user/api/"
    		    }
    		}
    		stage('push image to registry') {
    		    steps {
    		        echo "push image to registry"
    		        sh "docker login -u xxx -p xxxxxxxx"
    		        sh "docker tag user:${commit_id} xxx/user:${commit_id}"
    		        sh "docker push xxx/user:${commit_id}"
    		    }
    		}
    		stage('deploy to k8s') {
    		    steps {
    		        echo "deploy to k8s"
                    sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
                    sh "kubectl apply -f app/user/api/user.yaml"
    		    }
    		}
    	}
    }
    

That is essentially all of the configuration, and the basic skeleton is now in place. Next we run the pipeline: click Build Now on the left, and a build with a sequence number appears under Build History. Click that number and then Console Output on the left to see the detailed output of the build; if the build fails, the errors show up here as well

    buid_step

The detailed build output is shown below; every stage of the pipeline has its own detailed output

Started by user admin
Obtained Jenkinsfile from git git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/user-api
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
The recommended git tool is: NONE
using credential gitlab_token
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git # timeout=10
Fetching upstream changes from git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
using GIT_SSH to set credentials 
 > git fetch --tags --progress git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 77eac3a4ca1a5b6aea705159ce26523ddd179bdf (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
Commit message: "add"
 > git rev-list --no-walk 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (pull code from gitlab)
[Pipeline] echo
pull code from gitlab
[Pipeline] git
The recommended git tool is: NONE
using credential gitlab_user_pwd
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://xxx.xxx.xxx.xxx:9090/blog/blog.git # timeout=10
Fetching upstream changes from http://xxx.xxx.xxx.xxx:9090/blog/blog.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
using GIT_ASKPASS to set credentials 
 > git fetch --tags --progress http://xxx.xxx.xxx.xxx:9090/blog/blog.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision b757e9eef0f34206414bdaa4debdefec5974c3f5 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
 > git branch -a -v --no-abbrev # timeout=10
 > git branch -D master # timeout=10
 > git checkout -b master b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
Commit message: "Merge branch 'blog/dev' into 'master'"
 > git rev-list --no-walk b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ git rev-parse --short HEAD
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (build image)
[Pipeline] echo
build image
[Pipeline] sh
+ docker build -t user:b757e9e app/user/api/
Sending build context to Docker daemon  28.16kB

Step 1/18 : FROM golang:alpine AS builder
alpine: Pulling from library/golang
801bfaa63ef2: Pulling fs layer
ee0a1ba97153: Pulling fs layer
1db7f31c0ee6: Pulling fs layer
ecebeec079cf: Pulling fs layer
63b48972323a: Pulling fs layer
ecebeec079cf: Waiting
63b48972323a: Waiting
1db7f31c0ee6: Verifying Checksum
1db7f31c0ee6: Download complete
ee0a1ba97153: Verifying Checksum
ee0a1ba97153: Download complete
63b48972323a: Verifying Checksum
63b48972323a: Download complete
801bfaa63ef2: Verifying Checksum
801bfaa63ef2: Download complete
801bfaa63ef2: Pull complete
ee0a1ba97153: Pull complete
1db7f31c0ee6: Pull complete
ecebeec079cf: Verifying Checksum
ecebeec079cf: Download complete
ecebeec079cf: Pull complete
63b48972323a: Pull complete
Digest: sha256:49b4eac11640066bc72c74b70202478b7d431c7d8918e0973d6e4aeb8b3129d2
Status: Downloaded newer image for golang:alpine
 ---> 1463476d8605
Step 2/18 : LABEL stage=gobuilder
 ---> Running in c4f4dea39a32
Removing intermediate container c4f4dea39a32
 ---> c04bee317ea1
Step 3/18 : ENV CGO_ENABLED 0
 ---> Running in e8e848d64f71
Removing intermediate container e8e848d64f71
 ---> ff82ee26966d
Step 4/18 : ENV GOOS linux
 ---> Running in 58eb095128ac
Removing intermediate container 58eb095128ac
 ---> 825ab47146f5
Step 5/18 : ENV GOPROXY https://goproxy.cn,direct
 ---> Running in df2add4e39d5
Removing intermediate container df2add4e39d5
 ---> c31c1aebe5fa
Step 6/18 : WORKDIR /build/zero
 ---> Running in f2a1da3ca048
Removing intermediate container f2a1da3ca048
 ---> 5363d05f25f0
Step 7/18 : RUN go mod init blog/app/user/api
 ---> Running in 11d0adfa9d53
go: creating new go.mod: module blog/app/user/api
Removing intermediate container 11d0adfa9d53
 ---> 3314852f00fe
Step 8/18 : RUN go mod download
 ---> Running in aa9e9d9eb850
Removing intermediate container aa9e9d9eb850
 ---> a0f2a7ffe392
Step 9/18 : COPY . .
 ---> a807f60ed250
Step 10/18 : COPY /etc /app/etc
 ---> c4c5d9f15dc0
Step 11/18 : RUN go build -ldflags="-s -w" -o /app/user user.go
 ---> Running in a4321c3aa6e2
go: finding module for package github.com/tal-tech/go-zero/core/conf
go: finding module for package github.com/tal-tech/go-zero/rest/httpx
go: finding module for package github.com/tal-tech/go-zero/rest
go: finding module for package github.com/tal-tech/go-zero/core/logx
go: downloading github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/conf in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest/httpx in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/logx in github.com/tal-tech/go-zero v1.1.1
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/justinas/alice v1.2.0
go: downloading github.com/dgrijalva/jwt-go v3.2.0+incompatible
go: downloading go.uber.org/automaxprocs v1.3.0
go: downloading github.com/spaolacci/murmur3 v1.1.0
go: downloading github.com/google/uuid v1.1.1
go: downloading google.golang.org/grpc v1.29.1
go: downloading github.com/prometheus/client_golang v1.5.1
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/golang/protobuf v1.4.2
go: downloading github.com/prometheus/common v0.9.1
go: downloading github.com/cespare/xxhash/v2 v2.1.1
go: downloading github.com/prometheus/client_model v0.2.0
go: downloading github.com/prometheus/procfs v0.0.8
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: downloading google.golang.org/protobuf v1.25.0
Removing intermediate container a4321c3aa6e2
 ---> 99ac2cd5fa39
Step 12/18 : FROM alpine
latest: Pulling from library/alpine
801bfaa63ef2: Already exists
Digest: sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
Status: Downloaded newer image for alpine:latest
 ---> 389fef711851
Step 13/18 : RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
 ---> Running in 51694dcb96b6
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
v3.12.3-38-g9ff116e4f0 [http://dl-cdn.alpinelinux.org/alpine/v3.12/main]
v3.12.3-39-ge9195171b7 [http://dl-cdn.alpinelinux.org/alpine/v3.12/community]
OK: 12746 distinct packages available
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/2) Installing ca-certificates (20191127-r4)
(2/2) Installing tzdata (2020f-r0)
Executing busybox-1.31.1-r19.trigger
Executing ca-certificates-20191127-r4.trigger
OK: 10 MiB in 16 packages
Removing intermediate container 51694dcb96b6
 ---> e5fb2e4d5eea
Step 14/18 : ENV TZ Asia/Shanghai
 ---> Running in 332fd0df28b5
Removing intermediate container 332fd0df28b5
 ---> 11c0e2e49e46
Step 15/18 : WORKDIR /app
 ---> Running in 26e22103c8b7
Removing intermediate container 26e22103c8b7
 ---> 11d11c5ea040
Step 16/18 : COPY --from=builder /app/user /app/user
 ---> f69f19ffc225
Step 17/18 : COPY --from=builder /app/etc /app/etc
 ---> b8e69b663683
Step 18/18 : CMD ["./user", "-f", "etc/user-api.yaml"]
 ---> Running in 9062b0ed752f
Removing intermediate container 9062b0ed752f
 ---> 4867b4994e43
Successfully built 4867b4994e43
Successfully tagged user:b757e9e
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (push image to registry)
[Pipeline] echo
push image to registry
[Pipeline] sh
+ docker login -u xxx -p xxxxxxxx
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[Pipeline] sh
+ docker tag user:b757e9e xxx/user:b757e9e
[Pipeline] sh
+ docker push xxx/user:b757e9e
The push refers to repository [docker.io/xxx/user]
b19a970f64b9: Preparing
f695b957e209: Preparing
ee27c5ca36b5: Preparing
7da914ecb8b0: Preparing
777b2c648970: Preparing
777b2c648970: Layer already exists
ee27c5ca36b5: Pushed
b19a970f64b9: Pushed
7da914ecb8b0: Pushed
f695b957e209: Pushed
b757e9e: digest: sha256:6ce02f8a56fb19030bb7a1a6a78c1a7c68ad43929ffa2d4accef9c7437ebc197 size: 1362
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (deploy to k8s)
[Pipeline] echo
deploy to k8s
[Pipeline] sh
+ sed -i s/<COMMIT_ID_TAG>/b757e9e/ app/user/api/user.yaml
[Pipeline] sh
+ kubectl apply -f app/user/api/user.yaml
deployment.apps/user-api created
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

The SUCCESS at the end shows that the pipeline has succeeded. We can now check the result with kubectl; the -n flag specifies the namespace

kubectl get pods -n blog

NAME                       READY   STATUS    RESTARTS   AGE
user-api-84ffd5b7b-c8c5w   1/1     Running   0          10m
user-api-84ffd5b7b-pmh92   1/1     Running   0          10m

The k8s deployment file specifies the namespace blog, so the namespace has to be created before the pipeline runs

kubectl create namespace blog

The service is deployed; how do we reach it from outside? Here we use a Service of type LoadBalancer. The Service definition is below: port 80 is mapped to the container's port 2233, and the selector matches the labels defined in the Deployment

apiVersion: v1
kind: Service
metadata:
  name: user-api-service
  namespace: blog
spec:
  selector:
    app: user-api
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 2233

Create the service, then inspect it; the output is shown below. Be sure to add the -n flag to specify the namespace

kubectl apply -f user-service.yaml
kubectl get services -n blog

NAME               TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
user-api-service   LoadBalancer   <none>       xxx.xxx.xxx.xx   80:32470/TCP   79m

The EXTERNAL-IP here is the ip exposed for external access, on port 80

With that, all of the deployment work is done. I encourage you to try the whole thing hands-on as well

Testing

Finally, let's verify that the deployed service works, accessing it through the EXTERNAL-IP

curl "http://xxx.xxx.xxx.xxx:80/user/info?uid=1"

{"uid":1,"name":"172.17.0.5","level":666}

curl http://xxx.xxx.xxx.xxx:80/user/info\?uid\=1

{"uid":1,"name":"172.17.0.8","level":666}

We called the /user/info endpoint twice with curl and both calls returned normally, so the service is working. The name field returned two different ips, showing that the LoadBalancer Service is distributing requests across the two pods

Summary

Above we walked through the full DevOps flow from writing code, through version management, to building and deploying, and finished setting up the basic architecture. Of course, the architecture is still quite bare. In the rest of this series we will gradually flesh out the whole architecture on top of this blog system: refining the CI/CD flow, adding monitoring, completing the blog system's features, high-availability best practices and the principles behind them, and more

To do a good job, one must first sharpen one's tools. Good tools greatly improve our efficiency and reduce the chance of mistakes. We used the goctl tool heavily above and, frankly, it is hard to put down. See you next time!

My abilities are limited, so there are bound to be mistakes in the text; feedback and corrections from readers are welcome!

Project address

https://github.com/tal-tech/go-zero

You are welcome to use go-zero and star it to support us!

More go-zero series articles are available on the '微服務實踐' (Microservice Practice) WeChat official account
