HAProxy Ingress Controller
Introduction to HAProxy Ingress
HAProxy Ingress watches resources in the Kubernetes cluster and builds the HAProxy configuration from them. Much like the NGINX controller, it monitors the Kubernetes API for the state of the pods behind each Service and dynamically updates the haproxy configuration file, providing layer-7 load balancing.
The HAProxy Ingress controller offers the following features:
- Fast: carefully built on top of the battle-tested HAProxy load balancer, so performance is well proven.
- Reliable: trusted by sysadmins on clusters as large as 1,000 namespaces, 2,000 domains and 3,000 Ingress objects.
- Highly customizable: 100+ configuration options and growing.
- Simplifies the infrastructure by routing ingress traffic through a single IP address and port, dispatching requests to the correct pods based on the Host header and request path.
- Built on HAProxy, one of the fastest and most widely used software load balancers, which sets a high bar for performance, reliability and security.
- Protects the cluster with built-in SSL termination, rate limiting and IP whitelisting.
- Balances traffic across pods using any of HAProxy's load-balancing algorithms, including round robin, least connections, URL hash and random.
- Excellent layer-7 observability out of the box helps catch problems early: HAProxy ships with a dashboard showing pod health, current request rates, response times and more.
- HAProxy's overload protection yields higher throughput: servers never receive more requests than they can handle.
HAProxy Ingress controller editions
- Community edition: a controller built on community HAProxy and heavily customized for Ingress; its functionality is comparatively limited.
- Enterprise edition: based on HAProxy Enterprise; most of the advanced features are only available here.
Installing the HAProxy Ingress controller
Installing HAProxy Ingress is fairly straightforward: the project publishes an installation YAML file. Download it and review the Kubernetes resources it defines (a quick way to list the resource kinds is sketched after this list); it contains:
- ServiceAccount, bound to the RBAC objects below
- RBAC objects: Role, ClusterRole, RoleBinding and ClusterRoleBinding
- Service (plus a Deployment) for the default backend
- DaemonSet: the core HAProxy controller, referencing the ServiceAccount and the ConfigMap; it defines a nodeSelector on the label role: ingress-controller, so it only runs on nodes carrying that label
- ConfigMap, used to customize the HAProxy Ingress configuration
Installation manifest: https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
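One way to download the manifest and count the resource kinds it defines (a quick sketch using standard shell tools):
wget https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
grep -E '^kind:' haproxy-ingress.yaml | sort | uniq -c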
1. Create the namespace. HAProxy Ingress is deployed into the ingress-controller namespace, so create it first:
[root@node-1 ~]# kubectl create namespace ingress-controller
namespace/ingress-controller created
[root@node-1 ~]# kubectl get namespaces ingress-controller -o yaml
apiVersion: v1
kind: Namespace
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"name":"ingress-controller"}}
creationTimestamp: "2020-08-11T13:32:47Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:status:
f:phase: {}
manager: kubectl
operation: Update
time: "2020-08-11T13:33:49Z"
name: ingress-controller
resourceVersion: "917773"
selfLink: /api/v1/namespaces/ingress-controller
uid: ba938974-cdde-4c2c-baf0-c8fb1f4d20b1
spec:
finalizers:
- kubernetes
status:
phase: Active
2. Install the HAProxy Ingress controller:
[root@node-1 ~]# wget https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
[root@node-1 ~]# kubectl apply -f haproxy-ingress.yaml
serviceaccount/ingress-controller created
clusterrole.rbac.authorization.k8s.io/ingress-controller created
role.rbac.authorization.k8s.io/ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/ingress-controller created
rolebinding.rbac.authorization.k8s.io/ingress-controller created
deployment.apps/ingress-default-backend created
service/ingress-default-backend created
configmap/haproxy-ingress created
daemonset.apps/haproxy-ingress created
3. Check the installation. Looking at the core DaemonSet, no pods have been scheduled yet: the manifest defines a nodeSelector (it can be inspected with the command shown after the output below), so matching labels must first be added to the nodes.
[root@node-1 ~]# kubectl get daemonsets -n ingress-controller
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
haproxy-ingress 0 0 0 0 0 role=ingress-controller 5m51s
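The nodeSelector the DaemonSet expects can be confirmed directly from the API (a sketch):
kubectl -n ingress-controller get daemonset haproxy-ingress -o jsonpath='{.spec.template.spec.nodeSelector}'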
4. Label the nodes so that the pods managed by the DaemonSet can be scheduled onto them. In production, dedicate specific nodes to the HAProxy Ingress role and put a load balancer in front of them for a single entry point; since this article only explores HAProxy Ingress itself, no such load balancer is deployed here, so add one as your environment requires. Using node-1 and node-2 as examples:
[root@node-1 ~]# kubectl label node node-1 role=ingress-controller
node/node-1 labeled
[root@node-1 ~]# kubectl label node node-2 role=ingress-controller
node/node-2 labeled
# Check the labels
[root@node-1 ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node-1 Ready master 104d v1.15.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,role=ingress-controller
node-2 Ready <none> 104d v1.15.3 app=web,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2,kubernetes.io/os=linux,label=test,role=ingress-controller
node-3 Ready <none> 104d v1.15.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-3,kubernetes.io/os=linux
5. Check the DaemonSet again: it is now fully deployed, with pods scheduled onto node-1 and node-2.
[root@node-1 ~]# kubectl get daemonsets -n ingress-controller
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
haproxy-ingress 2 2 2 2 2 role=ingress-controller 15m
[root@node-1 ~]# kubectl get pods -n ingress-controller -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
haproxy-ingress-bdns8 1/1 Running 0 2m27s 10.254.100.102 node-2 <none> <none>
haproxy-ingress-d5rnl 1/1 Running 0 2m31s 10.254.100.101 node-1 <none> <none>
6. Check the HAProxy Ingress logs. They show that multiple HAProxy Ingress instances achieve high availability (HA) through leader election (see the grep sketched after the command below).
kubectl logs -n ingress-controller haproxy-ingress-vb9mp
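To see the election messages explicitly, grep the logs of one of the pods listed earlier (a sketch; the exact wording depends on the controller version):
kubectl logs -n ingress-controller haproxy-ingress-bdns8 | grep -i leader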
Check the remaining resources (ServiceAccount, ClusterRole, ConfigMap and so on) separately. At this point the HAProxy Ingress controller deployment is complete; other deployment methods exist but are not covered here.
Using HAProxy Ingress
HAProxy Ingress basics
Once the Ingress controller is deployed, Ingress rules must be defined so the controller can discover the pod resources behind each Service. This section walks through using Ingress with the HAProxy Ingress controller.
1. Prepare the environment: create a Deployment and expose its port through a Service (an imperative alternative is sketched after the manifest).
# Create the application and expose its port
[root@node-1 haproxy-ingress]# cat haproxy-ingress-nginx-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: haproxy-ingress-demo
name: haproxy-ingress-demo
spec:
replicas: 1
selector:
matchLabels:
app: haproxy-ingress-demo
template:
metadata:
labels:
app: haproxy-ingress-demo
spec:
containers:
- image: nginx:1.17.0
name: nginx
---
apiVersion: v1
kind: Service
metadata:
name: haproxy-ingress-demo
namespace: default
spec:
clusterIP: 10.109.197.67
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: haproxy-ingress-demo
type: ClusterIP
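The same Deployment and Service could also be created imperatively, without pinning a clusterIP (a sketch):
kubectl create deployment haproxy-ingress-demo --image=nginx:1.17.0
kubectl expose deployment haproxy-ingress-demo --port=80 --target-port=80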
# Check the Deployment
[root@node-1 haproxy-ingress]# kubectl get deployments haproxy-ingress-demo
NAME READY UP-TO-DATE AVAILABLE AGE
haproxy-ingress-demo 1/1 1 1 10s
# Check the Service
[root@node-1 haproxy-ingress]# kubectl get services haproxy-ingress-demo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-ingress-demo ClusterIP 10.109.197.67 <none> 80/TCP 17s
2. Create the Ingress rule. If several Ingress controllers are installed, the kubernetes.io/ingress.class annotation selects haproxy:
[root@node-1 haproxy-ingress]# cat ingress-demo.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: haproxy-ingress-demo
labels:
ingresscontroller: haproxy
annotations:
kubernetes.io/ingress.class: haproxy
spec:
rules:
- host: www.happylau.cn
http:
paths:
- path: /
backend:
serviceName: haproxy-ingress-demo
servicePort: 80
3. Apply the Ingress rule and check its details; the Events log shows the controller has picked it up and updated its configuration:
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-demo.yaml
ingress.extensions/haproxy-ingress-demo created
# Check the details
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-demo
Name: haproxy-ingress-demo
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
www.happylau.cn
/ haproxy-ingress-demo:80 (10.244.2.166:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"labels":{"ingresscontroller":"haproxy"},"name":"haproxy-ingress-demo","namespace":"default"},"spec":{"rules":[{"host":"www.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-ingress-demo","servicePort":80},"path":"/"}]}}]}}
kubernetes.io/ingress.class: haproxy
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 27s ingress-controller Ingress default/haproxy-ingress-demo
Normal CREATE 27s ingress-controller Ingress default/haproxy-ingress-demo
Normal UPDATE 20s ingress-controller Ingress default/haproxy-ingress-demo
Normal UPDATE 20s ingress-controller Ingress default/haproxy-ingress-demo
4. Test the Ingress rule. You can add the domain to the hosts file (an example follows the output below) or, as here, use curl with --resolve; the address can point at either node-1 or node-2:
[root@node-1 haproxy-ingress]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.101
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
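Adding a hosts entry achieves the same thing without --resolve (a sketch; the IP is one of the labelled ingress nodes from the earlier steps):
echo "10.254.100.101 www.happylau.cn" >> /etc/hosts
curl http://www.happylau.cn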
5. The test works, so next look inside the HAProxy Ingress controller at the configuration file it generated from the rule:
[root@node-1 ~]# kubectl exec -it haproxy-ingress-bdns8 -n ingress-controller /bin/sh
# View the configuration file
/etc/haproxy # cat /etc/haproxy/haproxy.cfg
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# #
# # HAProxy Ingress Controller
# # --------------------------
# # This file is automatically updated, do not edit
# #
# Global section
global
daemon
nbthread 2
cpu-map auto:1/1-2 0-1
stats socket /var/run/haproxy-stats.sock level admin expose-fd listeners
maxconn 2000
hard-stop-after 10m
lua-load /usr/local/etc/haproxy/lua/send-response.lua
lua-load /usr/local/etc/haproxy/lua/auth-request.lua
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
ssl-default-bind-options no-sslv3 no-tls-tickets
# Defaults section
defaults
log global
maxconn 2000
option redispatch
option dontlognull
option http-server-close
option http-keep-alive
timeout client 50s
timeout client-fin 50s
timeout connect 5s
timeout http-keep-alive 1m
timeout http-request 5s
timeout queue 5s
timeout server 50s
timeout server-fin 50s
timeout tunnel 1h
# Backend: linked to the backend pods through the Service discovery mechanism
backend default_haproxy-ingress-demo_80
mode http
balance roundrobin
acl https-request ssl_fc
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
http-response set-header Strict-Transport-Security "max-age=15768000"
server srv001 10.244.2.166:80 weight 1 check inter 2s # address of the backend pod
server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
backend _error404
mode http
http-request use-service lua.send-404
# Frontend listening on port 80, including the HTTPS redirect rule; the host mapping is defined in /etc/haproxy/maps/_global_http_front.map
frontend _front_http
mode http
bind *:80
http-request set-var(req.base) base,lower,regsub(:[0-9]+/,/)
http-request redirect scheme https if { var(req.base),map_beg(/etc/haproxy/maps/_global_https_redir.map,_nomatch) yes }
http-request set-header X-Forwarded-Proto http
http-request del-header X-SSL-Client-CN
http-request del-header X-SSL-Client-DN
http-request del-header X-SSL-Client-SHA1
http-request del-header X-SSL-Client-Cert
http-request set-var(req.backend) var(req.base),map_beg(/etc/haproxy/maps/_global_http_front.map,_nomatch)
use_backend %[var(req.backend)] unless { var(req.backend) _nomatch }
default_backend _default_backend
# Frontend listening on port 443; the hosts are mapped in /etc/haproxy/maps/_front001_host.map
frontend _front001
mode http
bind *:443 ssl alpn h2,http/1.1 crt /ingress-controller/ssl/default-fake-certificate.pem
http-request set-var(req.hostbackend) base,lower,regsub(:[0-9]+/,/),map_beg(/etc/haproxy/maps/_front001_host.map,_nomatch)
http-request set-header X-Forwarded-Proto https
http-request del-header X-SSL-Client-CN
http-request del-header X-SSL-Client-DN
http-request del-header X-SSL-Client-SHA1
http-request del-header X-SSL-Client-Cert
use_backend %[var(req.hostbackend)] unless { var(req.hostbackend) _nomatch }
default_backend _default_backend
# Stats listener
listen stats
mode http
bind *:1936
stats enable
stats uri /
no log
option forceclose
stats show-legends
# Health-check endpoint
frontend healthz
mode http
bind *:10253
monitor-uri /healthz
Check the host-mapping file, which maps each front-end host name to the back-end it forwards to:
/etc/haproxy/maps # cat /etc/haproxy/maps/_global_http_front.map
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# #
# # HAProxy Ingress Controller
# # --------------------------
# # This file is automatically updated, do not edit
# #
#
www.happylau.cn/ default_haproxy-ingress-demo_80
The basic configuration above is all it takes to get HAProxy-based layer-7 load balancing: the HAProxy Ingress controller watches the Kubernetes API, discovers the rules and the endpoints behind each Service, and writes them into haproxy.cfg.
Dynamic updates and load balancing
Backend pods change constantly. Through the Service discovery mechanism, HAProxy Ingress detects pod changes, updates haproxy.cfg, and applies the new configuration dynamically (in practice the HAProxy process does not even need to be restarted). This section demonstrates HAProxy Ingress dynamic updates and load balancing.
1. Dynamic updates. Taking pod scaling as the example, scale the Deployment from replicas=1 to 3:
[root@node-1 ~]# kubectl scale --replicas=3 deployment haproxy-ingress-demo
deployment.extensions/haproxy-ingress-demo scaled
[root@node-1 ~]# kubectl get deployments haproxy-ingress-demo
NAME READY UP-TO-DATE AVAILABLE AGE
haproxy-ingress-demo 3/3 3 3 43m
# Check the pod IP addresses after scaling
[root@node-1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
haproxy-ingress-demo-5d487d4fc-5pgjt 1/1 Running 0 43m 10.244.2.166 node-3 <none> <none>
haproxy-ingress-demo-5d487d4fc-pst2q 1/1 Running 0 18s 10.244.0.52 node-1 <none> <none>
haproxy-ingress-demo-5d487d4fc-sr8tm 1/1 Running 0 18s 10.244.1.149 node-2 <none> <none>
2. Check the HAProxy configuration again: the backend server list has automatically picked up the newly added pod addresses:
backend default_haproxy-ingress-demo_80
mode http
balance roundrobin
acl https-request ssl_fc
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
http-response set-header Strict-Transport-Security "max-age=15768000"
server srv001 10.244.2.166:80 weight 1 check inter 2s
server srv002 10.244.0.52:80 weight 1 check inter 2s # newly added pod
server srv003 10.244.1.149:80 weight 1 check inter 2s # newly added pod
server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
3. Check the HAProxy Ingress logs. They report "HAProxy updated without needing to reload": the change is applied dynamically, without restarting the HAProxy service. Since version 1.8, HAProxy has supported this kind of runtime configuration update, which suits microservice scenarios well (a way to peek at the runtime server state is sketched after the log output; see the references for more detail).
[root@node-1 ~]# kubectl logs haproxy-ingress-bdns8 -n ingress-controller -f
I1227 12:21:11.523066 6 controller.go:274] Starting HAProxy update id=20
I1227 12:21:11.561001 6 instance.go:162] HAProxy updated without needing to reload. Commands sent: 3
I1227 12:21:11.561057 6 controller.go:325] Finish HAProxy update id=20: ingress=0.149764ms writeTmpl=37.738947ms total=37.888711ms
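The reload-free update relies on HAProxy's runtime API, reachable through the admin stats socket configured in the global section. If socat happens to be present in the controller image (an assumption, it is not guaranteed), the dynamically managed servers can be listed directly (a sketch):
kubectl exec -n ingress-controller haproxy-ingress-bdns8 -- sh -c 'echo "show servers state" | socat stdio /var/run/haproxy-stats.sock'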
4. Next, test the load balancing. To make the effect visible, write different content into each pod:
[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-5pgjt /bin/bash
root@haproxy-ingress-demo-5d487d4fc-5pgjt:/# echo "web-1" > /usr/share/nginx/html/index.html
[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-pst2q /bin/bash
root@haproxy-ingress-demo-5d487d4fc-pst2q:/# echo "web-2" > /usr/share/nginx/html/index.html
[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-sr8tm /bin/bash
root@haproxy-ingress-demo-5d487d4fc-sr8tm:/# echo "web-3" > /usr/share/nginx/html/index.html
5. Verify the load balancing. HAProxy is using the round-robin algorithm here, so the rotation across pods is clearly visible (switching to another algorithm is sketched after the output):
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-1
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-2
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-3
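If another algorithm is preferred, HAProxy Ingress exposes a balance-algorithm option that can be set globally in its ConfigMap (a sketch; the key name follows the community documentation and should be verified against your controller version):
kubectl -n ingress-controller patch configmap haproxy-ingress --type merge -p '{"data":{"balance-algorithm":"leastconn"}}'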
This section demonstrated the HAProxy Ingress controller's ability to update its configuration dynamically: unlike the NGINX Ingress controller, it does not need to reload the service process to pick up changes, which is a real advantage in microservice scenarios. It also verified the Ingress load-balancing behaviour with a concrete example.
Name-based virtual hosts
This section demonstrates name-based virtual hosting with HAProxy Ingress: two virtual hosts, news.happylau.cn and sports.happylau.cn, are defined, and their requests are forwarded to haproxy-1 and haproxy-2 respectively.
1. Prepare the test environment: create the two applications, haproxy-1 and haproxy-2, and expose their service ports.
[root@node-1 ~]# cat haproxy-ingress-nginx-test-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: haproxy-1
name: haproxy-1
spec:
replicas: 1
selector:
matchLabels:
app: haproxy-1
template:
metadata:
labels:
app: haproxy-1
spec:
containers:
- image: nginx:1.7.9
name: nginx
---
apiVersion: v1
kind: Service
metadata:
name: haproxy-1
namespace: default
spec:
clusterIP: 10.109.197.67
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: haproxy-1
type: ClusterIP
[root@node-1 ~]# cat haproxy-ingress-nginx-test-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: haproxy-2
name: haproxy-2
spec:
replicas: 1
selector:
matchLabels:
app: haproxy-2
template:
metadata:
labels:
app: haproxy-2
spec:
containers:
- image: nginx:1.7.9
name: nginx
---
apiVersion: v1
kind: Service
metadata:
name: haproxy-2
namespace: default
spec:
clusterIP: 10.109.197.68
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: haproxy-2
type: ClusterIP
Check the Deployments:
[root@node-1 ~]# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
haproxy-1 1/1 1 1 39s
haproxy-2 1/1 1 1 36s
Check the Services:
[root@node-1 ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-1 ClusterIP 10.109.197.67 <none> 80/TCP 55s
haproxy-2 ClusterIP 10.109.197.68 <none> 80/TCP 52s
2. Define the Ingress rule, declaring the two hosts and forwarding each to its own Service:
[root@node-1 ~]# cat ingress-virtualhost.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: haproxy-ingress-virtualhost
annotations:
kubernetes.io/ingress.class: haproxy
spec:
rules:
- host: news.happylau.cn
http:
paths:
- path: /
backend:
serviceName: haproxy-1
servicePort: 80
- host: sports.happylau.cn
http:
paths:
- path: /
backend:
serviceName: haproxy-2
servicePort: 80
# Apply the Ingress rule and list it
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-virtualhost.yaml
ingress.extensions/haproxy-ingress-virtualhost created
[root@node-1 haproxy-ingress]# kubectl get ingresses haproxy-ingress-virtualhost
NAME HOSTS ADDRESS PORTS AGE
haproxy-ingress-virtualhost news.happylau.cn,sports.happylau.cn 80 8s
Check the Ingress rule details:
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-virtualhost
Name: haproxy-ingress-virtualhost
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
news.happylau.cn
/ haproxy-1:80 (10.244.2.168:80)
sports.happylau.cn
/ haproxy-2:80 (10.244.2.169:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"name":"haproxy-ingress-virtualhost","namespace":"default"},"spec":{"rules":[{"host":"news.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-1","servicePort":80},"path":"/"}]}},{"host":"sports.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-2","servicePort":80},"path":"/"}]}}]}}
kubernetes.io/ingress.class: haproxy
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 37s ingress-controller Ingress default/haproxy-ingress-virtualhost
Normal CREATE 37s ingress-controller Ingress default/haproxy-ingress-virtualhost
Normal UPDATE 20s ingress-controller Ingress default/haproxy-ingress-virtualhost
Normal UPDATE 20s ingress-controller Ingress default/haproxy-ingress-virtualhost
3. Test the virtual-host configuration, either by resolving directly with curl or by adding entries to the hosts file, for example:
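A quick curl check against one of the labelled ingress nodes (a sketch; each request should return the page served by the corresponding backend):
curl http://news.happylau.cn --resolve news.happylau.cn:80:10.254.100.101
curl http://sports.happylau.cn --resolve sports.happylau.cn:80:10.254.100.101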
4. Check the configuration file: the frontend and backend sections of haproxy.cfg have been updated accordingly.
Contents of /etc/haproxy/haproxy.cfg:
backend default_haproxy-1_80 # backend for haproxy-1
mode http
balance roundrobin
acl https-request ssl_fc
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
http-response set-header Strict-Transport-Security "max-age=15768000"
server srv001 10.244.2.168:80 weight 1 check inter 2s
server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
# backend for haproxy-2
backend default_haproxy-2_80
mode http
balance roundrobin
acl https-request ssl_fc
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
http-response set-header Strict-Transport-Security "max-age=15768000"
server srv001 10.244.2.169:80 weight 1 check inter 2s
server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
The associated map file:
/ # cat /etc/haproxy/maps/_global_http_front.map
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# #
# # HAProxy Ingress Controller
# # --------------------------
# # This file is automatically updated, do not edit
# #
#
news.happylau.cn/ default_haproxy-1_80
sports.happylau.cn/ default_haproxy-2_80
Automatic HTTP-to-HTTPS redirect
HAProxy Ingress can redirect HTTP to HTTPS. It is configured through annotations: set ingress.kubernetes.io/ssl-redirect (default false) to true to enable the redirect. The same setting can also be written into the controller's ConfigMap to make the redirect the default (a sketch follows); this article uses the annotation.
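Setting the redirect globally through the controller's ConfigMap might look like this (a sketch; the ssl-redirect key follows the community documentation and should be verified against your controller version):
kubectl -n ingress-controller patch configmap haproxy-ingress --type merge -p '{"data":{"ssl-redirect":"true"}}'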
1. Define the Ingress rule and set ingress.kubernetes.io/ssl-redirect to enable the redirect:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: haproxy-ingress-virtualhost
annotations:
kubernetes.io/ingress.class: haproxy
ingress.kubernetes.io/ssl-redirect: "true" # enable the redirect (annotation values must be quoted strings)
spec:
rules:
- host: news.happylau.cn
http:
paths:
- path: /
backend:
serviceName: haproxy-1
servicePort: 80
- host: sports.happylau.cn
http:
paths:
- path: /
backend:
serviceName: haproxy-2
servicePort: 80
Testing with the configuration above did not produce the redirect; the open-source documentation offers little more detail, and since the enterprise image requires licensed access to download, this could not be verified further.
TLS termination
HAProxy Ingress ships with a built-in default (self-signed) certificate; to serve your own hosts over TLS, generate a certificate and reference it from the Ingress through a Secret.
1. Generate a self-signed certificate and private key (a non-interactive variant is sketched after the output):
[root@node-1 haproxy-ingress]# openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt
Generating a 2048 bit RSA private key
...........+++
.......+++
writing new private key to 'tls.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:GD
Locality Name (eg, city) [Default City]:ShenZhen
Organization Name (eg, company) [Default Company Ltd]:Tencent
Organizational Unit Name (eg, section) []:HappyLau
Common Name (eg, your name or your server's hostname) []:www.happylau.cn
Email Address []:573302346@qq.com
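A non-interactive equivalent passes the subject on the command line (a sketch; the -subj fields are just example values):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt -subj "/C=CN/ST=GD/L=ShenZhen/O=Tencent/CN=www.happylau.cn"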
2. Create a TLS Secret holding the certificate and key:
[root@node-1 haproxy-ingress]# kubectl create secret tls haproxy-tls --cert=tls.crt --key=tls.key
secret/haproxy-tls created
[root@node-1 haproxy-ingress]# kubectl describe secrets haproxy-tls
Name: haproxy-tls
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 1424 bytes
tls.key: 1704 bytes
3. Write the Ingress rule, referencing the Secret in the tls section:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: haproxy-ingress-virtualhost
annotations:
kubernetes.io/ingress.class: haproxy
spec:
tls:
- hosts:
- news.happylau.cn
- sports.happylau.cn
secretName: haproxy-tls
rules:
- host: news.happylau.cn
http:
paths:
- path: /
backend:
serviceName: haproxy-1
servicePort: 80
- host: sports.happylau.cn
http:
paths:
- path: /
backend:
serviceName: haproxy-2
servicePort: 80
4. Apply the configuration and check the details; the TLS section now shows which hosts the certificate terminates:
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-virtualhost.yaml
ingress.extensions/haproxy-ingress-virtualhost configured
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-virtualhost
Name: haproxy-ingress-virtualhost
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
haproxy-tls terminates news.happylau.cn,sports.happylau.cn
Rules:
Host Path Backends
---- ---- --------
news.happylau.cn
/ haproxy-1:80 (10.244.2.168:80)
sports.happylau.cn
/ haproxy-2:80 (10.244.2.169:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"name":"haproxy-ingress-virtualhost","namespace":"default"},"spec":{"rules":[{"host":"news.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-1","servicePort":80},"path":"/"}]}},{"host":"sports.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-2","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["news.happylau.cn","sports.happylau.cn"],"secretName":"haproxy-tls"}]}}
kubernetes.io/ingress.class: haproxy
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 37m ingress-controller Ingress default/haproxy-ingress-virtualhost
Normal CREATE 37m ingress-controller Ingress default/haproxy-ingress-virtualhost
Normal UPDATE 7s (x2 over 37m) ingress-controller Ingress default/haproxy-ingress-virtualhost
Normal UPDATE 7s (x2 over 37m) ingress-controller Ingress default/haproxy-ingress-virtualhost
5. Test HTTPS access to the sites; they are now served over TLS.
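With a self-signed certificate, curl needs -k to skip verification (a sketch; the IP is one of the labelled ingress nodes):
curl -k https://news.happylau.cn --resolve news.happylau.cn:443:10.254.100.101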
Closing thoughts
HAProxy Ingress works by rewriting the haproxy.cfg configuration and relying on the Service discovery mechanism to wire Ingress traffic dynamically; compared with NGINX, HAProxy does not need a reload to apply configuration changes. During testing, some features did not behave as expected; richer functionality is available in the HAProxy Ingress enterprise edition, while the community edition supports blue-green deployments and WAF security scanning (see the community documentation on blue-green deployments and WAF support for details).
Community activity around the HAProxy Ingress controller is currently moderate, lagging behind nginx, traefik and istio; for production environments the community edition of HAProxy Ingress is not recommended.
References
Official installation guide: https://haproxy-ingress.github.io/docs/getting-started/
HAProxy Ingress official configuration: https://www.haproxy.com/documentation/hapee/1-7r2/traffic-management/k8s-image-controller/
Reference blog post: https://cloud.tencent.com/developer/article/1564819
Appendix: the installation manifest explained
# ServiceAccount used for RBAC, bound to the roles below
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-controller
namespace: ingress-controller
---
# ClusterRole: the cluster-wide resources the controller may access and the allowed verbs
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: ingress-controller
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
# Role: namespace-scoped permissions
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: ingress-controller
namespace: ingress-controller
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- create
- update
---
# ClusterRoleBinding: binds the ServiceAccount to the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-controller
subjects:
- kind: ServiceAccount
name: ingress-controller
namespace: ingress-controller
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ingress-controller
---
# RoleBinding: binds the ServiceAccount to the Role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: ingress-controller
namespace: ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-controller
subjects:
- kind: ServiceAccount
name: ingress-controller
namespace: ingress-controller
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ingress-controller
---
# Service for the default backend application
apiVersion: v1
kind: Service
metadata:
name: ingress-default-backend
namespace: ingress-controller
spec:
ports:
- port: 8080
selector:
run: ingress-default-backend
---
# HAProxy Ingress ConfigMap, used for custom configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: haproxy-ingress
namespace: ingress-controller
---
# The core HAProxy Ingress DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
namespace: ingress-controller
spec:
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
run: haproxy-ingress
template:
metadata:
labels:
run: haproxy-ingress
spec:
hostNetwork: true # use the host's network namespace
nodeSelector: # schedule only onto nodes carrying this label
role: ingress-controller
serviceAccountName: ingress-controller # ServiceAccount for RBAC
containers:
- name: haproxy-ingress
image: quay.io/jcmoraisjr/haproxy-ingress
args:
- --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
- --configmap=$(POD_NAMESPACE)/haproxy-ingress
- --sort-backends
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: stat
containerPort: 1936
livenessProbe:
httpGet:
path: /healthz
port: 10253
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace