Kubernetes in Docker (KinD) is a tool that runs Kubernetes "nodes" as Docker containers, making it well suited for testing and running Kubernetes clusters locally. For LoadBalancer Services, public clouds provide the functionality out of the box, while private clouds typically rely on tools such as MetalLB or Cilium L2 Announcements to hand out IP addresses.
KinD cannot emulate every cloud provider feature, and Load Balancers in particular are missing, which limits the experience. KinD's suggested workaround is to set up MetalLB yourself, but that configuration is fairly involved, as the sketch below illustrates.
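To get a feel for that friction, here is a minimal sketch of what MetalLB asks for after installation, just to hand out IPs on kind's Docker network. The address range below is an assumption; in practice you have to inspect the kind network yourself and pick a free range, which is exactly the kind of manual step Cloud Provider Kind removes:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250   # assumed free range on kind's default Docker network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - kind-pool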
Cloud Provider Kind#
At KubeCon + CloudNativeCon Europe 2024, a dedicated tool for handling KinD's Load Balancers finally arrived: Cloud Provider Kind. It also lets you exercise cloud-provider behavior, offering a self-contained, low-cost way to test it.
For more details, check out their video introduction.
Limitations on Mac and Windows#
The latest release (v0.4.0) is fully supported on Linux, but on Mac and Windows it requires elevated privileges:
- On Mac, it must be run with sudo
- On Windows, the shell must be started with "Run as administrator"
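On macOS, for example, that means starting the binary we're about to install as root:
sudo cloud-provider-kind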
Hands-on: KinD + Cloud Provider Kind#
Let's give KinD Services real Load Balancer functionality!
Environment#
I'm using a MacBook Pro M2 Max, with Ubuntu 24.04 running in a virtual machine:
- MacBook Pro 16" 2023 (M2 Max)
- Host OS: macOS Sequoia 15.0.1
- Parallels Desktop Pro 20.1.1 (55740)
- Guest OS: Ubuntu 24.04.1 (Noble Numbat) ARM64
- Docker: 27.3.1
- Go: 1.23
- KinD: v0.24.0
- Kubernetes: v1.31.0
To keep the walkthrough short, Go, Docker, KinD, and the kubectl CLI have already been installed.
Download Cloud Provider KIND#
First, download Cloud Provider KIND; the official recommendation is to install it with go install:
go install sigs.k8s.io/cloud-provider-kind@latest
If you run this as a regular user, the binary ends up under ~/go/bin; you can either add that directory to your PATH or install the binary into /usr/local/bin:
sudo install ~/go/bin/cloud-provider-kind /usr/local/bin
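If you would rather extend PATH than copy the binary, something like this works too, and either way you can confirm the command is reachable afterwards:
export PATH="$PATH:$(go env GOPATH)/bin"
command -v cloud-provider-kind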
Creating the KinD cluster#
The kind.yaml file looks like this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cloud-provider-test
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
This cluster has 4 nodes: 1 control plane and 3 workers.
Next, start the cluster:
kind create cluster --config=kind.yaml
After you hit Enter, you'll see output like this:
Creating cluster "cloud-provider-test" ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-cloud-provider-test"
You can now use your cluster with:
kubectl cluster-info --context kind-cloud-provider-test
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Now use kubectl to check that the nodes are healthy:
kubectl get node
Output:
NAME STATUS ROLES AGE VERSION
cloud-provider-test-control-plane Ready control-plane 36s v1.31.0
cloud-provider-test-worker Ready <none> 25s v1.31.0
cloud-provider-test-worker2 Ready <none> 25s v1.31.0
cloud-provider-test-worker3 Ready <none> 25s v1.31.0
Once everything is running, we can move on to Cloud Provider KIND.
Starting Cloud Provider KIND#
The next step is to start Cloud Provider KIND. Open a second terminal window; starting it is as simple as:
cloud-provider-kind
Then just leave it running in the background; once you're done testing, stop it with Ctrl + C.
Testing the Load Balancer#
I'll run the test in two parts. The first uses Agnhost, which is the example Cloud Provider KIND itself provides. The second uses Istio Ingress with a Gateway and a VirtualService to test reverse proxying.
1. Agnhost#
We'll create a Service and a Deployment from Cloud Provider KIND's examples/loadbalancer_etp_cluster.yaml, which contains:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: policy-cluster
  labels:
    app: MyClusterApp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: MyClusterApp
  template:
    metadata:
      labels:
        app: MyClusterApp
    spec:
      containers:
      - name: agnhost
        image: registry.k8s.io/e2e-test-images/agnhost:2.40
        args:
        - netexec
        - --http-port=8080
        - --udp-port=8080
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: lb-service-cluster
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: MyClusterApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
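One aside: the etp in the file name stands for externalTrafficPolicy. With Cluster, any node may accept the traffic and forward it on to the pod. If you want to preserve the client source IP instead, a Local variant looks roughly like the sketch below (lb-service-local is a hypothetical name, not a file from the repo); the trade-off is that only nodes actually hosting a pod receive traffic:
apiVersion: v1
kind: Service
metadata:
  name: lb-service-local   # hypothetical name for this sketch
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserves the client source IP
  selector:
    app: MyClusterApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080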
Once it looks right, apply it to the cluster:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-kind/refs/tags/v0.4.0/examples/loadbalancer_etp_cluster.yaml
After creation you'll see:
deployment.apps/policy-cluster created
service/lb-service-cluster created
You can also check the output of kubectl get service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2m20s
lb-service-cluster LoadBalancer 10.96.172.16 172.18.0.8 80:32709/TCP 25s
Previously, a local Service's External IP would just sit at <pending>; now it actually gets an IP.
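Where does that IP come from? Cloud Provider KIND provisions the load balancer itself as yet another container (an Envoy proxy) on the same Docker network as the kind nodes. You can peek at it like below; the kindccm name prefix is what I observed in my environment, so treat it as an assumption:
docker ps --filter "name=kindccm"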
Next, use curl to verify that the External IP works:
curl http://172.18.0.8/hostname
Output:
policy-cluster-85cd85b758-bnm75
When you're done experimenting, remember to clean up the resources:
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-kind/refs/tags/v0.4.0/examples/loadbalancer_etp_cluster.yaml
Since I want to demonstrate several Load Balancers at once, I'll keep Agnhost around for now and move straight on to the Istio Ingress experiment.
2. Istio Ingress#
This time I'll install Istio Ingress using istioctl:
istioctl install --set profile=default -y
       |\
       | \
       |  \
       |   \
      /||    \
     / ||     \
    /  ||      \
   /   ||       \
  /    ||        \
 /     ||         \
/______||__________\
____________________
  \__       _____/
     \_____/
✔ Istio core installed ⛵️
✔ Istiod installed 🧠
✔ Ingress gateways installed 🛬
✔ Installation complete
Made this installation the default for cluster-wide operations.
Istio installs into the istio-system namespace by default. Check with kubectl get svc -n istio-system:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.96.245.27 172.18.0.9 15021:31224/TCP,80:32426/TCP,443:31925/TCP 29s
istiod ClusterIP 10.96.93.192 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 31s
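Notice that istio-ingressgateway was assigned an External IP (here 172.18.0.9) by Cloud Provider KIND right away. If you want to script the tests that follow, you can capture that IP with a jsonpath query; GATEWAY_IP is just a local shell variable I made up:
GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$GATEWAY_IP"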
Next come three files: service-gateway.yaml, nginx.yaml, and apache.yaml. They route traffic to different services based on hostname.
service-gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: service-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.lab.yjerry.tw"
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - "nginx.lab.yjerry.tw"
  gateways:
  - istio-system/service-gateway
  http:
  - route:
    - destination:
        host: nginx-service
        port:
          number: 80
apache.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache-service
  labels:
    app: apache
spec:
  selector:
    app: apache
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: apache-vs
spec:
  hosts:
  - "apache.lab.yjerry.tw"
  gateways:
  - istio-system/service-gateway
  http:
  - route:
    - destination:
        host: apache-service
        port:
          number: 80
Now apply them to the cluster:
kubectl apply -f service-gateway.yaml -f nginx.yaml -f apache.yaml
Output:
gateway.networking.istio.io/service-gateway created
deployment.apps/nginx-deployment created
service/nginx-service created
virtualservice.networking.istio.io/nginx-vs created
deployment.apps/apache-deployment created
service/apache-service created
virtualservice.networking.istio.io/apache-vs created
Now let's test with curl. The --resolve flag temporarily resolves a hostname to a specified IP.
Usage: curl --resolve <host:port:ip> <URL>
Here host is the hostname, port is the port number, and ip is the IP address the hostname should resolve to.
curl --resolve nginx.lab.yjerry.tw:80:172.18.0.9 http://nginx.lab.yjerry.tw
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Nginx resolves correctly. Next up is the Apache service; add -v to see the details:
curl -v --resolve apache.lab.yjerry.tw:80:172.18.0.9 http://apache.lab.yjerry.tw
* Added apache.lab.yjerry.tw:80:172.18.0.9 to DNS cache
* Hostname apache.lab.yjerry.tw was found in DNS cache
* Trying 172.18.0.9:80...
* Connected to apache.lab.yjerry.tw (172.18.0.9) port 80
> GET / HTTP/1.1
> Host: apache.lab.yjerry.tw
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< date: Tue, 29 Oct 2024 17:39:10 GMT
< server: istio-envoy
< last-modified: Mon, 11 Jun 2007 18:53:14 GMT
< etag: "2d-432a5e4a73a80"
< accept-ranges: bytes
< content-length: 45
< content-type: text/html
< x-envoy-upstream-service-time: 0
<
<html><body><h1>It works!</h1></body></html>
* Connection #0 to host apache.lab.yjerry.tw left intact
Apache resolves fine too, so you can now test your Istio Ingress configuration entirely on your local machine!
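By the way, --resolve is not the only trick: since Istio matches on the Host header, you can also hit the load balancer IP directly and set the header yourself, which should behave the same way:
curl -H "Host: nginx.lab.yjerry.tw" http://172.18.0.9/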
Finally, remove the Istio resources:
kubectl delete -f service-gateway.yaml -f nginx.yaml -f apache.yaml
istioctl uninstall --purge
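Note that istioctl uninstall --purge removes the Istio control plane but leaves the istio-system namespace behind; for a fully clean slate you can delete it as well:
kubectl delete namespace istio-system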
Cleaning up the test KinD cluster#
First, go to the cloud-provider-kind terminal and press Ctrl + C to stop it:
I1030 02:00:11.476128 20325 node_controller.go:271] Update 4 nodes status took 113.554874ms.
^CI1030 02:00:21.887163 20325 app.go:69] Exiting: received signal
I1030 02:00:21.887609 20325 controller.go:304] Cleaning resources for cluster cloud-provider-test
I1030 02:00:21.887734 20325 controller.go:253] Shutting down service controller
Then delete the test cluster with kind:
kind delete cluster --name=cloud-provider-test
Output:
Deleting cluster "cloud-provider-test" ...
Deleted nodes: ["cloud-provider-test-worker" "cloud-provider-test-control-plane" "cloud-provider-test-worker3" "cloud-provider-test-worker2"]
Conclusion#
We can finally test Load Balancers properly inside KinD. I have to say, this feature is genuinely convenient; it closes the long-standing gap of not being able to test Load Balancers locally. On Linux it works out of the box today. On Mac, however, when I tried it in an OrbStack environment, connections from the host could not get through; I'm not yet sure whether that's a Cloud Provider Kind issue or an OrbStack issue, and since I don't have Docker installed on my Mac I couldn't test that path.
While digging into this problem, I also read through how Cloud Provider Kind is implemented, and I'll share that with you in a future post!
Also, I've set up my own page on Buy Me a Coffee. If this article helped you, then besides liking and leaving a comment, feel free to click the Buy Me a Coffee button below the article to encourage me to write even better posts. See you in the next one!