K8s Learning Notes - Tools


Following up on K8s Learning Notes - Manual Installation with kubeadm, this post collects some commonly used K8s tools, based mainly on the official K8s Addons list.


Installing Add-ons

  • K8s Dashboard
  • Rook
  • Weave Scope

Deploying the Dashboard

Deploy kubernetes-dashboard; see the official documentation.

~# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
serviceaccount/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal unchanged
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard unchanged

~# kubectl get pods -n kube-system | grep dashboard
... (omitted) ...
kubernetes-dashboard-5dd89b9875-5mx9n 1/1 Running 0 16s
... (omitted) ...

## Start the proxy
~# kubectl proxy

## Open the following URL in a browser
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
# Log in with a token obtained via RBAC

Using RBAC (Role-Based Access Control)

Reference: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user

admin.yaml
# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
# admin-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Then run kubectl apply -f admin.yaml. Once that is done, retrieve the admin-user token:

~$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-txkst
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: fb1c2dcb-5ebf-11e9-8d1e-92cde7b04430

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5

Copy the token, go back to the dashboard, and choose the token login method.
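The describe/grep pipeline above prints the whole secret; a shorter sketch that pulls out only the token (assuming the ServiceAccount is named admin-user, as created above):

```shell
# Find the admin-user token secret (the name pattern is an assumption based
# on the ServiceAccount created above), then print just the decoded token.
SECRET_NAME=$(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
kubectl -n kube-system get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 --decode
```

Note that `-o jsonpath` returns the raw base64-encoded secret data, hence the explicit decode (which `kubectl describe` does for you).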

Deploying Persistent Volumes (PV) and Persistent Volume Claims (PVC)

This section installs storage support for K8s PVs / PVCs; once it is in place, StatefulSets that request persistent storage can run.

Here we install Rook; see the official documentation.

1. Create the namespace, CRDs, and RBAC resources

The first step creates a namespace for Rook to use, then the RBAC-related resources, including the following:

  • Namespace: rook-ceph
  • CRD (Custom Resource Definition):
    • cephclusters
    • cephfilesystems
    • cephnfses
    • cephobjectstores
    • cephobjectstoreusers
    • cephblockpools
    • volumes
  • RBAC-related:
    • ClusterRole: rook-ceph-cluster-mgmt, rook-ceph-global, rook-ceph-mgr-cluster, rook-ceph-mgr-system
    • Role: rook-ceph-system, rook-ceph-osd, rook-ceph-mgr
    • ServiceAccount: rook-ceph-system, rook-ceph-osd, rook-ceph-mgr
    • RoleBinding: rook-ceph-system, rook-ceph-cluster-mgmt, rook-ceph-osd, rook-ceph-mgr, rook-ceph-mgr-system
    • ClusterRoleBinding: rook-ceph-global, rook-ceph-mgr-cluster
~# kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/common.yaml
namespace/rook-ceph created

customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created

clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
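To sanity-check step 1 before moving on, the created CRDs and RBAC objects can be listed; a quick sketch (resource names are taken from the output above):

```shell
# The CRDs registered by common.yaml all mention ceph or rook
kubectl get crd | grep -E 'ceph|rook'

# ServiceAccounts, Roles, and RoleBindings in the rook-ceph namespace
kubectl -n rook-ceph get serviceaccount,role,rolebinding
```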

2. Create the Rook operator deployment

Create the related pods:

~# kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml

## The first run takes a while; the rook/ceph image is about 700 MB
~# kubectl get po -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-agent-dcgb8 1/1 Running 0 2m
rook-ceph-agent-jcpmd 1/1 Running 0 2m
rook-ceph-operator-65b65fbd66-9m58m 1/1 Running 0 3m35s
rook-discover-88mf5 1/1 Running 0 2m
rook-discover-wggnl 1/1 Running 0 2m

This deployment includes the agent, operator, and discover pods. Once all of them are in the Running state, you can deploy the Ceph cluster.
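Rather than re-running `kubectl get po` by hand, the wait can be scripted with `kubectl wait`; a sketch assuming the operator pod carries the label `app=rook-ceph-operator` (true for the example operator.yaml, but worth verifying on your cluster):

```shell
# Block until the operator pod reports Ready (up to 5 minutes).
kubectl -n rook-ceph wait --for=condition=Ready pod \
  -l app=rook-ceph-operator --timeout=300s

# Quick count of Running pods in the namespace.
kubectl -n rook-ceph get po --no-headers | grep -c ' Running '
```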

3. Deploy the Ceph Cluster

Run the following to deploy the Ceph cluster:

~# kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml
cephcluster.ceph.rook.io/rook-ceph created

## After a short wait, rook-ceph-mon-a will appear
~# kubectl get po -n rook-ceph
rook-ceph-agent-dcgb8 1/1 Running 0 127m
rook-ceph-agent-jcpmd 1/1 Running 2 127m
rook-ceph-mgr-a-77d8645896-8blh4 1/1 Running 0 118m
rook-ceph-mon-a-9cbbbf7b-2w6zk 1/1 Running 1 123m
rook-ceph-mon-b-775ff945c5-77vtv 1/1 Running 0 122m
rook-ceph-mon-c-59695fb97b-drnww 1/1 Running 1 122m
rook-ceph-operator-65b65fbd66-9m58m 1/1 Running 1 129m
rook-discover-88mf5 1/1 Running 0 127m
rook-discover-wggnl 1/1 Running 1 127m

After the deployment completes, the cluster has:

  • Ceph Monitor: three
  • Ceph Manager: one

Deployment

~$ kubectl get deploy -n rook-ceph
NAME READY UP-TO-DATE AVAILABLE AGE
rook-ceph-mgr-a 1/1 1 1 119m
rook-ceph-mon-a 1/1 1 1 124m
rook-ceph-mon-b 1/1 1 1 123m
rook-ceph-mon-c 1/1 1 1 123m
rook-ceph-operator 1/1 1 1 130m
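With the Ceph cluster up, serving PVCs (the point of this section) still needs a block pool and a StorageClass. A hedged sketch modeled on the Rook example manifests of this era, using the flex-volume provisioner `ceph.rook.io/block`; pool, class, and claim names are illustrative:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3              # one replica per Ceph node
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
---
# An example claim, like those a StatefulSet volumeClaimTemplate generates
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

After applying this, `kubectl get pvc test-claim` should show the claim bound to a dynamically provisioned PV.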

Installing Weave Scope (a monitoring tool)

Weave Scope is a graphical UI for exploring the relationships and state of K8s resources; the cluster's CNI needs to be Weave Net for it to work.

# Install method 1
kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=1.14"

## Set up port forwarding
kubectl port-forward -n weave "$(kubectl get -n weave pod --selector=weave-scope-component=app -o jsonpath='{.items..metadata.name}')" 4040

# Install method 2: from source
git clone https://github.com/weaveworks/scope
cd scope
kubectl apply -f examples/k8s

## Browse
kubectl port-forward svc/weave-scope-app -n weave 8040:80
# http://127.0.0.1:8040

## Delete
# kubectl delete -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=1.14"

As shown in the screenshot below:


Further Reading

Related posts on this site: K8s

VPC - Network Planning



