Kubernetes
I. Fundamentals
1. Concepts and Terminology
2. Kubernetes Features
3. Cluster Components
4. Abstract Objects
5. Accelerated Image Downloads
II. Installation and Deployment with kubeadm
1. Base Environment Preparation
2. Installing the Container Runtime (Docker)
3. Installing the Container Runtime (Containerd)
4. Advanced Containerd Usage
5. Deploying a Kubernetes Cluster
6. Deploying the Calico Network Component
7. Deploying NFS File Storage
8. Deploying the ingress-nginx Proxy
9. Deploying the Helm Package Manager
10. Deploying the Traefik Proxy
11. Deploying the Dashboard Admin Panel (official)
12. Deploying the KubeSphere Admin Panel (recommended)
13. Deploying the Metrics Monitoring Component
14. Deploying Prometheus Monitoring
15. Deploying ELK Log Collection
16. Deploying a Harbor Private Image Registry
17. Deploying MinIO Object Storage
18. Deploying the Jenkins CI Tool
III. kubectl Commands
1. Command Format
2. Common Node Commands
3. Common Pod Commands
4. Common Controller Commands
5. Common Service Commands
6. Common Storage Commands
7. Day-to-Day Command Summary
8. Common kubectl Commands
IV. Resource Objects
1. Resource Objects in K8s
2. YAML Files
3. Kubernetes YAML Field Reference
4. Managing Namespace Resources
5. Labels and Label Selectors
6. The Pod Resource Object
7. Pod Lifecycle and Probes
8. Resource Requests and Limits
9. Pod Quality of Service (Priority)
V. Resource Controllers
1. Pod Controllers
2. The ReplicaSet Controller
3. The Deployment Controller
4. The DaemonSet Controller
5. The Job Controller
6. The CronJob Controller
7. The StatefulSet Controller
8. PodDisruptionBudget (PDB)
VI. Service and Ingress
1. Introduction to the Service Resource
2. Service Discovery
3. Service (ClusterIP)
4. Service (NodePort)
5. Service (LoadBalancer)
6. Service (ExternalName)
7. Custom Endpoints
8. Headless Service
9. The Ingress Resource
10. nginx-Ingress Example
VII. Traefik
1. Key Concepts
2. Introduction
3. Deployment and Configuration
4. Routing (IngressRoute)
5. Middleware
6. Services (TraefikService)
7. Plugins
8. Traefik Hub
9. Configuration Discovery (Consul)
10. Configuration Discovery (Etcd)
VIII. Storage
1. ConfigMap Configuration Collections
2. Secrets for Sensitive Data
3. emptyDir Ephemeral Storage
4. hostPath Node Storage
5. Persistent Volumes
6. downwardAPI Volumes
7. Local Persistent Storage (local PV)
IX. Rook
1. Rook Introduction
2. Ceph
3. Rook Deployment
4. RBD Block Storage Service
5. CephFS Shared File Storage
6. RGW Object Storage Service
7. Maintaining Rook Storage
X. Networking
1. Networking Overview
2. Network Types
3. The Flannel Network Plugin
4. Network Policies
5. Networking and Policy Examples
XI. Security
1. Security Context
2. Access Control
3. Authentication
4. Authorization
5. Admission Control
6. Examples
XII. Pod Scheduling
1. Scheduler Overview
2. Label-Based Scheduling
3. Node Affinity Scheduling
4. Pod Affinity Scheduling
5. Taints and Tolerations
6. Pinning Pods to Nodes
XIII. Extending the System
1. Custom Resource Definitions (CRD)
2. Custom Controllers
XIV. Resource Metrics and HPA
1. Resource Monitoring and Metrics
2. Installing the Monitoring Components
3. Resource Metrics and Their Applications
4. Automatic Elastic Scaling
XV. Helm
1. Helm Basics
2. Installing Helm
3. Common Helm Commands
4. Helm Charts
5. Custom Charts
6. Exporting YAML Files with Helm
XVI. Highly Available K8s Deployment
1. HA Deployment with kubeadm
2. Offline Binary Deployment of K8s
3. Other HA Deployment Approaches
XVII. Routine Maintenance
1. Raising a Node's Pod Count Limit
2. Replacing Expired Cluster Certificates
3. Changing Certificate Validity Periods
4. Upgrading the K8s Version
5. Adding Worker Nodes
6. Enabling Pod Scheduling on Master Nodes
7. Managing the K8s Cluster from a Node Outside It
8. Deleting a Local Cluster
9. Routine Troubleshooting
10. Node Maintenance Mode
11. Multi-Environment Management with Kustomize
12. Repairing Failed etcd Nodes
13. Cluster hosts Records
14. Backing Up, Restoring, and Migrating K8s Clusters with Velero
15. Fixing Namespaces That Cannot Be Deleted
16. Deleting All Resources Matching a Given Name
XVIII. K8s Exam Questions
1. Preparation
2. Troubleshooting
3. Workloads and Scheduling
4. Services and Networking
5. Storage
6. Cluster Architecture, Installation and Configuration
This document is published with MrDoc
1. Preparation
# Exam Registration Materials

Registration guide: [https://training.linuxfoundation.cn/news/308](https://training.linuxfoundation.cn/news/308)<br />Exam syllabus: [https://training.linuxfoundation.cn/certificates/1](https://training.linuxfoundation.cn/certificates/1)<br />Exam FAQ: [https://docs.linuxfoundation.org/tc-docs/certification/lf-handbook2/taking-the-exam](https://docs.linuxfoundation.org/tc-docs/certification/lf-handbook2/taking-the-exam)

# Exam Tips

## Listing the k8s resource types

```bash
kubectl api-resources --namespaced=true   ## resource types that live in a namespace
kubectl api-resources --namespaced=false  ## resource types that do not live in a namespace
```

## Viewing kubectl example commands

```bash
[root@k8s-master ~]# kubectl create clusterrole -h
Create a cluster role.

Examples:
  # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
  kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods

  # Create a cluster role named "pod-reader" with ResourceName specified
  kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod

  # Create a cluster role named "foo" with API Group specified
  kubectl create clusterrole foo --verb=get,list,watch --resource=rs.extensions

  # Create a cluster role named "foo" with SubResource specified
  kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status

  # Create a cluster role name "foo" with NonResourceURL specified
  kubectl create clusterrole "foo" --verb=get --non-resource-url=/logs/*

  # Create a cluster role name "monitoring" with AggregationRule specified
  kubectl create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true"

Options:
……
    --verb=[]: Verb that applies to the resources contained in the rule

Usage:
  kubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename] [--dry-run=server|client|none] [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
```

## Generating YAML quickly

```bash
# Export the YAML to a file without creating the resource
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx -o yaml --dry-run > nginx.yaml
W0516 11:15:34.619473  469162 helpers.go:639] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-master ~]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

# Dump an existing resource in YAML format with get
[root@k8s-master ~]# kubectl get svc myapp -o yaml > myapp.svc
[root@k8s-master ~]# cat myapp.svc
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"ports":[{"port":80,"targetPort":80}],"selector":{"app":"myapp"},"type":"ClusterIP"}}
  creationTimestamp: "2023-05-11T11:18:49Z"
  name: myapp
  namespace: default
  resourceVersion: "95775"
  uid: 94d82f07-fad5-4713-8e26-67a22baa5816
spec:
  clusterIP: 10.106.161.89
  clusterIPs:
  - 10.106.161.89
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```

## Editing resource manifests

Some exam tasks require modifying an existing pod's configuration. Back up the exported YAML file before editing it, so a bad change can be reverted.<br />YAML copied from the official docs into vim may contain stray blank lines; run `:g/^$/d` to delete them.

# Key Documentation Paths

The official documentation may be consulted during the exam and its YAML snippets copied. Memorize the documentation path for each exam topic and the rough location of the relevant configuration in advance, so no exam time is wasted searching.

## RBAC

Path: Reference → API Access Control → Using RBAC Authorization<br />Link: [https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/](https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/)

## Upgrading the k8s version

Path: Tasks → Administer a Cluster → Administration with kubeadm → Upgrading kubeadm clusters<br />Link: [https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/](https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)

## ETCD backup and restore

Path: Tasks → Administer a Cluster → Operating etcd clusters for Kubernetes<br />Link: [https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/](https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/)

## Adding a sidecar container to a pod

Path: Concepts → Cluster Administration → Logging Architecture<br />Link: [https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/](https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/)

## Scheduling a pod to a specified node

Path: Tasks → Configure Pods and Containers → Assign Pods to Nodes<br />Link: [https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/](https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/)

## Network policies

Path: Concepts → Services, Load Balancing, and Networking → Network Policies<br />Link: [https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/](https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/)

## Exposing services with a Service

Path: Concepts → Services, Load Balancing, and Networking → Service<br />Link: [https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/](https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/)

## Ingress

Path: Concepts → Services, Load Balancing, and Networking → Ingress<br />Link: [https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/](https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/)

## Using a PV in a pod

Path: Tasks → Configure Pods and Containers → Configure a Pod to Use a PersistentVolume for Storage<br />Link: [https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/](https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)

# Preparing a Practice Environment

## Deploying k8s

Deploy a 3-node k8s cluster, version 1.24.x, on Ubuntu (CentOS works too), with Containerd as the container runtime. Reference articles:

- Environment preparation: [https://www.cuiliangblog.cn/detail/section/15186285](https://www.cuiliangblog.cn/detail/section/15186285)
- Installing containerd: [https://www.cuiliangblog.cn/detail/section/99861101](https://www.cuiliangblog.cn/detail/section/99861101)
- Deploying kubernetes: [https://www.cuiliangblog.cn/detail/section/15188146](https://www.cuiliangblog.cn/detail/section/15188146)

This environment runs Rocky Linux 8.7 with kernel 4.18.0-425.13.1.el8_7.x86_64, Kubernetes 1.24.13, and containerd 1.6.4. The cluster layout:

| Hostname | IP | Role | Extra setup |
| --- | --- | --- | --- |
| k8s-master | 192.168.10.20 | master node | tainted with NoSchedule |
| k8s-work1 | 192.168.10.21 | worker node | runs a standalone etcd service, used to simulate etcd backup and restore |
| k8s-work2 | 192.168.10.22 | worker node | kubelet service stopped and disabled on boot |

## Deploying ingress-nginx

Reference article: [https://www.cuiliangblog.cn/detail/section/15188441](https://www.cuiliangblog.cn/detail/section/15188441)

## Deploying shared storage

Reference article: [https://www.cuiliangblog.cn/detail/section/116191364](https://www.cuiliangblog.cn/detail/section/116191364)

## Deploying metrics-server

Reference article: [https://www.cuiliangblog.cn/detail/section/15189166](https://www.cuiliangblog.cn/detail/section/15189166)

## Deploying the etcd service

Deploy a standalone etcd on the work1 node to simulate the etcd backup and restore operations from the exam.

- Download and install etcd

```bash
# Download the package
[root@k8s-work1 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.23/etcd-v3.4.23-linux-amd64.tar.gz
[root@k8s-work1 ~]# ls
etcd-v3.4.23-linux-amd64.tar.gz
# Extract to the target directory
[root@k8s-work1 ~]# tar -zxf etcd-v3.4.23-linux-amd64.tar.gz -C /usr/local
[root@k8s-work1 ~]# cd /usr/local/etcd-v3.4.23-linux-amd64/
[root@k8s-work1 etcd-v3.4.23-linux-amd64]# ls
Documentation  README-etcdctl.md  README.md  READMEv2-etcdctl.md  etcd  etcdctl
# Add to PATH
[root@k8s-work1 etcd-v3.4.23-linux-amd64]# vim /etc/profile
export PATH="$PATH:/usr/local/etcd-v3.4.23-linux-amd64"
[root@k8s-work1 etcd-v3.4.23-linux-amd64]# source /etc/profile
# Verify
[root@k8s-work1 etcd-v3.4.23-linux-amd64]# etcdctl version
etcdctl version: 3.4.23
API version: 3.4
[root@k8s-work1 etcd-v3.4.23-linux-amd64]# etcd --version
etcd Version: 3.4.23
Git SHA: c8b7831
Go Version: go1.17.13
Go OS/Arch: linux/amd64
```

- Configure certificates

```bash
# Download and install cfssl
[root@k8s-work1 ~]# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64
[root@k8s-work1 ~]# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64
[root@k8s-work1 ~]# mv cfssl_1.6.3_linux_amd64 /usr/bin/cfssl
[root@k8s-work1 ~]# mv cfssljson_1.6.3_linux_amd64 /usr/bin/cfssljson
[root@k8s-work1 ~]# chmod +x /usr/bin/{cfssl,cfssljson}
[root@k8s-work1 ~]# cfssl version
Version: 1.6.3
Runtime: go1.18
# Create the certificate directory first (this step was missing in the original)
[root@k8s-work1 ~]# mkdir -p /etc/etcd/pki
[root@k8s-work1 ~]# cd /etc/etcd/pki
# Create the CA certificate
[root@k8s-work1 pki]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "43800h"
    },
    "profiles": {
      "server": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "server auth"]
      },
      "client": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "client auth"]
      },
      "peer": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
[root@k8s-work1 pki]# cat > ca-csr.json <<EOF
{
  "CN": "Etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "Etcd",
      "OU": "CA"
    }
  ]
}
EOF
[root@k8s-work1 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-work1 pki]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
# Generate the server certificate
[root@k8s-work1 pki]# cat > server-csr.json <<EOF
{
  "CN": "server",
  "hosts": [
    "127.0.0.1",
    "192.168.10.21"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
[root@k8s-work1 pki]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server-csr.json | cfssljson -bare server
[root@k8s-work1 pki]# ls server*
server.csr  server-csr.json  server-key.pem  server.pem
# Generate the client certificate
[root@k8s-work1 pki]# cat > client-csr.json <<EOF
{
  "CN": "client",
  "hosts": [
    ""
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
[root@k8s-work1 pki]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssljson -bare client
[root@k8s-work1 pki]# ls client*
client.csr  client-csr.json  client-key.pem  client.pem
# Update the system certificate store
[root@k8s-work1 pki]# dnf install ca-certificates -y
[root@k8s-work1 pki]# update-ca-trust
```

- Add configuration files

```bash
# Create the data and configuration directories
[root@k8s-work1 ~]# mkdir -p /etc/etcd
[root@k8s-work1 ~]# mkdir -p /var/lib/etcd
# systemd unit file
[root@k8s-work1 ~]# cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/local/etcd-v3.4.23-linux-amd64/etcd --config-file=/etc/etcd/etcd.conf
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Create the etcd configuration file
[root@k8s-work1 ~]# cat > /etc/etcd/etcd.conf <<EOF
# node name
name: "etcd1"
# data directory
data-dir: "/var/lib/etcd"
# client listen address advertised to the rest of the cluster
advertise-client-urls: "https://192.168.10.21:2379"
# list of addresses to listen on for client requests
listen-client-urls: "https://192.168.10.21:2379,https://127.0.0.1:2379"
# listen address for peer (node-to-node) traffic
listen-peer-urls: "https://192.168.10.21:2380"
# peer listen address advertised to the rest of the cluster
initial-advertise-peer-urls: "https://192.168.10.21:2380"
# list of cluster member addresses at startup
initial-cluster: "etcd1=https://192.168.10.21:2380"
# initial cluster token
initial-cluster-token: 'etcd-cluster'
# initial cluster state: new creates a cluster, existing joins one
initial-cluster-state: 'new'
# logging
logger: zap
# client TLS
client-transport-security:
  cert-file: "/etc/etcd/pki/server.pem"
  key-file: "/etc/etcd/pki/server-key.pem"
  client-cert-auth: True
  trusted-ca-file: "/etc/etcd/pki/ca.pem"
# peer TLS
peer-transport-security:
  cert-file: "/etc/etcd/pki/server.pem"
  key-file: "/etc/etcd/pki/server-key.pem"
  client-cert-auth: True
  trusted-ca-file: "/etc/etcd/pki/ca.pem"
EOF
# Start etcd and enable it on boot
[root@k8s-work1 ~]# systemctl daemon-reload
[root@k8s-work1 ~]# systemctl start etcd
[root@k8s-work1 ~]# systemctl enable etcd
```

- Verify access

```bash
[root@k8s-work1 ~]# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.10.21:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/client.pem --key=/etc/etcd/pki/client-key.pem endpoint status --cluster -w table
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.10.21:2379 | f8faaf98bca480d2 |  3.4.23 |   20 kB |      true |      false |         2 |          4 |                  4 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-work1 ~]# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.10.21:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/client.pem --key=/etc/etcd/pki/client-key.pem put /foo/bar "hello world"
OK
[root@k8s-work1 ~]# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.10.21:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/client.pem --key=/etc/etcd/pki/client-key.pem get /foo/bar
/foo/bar
hello world
```

- Create a backup snapshot

```bash
[root@k8s-work1 ~]# mkdir -p /data/backup/
[root@k8s-work1 ~]# ETCDCTL_API=3 etcdctl snapshot save /data/backup/etcd-snapshot-previous.db --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/client.pem --key=/etc/etcd/pki/client-key.pem
{"level":"info","ts":1684481727.6203024,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/data/backup/etcd-snapshot.db.part"}
{"level":"info","ts":"2023-05-19T15:35:27.677+0800","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1684481727.678215,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2023-05-19T15:35:27.714+0800","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1684481727.7291808,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"20 kB","took":0.106096152}
{"level":"info","ts":1684481727.7293956,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/data/backup/etcd-snapshot.db"}
Snapshot saved at /data/backup/etcd-snapshot.db
```

## Creating other resources

```bash
[root@k8s-master ~]# kubectl create ns app-team1
namespace/app-team1 created
[root@k8s-master ~]# kubectl create ns my-app
namespace/my-app created
[root@k8s-master ~]# kubectl create deployment front-end --image=nginx
deployment.apps/front-end created
[root@k8s-master ~]# kubectl create deployment loadbalancer --image=nginx
deployment.apps/loadbalancer created
[root@k8s-master ~]# kubectl create deployment cpu-utilizer --image=docker.elastic.co/logstash/logstash:8.7.1 --replicas=3
deployment.apps/cpu-utilizer created
[root@k8s-master ~]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done
    # mount the emptyDir so the log lands in the shared volume
    # (the mount was missing in the original manifest)
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
[root@k8s-master ~]# kubectl apply -f pod.yaml
pod/legacy-app created
[root@k8s-master ~]# cat > filebeat.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: filebeat
spec:
  containers:
  - name: filebeat
    image: docker.elastic.co/beats/filebeat:8.7.1
EOF
[root@k8s-master ~]# kubectl apply -f filebeat.yaml
pod/filebeat created
```

## Labeling the work1 node

One exam task schedules a pod onto a node with a specified label, so label the work1 node in advance.

```bash
[root@k8s-master ~]# kubectl label nodes k8s-work1 disktype=ssd
node/k8s-work1 labeled
[root@k8s-master ~]# kubectl get nodes --show-labels | grep ssd
k8s-work1   Ready   <none>   10d   v1.24.13   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-work1,kubernetes.io/os=linux
```

## Stopping services on work2

One task troubleshoots a NotReady node; the cause is simply that the kubelet service is not running.

```bash
[root@k8s-work2 ~]# systemctl stop kubelet
[root@k8s-work2 ~]# systemctl disable kubelet
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
```

## Tainting the master node

One task counts how many worker nodes are ready (excluding nodes carrying a NoSchedule taint).

```bash
[root@k8s-master ~]# kubectl taint nodes k8s-master key=value:NoSchedule
node/k8s-master tainted
```
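The NoSchedule taint feeds the "count the ready worker nodes" task. A minimal sketch of the filtering logic, run here against canned sample output so it can be tested offline — the three node lines and the trailing taint column are made up for illustration; on a real cluster you would combine `kubectl get nodes` with `kubectl describe nodes ... | grep NoSchedule` instead:

```shell
#!/bin/sh
# Simulated node listing: NAME STATUS ROLES AGE VERSION TAINT (sample data)
nodes() {
cat <<'EOF'
k8s-master   Ready    control-plane   10d   v1.24.13   NoSchedule
k8s-work1    Ready    <none>          10d   v1.24.13   <none>
k8s-work2    NotReady <none>          10d   v1.24.13   <none>
EOF
}
# Keep lines whose STATUS is Ready and whose taint column is not NoSchedule,
# then count them — here only k8s-work1 qualifies.
ready=$(nodes | awk '$2 == "Ready" && $NF != "NoSchedule"' | wc -l)
echo "ready workers: $ready"
```

The same awk predicate works on real `kubectl` output once the taint information is joined into the line.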
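The `:g/^$/d` vim trick for YAML pasted from the docs also has a non-interactive counterpart, handy when cleaning files from a script. A small sketch using GNU `sed` — the sample file content and the `/tmp/pasted.yaml` path are made up:

```shell
# Write a pasted-from-docs style file containing stray blank lines (sample data)
printf 'apiVersion: v1\n\nkind: Pod\n\n\nmetadata:\n  name: demo\n' > /tmp/pasted.yaml
# Same effect as vim's :g/^$/d — delete every empty line, in place
sed -i '/^$/d' /tmp/pasted.yaml
cat /tmp/pasted.yaml
```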
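The legacy-app container command used for the sidecar-logging task is just a shell counter loop appending timestamped lines. A bounded local version of that loop — three iterations instead of `while true`, and `/tmp` instead of the pod's `/var/log`, both changed only so it terminates and runs outside a container:

```shell
#!/bin/sh
log=/tmp/legacy-app.log
: > "$log"   # truncate any previous run
i=0
while [ "$i" -lt 3 ]; do
  # same "<counter>: <timestamp>" format the pod writes to /var/log/legacy-app.log
  echo "$i: $(date)" >> "$log"
  i=$((i + 1))
done
cat "$log"
```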
Nathan
June 22, 2024, 12:48 p.m.