kubeadm Installation and Upgrade

1: Introduction:

1.1: The latest CNCF landscape:

https://landscape.cncf.io/

1.2: The cloud-native ecosystem:

http://dockone.io/article/3006

1.3: Overview of the main CNCF cloud-native frameworks:

https://www.kubernetes.org.cn/5482.html

1.4: Core advantages of K8s:

Automatic container creation and deletion driven by yaml files
Faster elastic horizontal scaling of workloads
Dynamic discovery of newly scaled-out containers, automatically exposing them to users
Simpler and faster application code upgrades and rollbacks
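As a quick illustration of these points (the Deployment name myapp and its manifest are hypothetical), the day-to-day commands look like this:

# Create/delete containers declaratively from a yaml file
kubectl apply -f myapp.yaml
kubectl delete -f myapp.yaml
# Elastic horizontal scaling
kubectl scale deployment myapp --replicas=5
# Application code upgrade and rollback
kubectl set image deployment/myapp myapp=myapp:v2
kubectl rollout undo deployment/myapp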

(screenshot)

1.5: k8s component introduction:

https://kubernetes.io/zh/ # official site

https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-apiserver/

kube-apiserver: the Kubernetes API server validates and configures data for the api objects, including pods, services, replicationcontrollers and other api objects. The API Server exposes REST operations and provides the front-end access point to the cluster's shared state; all other kubernetes components interact through this front end.

https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-scheduler/

kube-scheduler is the Kubernetes pod scheduler, responsible for assigning Pods to eligible nodes. For each Pod in the scheduling queue it determines, based on constraints and available resources, the nodes on which that Pod can legitimately be placed. kube-scheduler is a policy-rich, topology-aware, workload-specific component; it has to weigh individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, and more.

https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-controller-manager/

kube-controller-manager: as the management control center inside the cluster, the Controller Manager manages Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount) and resource quotas (ResourceQuota). When a Node unexpectedly goes down, the Controller Manager promptly detects this and runs an automated repair flow, making sure the pod replicas in the cluster always stay in the desired working state.

https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-proxy/

kube-proxy: the Kubernetes network proxy runs on the nodes. It reflects the services defined in the Kubernetes API on each node and can do simple TCP, UDP and SCTP stream forwarding, or round-robin TCP, UDP and SCTP forwarding, across a set of backends. The user must create a service through the apiserver API to configure the proxy; in essence, kube-proxy implements access to Kubernetes services by maintaining network rules on the host and performing connection forwarding.

https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kubelet/

kubelet: the agent component that runs on every worker node. It watches the pods assigned to its node; its duties are:

Report the node's status information to the master

Accept instructions and create docker containers in Pods

Prepare the data volumes a Pod needs

Return the running status of pods

Run container health checks on the node

https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/

etcd:

etcd was developed by CoreOS and is currently the default key-value data store used by Kubernetes to hold all cluster data. It supports distributed clustering; in production, a regular backup mechanism must be provided for the etcd data.

https://kubernetes.io/zh/docs/concepts/overview/components/ # component overview

#Core components:
	apiserver: the single entry point for resource operations; provides authentication, authorization, access control, API registration and discovery
	controller manager: maintains cluster state, e.g. failure detection, automatic scaling, rolling updates
	scheduler: schedules resources, placing Pods onto the appropriate machines according to the configured scheduling policies
	kubelet: maintains the container lifecycle and also manages Volumes (CVI) and networking (CNI)
	Container runtime: image management and the actual running of Pods and containers (CRI)
	kube-proxy: provides in-cluster service discovery and load balancing for Services
	etcd: stores the state of the entire cluster
#Optional components:
    kube-dns/coredns: provides DNS for the whole cluster
    	# Warning: kube-dns support in kubeadm is deprecated since v1.18 and was removed in v1.21.
    Ingress Controller: provides an external entry point for services
    Heapster: resource monitoring # unsupported since 1.11; prometheus is used nowadays
    Dashboard: provides a GUI
    Federation: provides clusters spanning availability zones
    Fluentd-elasticsearch: cluster log collection, storage and querying
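A quick way to see most of these components on a running kubeadm cluster (a sketch; exact names vary by setup):

# control-plane components run as static pods in the kube-system namespace
kubectl get pods -n kube-system -o wide
# coredns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy should all be Running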

2: k8s Installation and Deployment:

Installation plan:

(screenshot)

2.1: Installation methods:

2.1.1: Deployment tools:

Install with batch deployment tools (ansible/saltstack), manual binaries, kubeadm, or apt-get/yum; the services start as daemons on the host, similar to starting Nginx with a service script.

2.1.2: kubeadm:

https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/release.md

# kubernetes development cycle

Use kubeadm, the official k8s deployment tool, for automated installation. Docker and other components need to be installed on the master and node hosts first; after initialization, the control-plane services and the services on the nodes all run as pods.

2.1.3: Installation notes:

Note:

Disable swap

Disable selinux

Disable iptables

Tune kernel parameters and resource limits (a prep sketch follows the sysctls below)

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1 # packets forwarded by a layer-2 bridge are matched against the host's iptables FORWARD rules
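A minimal per-node preparation sketch matching the notes above (Ubuntu assumed; selinux does not apply there):

swapoff -a                               # disable swap immediately
sed -ri '/\sswap\s/s/^/#/' /etc/fstab    # keep it disabled across reboots
cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter                    # the bridge sysctls require this module
sysctl --system                          # apply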

2.2: Deployment process:

Component planning and version selection:

https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/ # CRI runtime selection

https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/ # CNI selection

2.2.1: Detailed steps:

1. Prepare the base environment
2. Deploy harbor and the haproxy high-availability reverse proxy, giving the control-plane API access point high availability
3. Install the specified versions of kubeadm, kubelet, kubectl and docker on all master nodes
4. Install the specified versions of kubeadm, kubelet and docker on all node nodes; kubectl is optional on the nodes, depending on whether you need to run kubectl there for cluster and pod management
5. Run the kubeadm init initialization command on a master node
6. Verify the master node status
7. On the node nodes, use the kubeadm command to join themselves to the k8s master (requires the token generated on the master)
8. Verify the node status
9. Create pods and test network communication
10. Deploy the Dashboard web service
11. k8s cluster upgrade case study

The latest upstream release at the time of writing is 1.21.0. Since a version-upgrade case study follows, a 1.21.x version cannot be used to demonstrate the later upgrade:

(screenshot)

So use a near-latest 1.20.x version, 1.20.5 or an earlier 1.20.x release, and upgrade later to the latest stable 1.20.x release, 1.20.6:

(screenshot)

2.2.2: Base environment preparation:

Server environment:

Do a minimal install of the base OS. With centos, disable the firewall, selinux and swap, update the package sources, synchronize time and install common commands; verify the base configuration after a reboot. centos 7.5 or later is recommended, or a stable ubuntu release, 18.04 or later.

Role          Hostname                        IP address
k8s-master1   kubeadm-master1.example.local   172.31.3.201
k8s-master2   kubeadm-master2.example.local   172.31.3.202
k8s-master3   kubeadm-master3.example.local   172.31.3.203
ha1           ha1.example.local               172.31.3.204
ha2           ha2.example.local               172.31.3.205
harbor        harbor.example.local            172.31.3.206
node1         node1.example.local             172.31.3.207
node2         node2.example.local             172.31.3.208
node3         node3.example.local             172.31.3.209

2.3: High-availability reverse proxy:

Build a high-availability reverse-proxy setup based on keepalived and HAProxy, providing a highly available reverse proxy for the k8s apiserver.

2.3.1: keepalived installation and configuration:

Install and configure keepalived, and test the VIP failover

Node 1: install and configure keepalived:

root@k8s-ha1:~# apt install keepalived
root@k8s-ha1:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
root@k8s-ha1:~# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.3.188 dev eth0 label eth0:1
    }
}

root@k8s-ha1:~# systemctl restart keepalived
root@k8s-ha1:~# systemctl enable keepalived

Node 2: install and configure keepalived:

root@k8s-ha2:~# apt install keepalived
root@k8s-ha2:~# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.3.188 dev eth0 label eth0:1
    }
}
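Node 2 still needs to be started and enabled like node 1, and the VIP failover is easy to verify (a sketch):

root@k8s-ha2:~# systemctl restart keepalived
root@k8s-ha2:~# systemctl enable keepalived
# the VIP should sit on ha1 (priority 100); stop keepalived there and it should move to ha2
root@k8s-ha1:~# ip addr show eth0 | grep 172.31.3.188
root@k8s-ha1:~# systemctl stop keepalived
root@k8s-ha2:~# ip addr show eth0 | grep 172.31.3.188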

2.3.2: haproxy installation and configuration:

Node 1: install and configure haproxy:

root@k8s-ha1:~# apt install haproxy
root@k8s-ha1:~# vim /etc/haproxy/haproxy.cfg
listen stats
	mode http
	bind 0.0.0.0:9999
	stats enable
	log global
	stats uri /haproxy-status
#	stats auth haadmin:123456
listen k8s-apiserver-6443
    bind 172.31.3.188:6443
    mode tcp
    balance source
    server 172.31.3.201 172.31.3.201:6443 check inter 3s fall 3 rise 5
    server 172.31.3.202 172.31.3.202:6443 check inter 3s fall 3 rise 5
    server 172.31.3.203 172.31.3.203:6443 check inter 3s fall 3 rise 5


root@k8s-ha1:~# systemctl enable haproxy
root@k8s-ha1:~# systemctl restart haproxy

Node 2: install and configure haproxy:

root@k8s-ha2:~# apt install haproxy
root@k8s-ha2:~# vim /etc/haproxy/haproxy.cfg
listen stats
	mode http
	bind 0.0.0.0:9999
	stats enable
	log global
	stats uri /haproxy-status
#	stats auth haadmin:123456
listen k8s-apiserver-6443
    bind 172.31.3.188:6443
    mode tcp
    balance source
    server 172.31.3.201 172.31.3.201:6443 check inter 3s fall 3 rise 5
    server 172.31.3.202 172.31.3.202:6443 check inter 3s fall 3 rise 5
    server 172.31.3.203 172.31.3.203:6443 check inter 3s fall 3 rise 5

root@k8s-ha2:~# systemctl enable haproxy
root@k8s-ha2:~# systemctl restart haproxy
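Note that haproxy binds 172.31.3.188:6443, but the VIP only exists on the current keepalived MASTER. On the backup node haproxy will fail to start unless the kernel is allowed to bind non-local addresses (run on both ha nodes):

echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
sysctl -p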

2.4: harbor

Omitted ~~~

2.5: Installing kubeadm and related components:

Install kubeadm, kubelet, kubectl, docker and the other components on the master and node nodes; the load-balancer servers do not need them.

2.5.1: Version selection:

Install a validated docker release on every master and node node

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v11711 # the validated docker versions

Update the latest validated version of Docker to 19.03 ([#84476]

(screenshot)

2.5.2: Installing docker:

#Install the required system tools
# sudo apt-get update
# apt  -y install apt-transport-https ca-certificates curl software-properties-common

Install the GPG key
# curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
Add the repository

# sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
Update the package index
# apt-get -y update

List the installable Docker versions
# apt-cache madison docker-ce  docker-ce-cli

Install and start docker 19.03.15:
# apt install -y docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal
# systemctl  start docker && systemctl  enable docker

Verify the docker version:
# docker version

2.5.3: Install kubelet, kubeadm and kubectl on all nodes:

Configure the Aliyun repository on all nodes and install the components; kubectl is optional on the node nodes

Configure the Aliyun-mirrored kubernetes source (used to install the kubelet, kubeadm and kubectl commands)

https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11Otippu

(screenshot)

Use the Aliyun kubernetes mirror
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF



Or use the Tsinghua kubernetes mirror
# apt-get update && apt-get install -y apt-transport-https
# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
# echo "deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list

Install kubeadm:
# apt-get update
# apt-cache  madison kubeadm 


# apt-get install kubelet=1.20.14-00 kubeadm=1.20.14-00 kubectl=1.20.14-00  # masters need kubectl
# apt-get install kubelet=1.20.14-00 kubeadm=1.20.14-00 # node nodes do not need kubectl
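Optionally pin the versions so a routine apt upgrade cannot bump them behind your back; release the hold again when deliberately upgrading in section 6:

# apt-mark hold kubelet kubeadm kubectl
# apt-mark unhold kubelet kubeadm kubectl   # before an intentional upgrade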

# kubeadm  version # verify the version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.14", GitCommit:"57a3aa3f13699cf3db9c52d228c18db94fa81876", GitTreeState:"clean", BuildDate:"2021-12-15T14:51:22Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

2.5.4: Verify the kubelet service on the master nodes:

At this point kubelet fails to start and keeps logging errors like the one below; this is expected until kubeadm init generates its configuration:

(screenshot)

2.6: Run the kubeadm init initialization command on a master node:

Run the cluster initialization on any one of the three masters; the cluster only needs to be initialized once.

2.6.1: kubeadm command usage:

https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/ # command options and help

# kubeadm --help
Available Commands:
	alpha #kubeadm commands still in testing
	completion #bash command completion; requires bash-completion
		# mkdir /data/scripts -p
		# kubeadm completion bash > /data/scripts/kubeadm_completion.sh
		# source /data/scripts/kubeadm_completion.sh
		# vim /etc/profile
			source /data/scripts/kubeadm_completion.sh
	config #manage the kubeadm cluster configuration, which is kept in a ConfigMap inside the cluster
		#kubeadm config print init-defaults
	help Help about any command
	init #initialize a Kubernetes control plane
	join #join a node to an existing k8s master
	reset #revert the changes kubeadm init or kubeadm join made to the host

	token #manage tokens
	upgrade #upgrade the k8s version
	version #show version information

2.6.2: kubeadm init options:

Commonly used options are marked with a leading ##
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/ # command usage
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/  # cluster initialization:


root@docker-node1:~# kubeadm  init --help
## --apiserver-advertise-address string  #the local IP the K8S API Server will listen on
## --apiserver-bind-port int32    #the port the API Server binds to, 6443 by default


--apiserver-cert-extra-sans stringSlice #optional extra Subject Alternative Names for the API Server serving certificate; entries can be IP addresses or DNS names.

--cert-dir string    #path where certificates are stored, /etc/kubernetes/pki by default
--certificate-key string  #a key used to encrypt the control-plane certificates stored in the kubeadm-certs Secret
--config string #path to the kubeadm configuration file


## --control-plane-endpoint string #a stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain name; k8s multi-master HA is built on this option
--cri-socket string  #path of the CRI (Container Runtime Interface) socket to connect to; if empty, kubeadm tries to auto-detect it. "Only use this option if you have more than one CRI installed or a non-standard CRI socket"


--dry-run  #do not apply any changes, only print what would be done; effectively a test run.
--experimental-kustomize string #path to the kustomize patches for the static pod manifests.
--feature-gates string #a set of key=value pairs describing feature gates; the options are: IPv6DualStack=true|false (ALPHA - default=false)


## --ignore-preflight-errors strings #ignore errors raised by the preflight checks, e.g. swap; "all" ignores everything

## --image-repository string #the image registry to pull from, k8s.gcr.io by default
## --kubernetes-version string  #the k8s version to install; default stable-1 (the latest stable release)
--node-name string #set the node name

## --pod-network-cidr #the pod IP address range
## --service-cidr #the service network address range (default "10.96.0.0/12")
## --service-dns-domain string #the internal k8s domain, cluster.local by default; the DNS service (kube-dns/coredns) resolves the domain records generated under it.

--skip-certificate-key-print  #do not print the key used for certificate encryption
--skip-phases strings #phases to skip
--skip-token-print  #skip printing the token
--token  #specify the token
--token-ttl #token lifetime; default 24h, 0 means never expire
--upload-certs  #upload the certificates

#Global options:
--add-dir-header #if true, add the log directory to log message headers
--log-file string #if non-empty, use this log file
--log-file-max-size uint #maximum log file size in megabytes; default 1800, 0 means unlimited
--rootfs #the host root filesystem path, i.e. an absolute path
--skip-headers     #if true, avoid header prefixes in log messages
--skip-log-headers  #if true, avoid headers when opening log files

2.6.3: Verify the current kubeadm version:

root@k8s-master1:~# kubeadm  version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

2.6.4: Prepare the images:

root@k8s-master1:~# kubeadm config images list --kubernetes-version v1.20.5 
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

2.6.5: Download the images on the master nodes:

It is recommended to pull the images on the master nodes in advance to cut installation wait time. The default registry is Google's, which cannot be reached directly from inside China, but the images can be pre-downloaded from the Aliyun registry, avoiding k8s deployment failures caused by image-pull problems.

root@k8s-master1:~# cat images-download.sh 
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

2.6.6: Pull the images:

bash images-download.sh

(screenshot)

2.6.7: Verify the local images:

root@kubeadm-master1:~# docker images # verify the local images
REPOSITORY                                                                    TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.20.5             5384b1650507        10 months ago       118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.20.5             8d13f1db8bfb        10 months ago       47.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.20.5             d7e24aeb3b10        10 months ago       122MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.20.5             6f0c3da8c99e        10 months ago       116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0            0369cf4303ff        16 months ago       253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.7.0               bfe3a36ebd25        19 months ago       45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        23 months ago       683kB


2.7: Single-node initialization:

For test, development and other non-production environments a single master node is fine; production must use multiple master nodes to keep k8s highly available.

(screenshot)

kubeadm  init  --apiserver-advertise-address=172.31.3.201 --apiserver-bind-port=6443  --kubernetes-version=v1.20.5  --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=12345.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap

2.7.1: Single-node initialization in progress:

(screenshot)

2.7.2: Single-node initialization result:

(screenshot)

2.7.3: Allow pods to be scheduled on the master node:

# kubectl taint nodes --all node-role.kubernetes.io/master-

2.7.4: Deploy the network component:

https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/ # network add-ons supported by kubernetes

https://quay.io/repository/coreos/flannel?tab=tags # flannel image downloads

https://github.com/flannel-io/flannel # flannel github project

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# remember: the Network subnet in kube-flannel.yml must be changed to the pod CIDR used at init time
 net-conf.json: |
    { 
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

root@kubeadm-master1:~/m43# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

2.7.5: Verify pod status:

root@kubeadm-master1:~/m43# kubectl get pod -A
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-54d67798b7-2vfq9                                1/1     Running   0          43m
kube-system   coredns-54d67798b7-cgmv9                                1/1     Running   0          43m
kube-system   etcd-kubeadm-master1.example.local                      1/1     Running   0          43m
kube-system   kube-apiserver-kubeadm-master1.example.local            1/1     Running   0          43m
kube-system   kube-controller-manager-kubeadm-master1.example.local   1/1     Running   0          43m
kube-system   kube-flannel-ds-2sls4                                   1/1     Running   0          25m
kube-system   kube-flannel-ds-zm56w                                   1/1     Running   0          20m
kube-system   kube-proxy-jwll8                                        1/1     Running   0          43m
kube-system   kube-proxy-mvmm6                                        1/1     Running   0          20m
kube-system   kube-scheduler-kubeadm-master1.example.local            1/1     Running   0          43m

2.7.6: Run pods to test the k8s network:
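A sketch of the test (the same method is used in section 2.10.6; replace the placeholder with the other pod's actual address):

# kubectl run net-test1 --image=alpine sleep 360000
# kubectl run net-test2 --image=alpine sleep 360000
# kubectl get pod -o wide
# kubectl exec -it net-test1 -- ping -c 2 <IP of net-test2>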

2.8: High-availability master initialization:

keepalived provides the highly available VIP and haproxy reverse-proxies kube-apiserver, forwarding management requests for kube-apiserver to the multiple k8s masters so that the management plane is highly available.

2.8.1: HA master initialization overview:

(screenshot)

2.8.2: Command-based HA master initialization:

# kubeadm init --apiserver-advertise-address=172.31.3.201 --control-plane-endpoint=172.31.3.188 --apiserver-bind-port=6443 --kubernetes-version=v1.20.14 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=12345.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap

During initialization:

(screenshot)

Cluster initialization result:

(screenshot)

2.8.3: File-based HA master initialization:

# kubeadm config print init-defaults # print the default init configuration
# kubeadm config print init-defaults > kubeadm-init.yaml # write the default configuration to a file
# cat kubeadm-init.yaml # the modified init file is shown below

root@k8s-master1:~# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef # default token
  ttl: 24h0m0s	# token lifetime
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.3.101  # change this to the local host's address; default is 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubeadm-master1.example.local
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.31.3.188:6443 # set this to the VIP; not present by default, add it manually
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # image registry; the default is k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.14  # version number
networking:
  dnsDomain: 12345.local # local domain; the default is cluster.local
  serviceSubnet: 10.200.0.0/16  # service subnet; the default is 10.96.0.0/12
  podSubnet: 10.100.0.0/16 # pod subnet; not present by default, add it manually
scheduler: {}


root@k8s-master1:~# kubeadm init --config kubeadm-init.yaml # initialize the k8s master from the file

Initialization complete:

(screenshot)

2.9: Configure the kube-config file and the network component:

Whether the k8s environment was initialized from the command line or from a file, and whether it is a single node or a cluster, the kube-config file and the network component need to be configured.

2.9.1: The kube-config file:

The kube-config file contains the kube-apiserver address and the related authentication information

root@k8s-master1:~#  mkdir -p $HOME/.kube
root@k8s-master1:~#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
root@k8s-master1:~#  sudo chown $(id -u):$(id -g) $HOME/.kube/config


root@kubeadm-master1:~# kubectl get node
NAME                            STATUS   ROLES                  AGE   VERSION
kubeadm-master1.example.local   Ready    control-plane,master   15m   v1.20.14


Deploy the flannel network component:
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# remember: the Network subnet in kube-flannel.yml must be changed to the pod CIDR used at init time
 net-conf.json: |
    { 
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

root@k8s-master1:~# kubectl apply -f kube-flannel.yml 


Verify the master node status:
root@kubeadm-master1:~# kubectl get node
NAME                            STATUS   ROLES                  AGE   VERSION
kubeadm-master1.example.local   Ready    control-plane,master   15m   v1.20.14

2.9.2: Generate certificates on the current master for adding new control-plane nodes:

root@kubeadm-master1:~# kubeadm  init phase upload-certs --upload-certs
I0119 21:33:39.744915   11801 version.go:254] remote version is much newer: v1.23.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
779f6b221dc16f86a3f8923c9bc752f62dd25efbaed91514f6212a8ce36bd9db

2.10: Add nodes to the k8s cluster:

Join the remaining master nodes and the node nodes to the k8s cluster.

2.10.1: Master node 2:

Run the following on another master node that already has docker, kubeadm and kubelet installed:

# kubeadm join 172.31.3.188:6443 --token 4ovq8t.sv8u0rrs1gtu1odp \
    --discovery-token-ca-cert-hash sha256:fb96b095f9681105ad9efc7f43670e50095cbe730d66463d96ca84333f14f820 \
    --control-plane --certificate-key 779f6b221dc16f86a3f8923c9bc752f62dd25efbaed91514f6212a8ce36bd9db

2.10.2: Master node 3:

root@k8s-master3:~# kubeadm join 172.31.3.188:6443 --token 4ovq8t.sv8u0rrs1gtu1odp \
    --discovery-token-ca-cert-hash sha256:fb96b095f9681105ad9efc7f43670e50095cbe730d66463d96ca84333f14f820 \
    --control-plane --certificate-key 779f6b221dc16f86a3f8923c9bc752f62dd25efbaed91514f6212a8ce36bd9db
root@k8s-master1:~# kubectl  get node

2.10.3: Add the node nodes:

Every node that is to join the k8s master cluster needs docker, kubeadm and kubelet installed, so repeat those installation steps on each of them: configure the apt repository, configure the docker registry mirror, install the commands and start the kubelet service.

The join command is the one returned by kubeadm init on the master:

root@k8s-node1:~# kubeadm join 172.31.3.188:6443 --token 4ovq8t.sv8u0rrs1gtu1odp \
    --discovery-token-ca-cert-hash sha256:fb96b095f9681105ad9efc7f43670e50095cbe730d66463d96ca84333f14f820


root@k8s-node2:~# kubeadm join 172.31.3.188:6443 --token 4ovq8t.sv8u0rrs1gtu1odp \
    --discovery-token-ca-cert-hash sha256:fb96b095f9681105ad9efc7f43670e50095cbe730d66463d96ca84333f14f820

root@k8s-node3:~# kubeadm join 172.31.3.188:6443 --token 4ovq8t.sv8u0rrs1gtu1odp \
    --discovery-token-ca-cert-hash sha256:fb96b095f9681105ad9efc7f43670e50095cbe730d66463d96ca84333f14f820

Problems when adding nodes:

Problem 1:

The join hangs at the following step:

root@k8s-master3:~#   kubeadm join 172.31.3.188:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:9c3bbf0723bbd2f1a4ebe7ba770241f00fadb242b3f0ff9ec162843d99bf1d06 \
	--control-plane --certificate-key e69f037bd10b7fefe9c1b0acaaef4120c1312145f92fb8287934ee91e8115033
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
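The warning points to a cgroup driver mismatch: docker is using cgroupfs while kubelet expects systemd. One fix sketch (the same change is made in section 2.12.3.2) is to switch docker to the systemd driver and retry the join:

cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
kubeadm reset -f    # clean up the half-finished join before retrying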

Problem 2:

Certificate download fails:

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Secret "kubeadm-certs" was not found in the "kube-system" Namespace. This Secret might have expired. Please, run 
`kubeadm init phase upload-certs --upload-certs` on a control plane to generate a new one
To see the stack trace of this error execute with --v=5 or higher
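As the error message says, the uploaded certificates expire (after two hours by default). Re-upload them on an existing control-plane node, then join again with the freshly printed key:

root@kubeadm-master1:~# kubeadm init phase upload-certs --upload-certs
# then retry kubeadm join with the new --certificate-key value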

2.10.4: Verify node status:

The nodes join the master automatically, pull the images and start flannel, until they finally show up as Ready on the master.

root@kubeadm-master1:~# kubectl get node
NAME                            STATUS   ROLES                  AGE     VERSION
kubeadm-master1.example.local   Ready    control-plane,master   24m     v1.20.14
kubeadm-master2.example.local   Ready    control-plane,master   4m27s   v1.20.14
kubeadm-master3.example.local   Ready    control-plane,master   3m1s    v1.20.14
node1.example.local             Ready    <none>                 2m7s    v1.20.14
node2.example.local             Ready    <none>                 2m3s    v1.20.14
node3.example.local             Ready    <none>                 2m3s    v1.20.14

2.10.5: Verify the certificate signing requests:

root@kubeadm-master1:~# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                 CONDITION
csr-df4g8   3m      kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:4ovq8t   Approved,Issued
csr-krgtk   3m5s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:4ovq8t   Approved,Issued
csr-mwxpj   3m1s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:4ovq8t   Approved,Issued
csr-vnh2h   5m24s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:4ovq8t   Approved,Issued
csr-xcvhp   3m58s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:4ovq8t   Approved,Issued

2.10.6: Create containers and test the internal network:

Create test containers and verify that they can communicate with each other:

Note: on a single-master setup, pods must be allowed to run on the master node
#kubectl taint nodes --all node-role.kubernetes.io/master-

root@k8s-master1:~# kubectl run net-test1 --image=alpine sleep 360000
root@k8s-master1:~# kubectl run net-test2 --image=alpine sleep 360000
root@k8s-master1:~# kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE                      NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          36s   10.100.4.2   k8s-node3.example.local   <none>           <none>
net-test2   1/1     Running   0          29s   10.100.5.2   k8s-node2.example.local   <none>           <none>

2.10.7: Verify the external network:
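A sketch of the check, run from one of the test pods created above (assumes outbound traffic from the cluster is allowed):

# kubectl exec -it net-test1 -- ping -c 2 223.5.5.5        # reach an external IP
# kubectl exec -it net-test1 -- nslookup www.baidu.com     # DNS resolution via coredns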

2.11: How kubeadm init creates a k8s cluster:

https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/

2.12: Installing the latest k8s

containerd + kubernetes 1.24

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

2.12.1: Kernel tuning:

Reference: Container Runtimes | Kubernetes

Verify that the br_netfilter module is loaded by running lsmod | grep br_netfilter.

To load the module explicitly, run sudo modprobe br_netfilter.

For iptables on the Linux nodes to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration. For example:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

2.12.2: Kubernetes installation

apt install kubeadm=1.24.3-00 kubelet=1.24.3-00  kubectl=1.24.3-00
# node nodes do not need kubectl

2.12.3: Container runtime installation

Of the runtimes below (containerd, docker, CRI-O), only one needs to be installed

2.12.3.1: Installing containerd

2.12.3.1.1: Online installation:

Reference: Install Docker Engine on Ubuntu | Docker Docs

2.12.3.1.1.1: Install base packages
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
2.12.3.1.1.2: Add the official key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
2.12.3.1.1.3: Set up the repository with the following command
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
2.12.3.1.1.4: Install containerd
apt  update
apt install -y containerd.io
2.12.3.1.1.5: Modify the containerd configuration
# generate the default configuration
containerd config default |tee /etc/containerd/config.toml

# use an image registry reachable from inside China
sed -i 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g'   /etc/containerd/config.toml


# configure the systemd cgroup driver
sed -i 's@SystemdCgroup = false@SystemdCgroup = true@g' /etc/containerd/config.toml


# configure crictl so it knows containerd's socket
# cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# the configuration above mainly fixes the following error

root@master1:~# crictl  images
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
2.12.3.1.2: Offline installation:

containerd binary download

https://github.com/containerd/containerd

https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-1.6.8-linux-amd64.tar.gz

runc binary download

https://github.com/opencontainers/runc

https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64

# extract to the root directory
tar xvf cri-containerd-1.6.8-linux-amd64.tar.gz  -C /

# create the configuration directory
mkdir /etc/containerd

# create the default configuration file
containerd config default > /etc/containerd/config.toml

# copy the already-modified configuration over from another machine to save editing it again
root@master1:~# scp /etc/containerd/config.toml 172.31.3.112:/etc/containerd/
root@master1:~# scp /etc/crictl.yaml  172.31.3.112:/etc/

# copy the downloaded runc binary into /usr/local/bin
root@node2:/data# cp runc.amd64 /usr/local/bin/runc

# make it executable
root@node2:/data# chmod +x /usr/local/bin/runc

# copy the runc binary into additional directories so programs that look elsewhere can find it
root@node2:/data# cp /usr/local/bin/runc  /usr/bin/
root@node2:/data# cp /usr/local/bin/runc  /usr/local/sbin/

2.12.3.2: Installing docker

# install base packages
sudo apt-get install -y \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# add the key
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# add the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  
# install
apt update
apt install -y containerd.io docker-ce docker-ce-cli

root@master1:~# docker info
...
 Images: 0
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs   # the default right after installation; it must be changed here, because kubelet uses the systemd cgroup driver
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
 runc version: v1.1.3-0-g6724737
 init version: de40ad0
 Security Options:
...

(screenshot)

Change docker's cgroup driver to systemd

cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver":"json-file",
  "log-opts":{
    "max-size":"100m"
  },
  "storage-driver":"overlay2",
  "registry-mirrors": ["https://yy4l17b0.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn","https://registry.docker-cn.com"]
}
EOF

After the change:

root@master1:~# docker info
...
 Logging Driver: json-file
 Cgroup Driver: systemd   # the change is now visible here
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
...

Download cri-dockerd

Download: https://github.com/Mirantis/cri-dockerd

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd-0.2.5.amd64.tgz

# extract the binary
tar xvf cri-dockerd-0.2.5.amd64.tgz

Copy the extracted files into /usr/local/bin

Prepare cri-docker.service

cat /etc/systemd/system/cri-docker.service

[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --image-pull-progress-deadline=30s --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 --cri-dockerd-root-directory=/var/lib/dockershim --docker-endpoint=unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID

TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

Prepare the socket file /usr/lib/systemd/system/cri-docker.socket (optional)

cat /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target

Restart the service:

systemctl daemon-reload && systemctl restart cri-docker.service

# test it
crictl --runtime-endpoint /var/run/cri-dockerd.sock ps
root@master1:~# cat /etc/crictl.yaml
runtime-endpoint: 'unix:///var/run/cri-dockerd.sock'
image-endpoint: 'unix:///var/run/cri-dockerd.sock'
timeout: 10
debug: false
pull-image-on-create: true
disable-pull-on-run: false

Copy the three files to all nodes

root@master1:~# cat copy.sh 
#!/bin/bash
#target host list
IP="
172.31.3.102
172.31.3.103
172.31.3.111
172.31.3.112
172.31.3.113
"
for node in ${IP};do
	scp /usr/lib/systemd/system/cri-docker.socket  $node:/usr/lib/systemd/system/
	scp /etc/systemd/system/cri-docker.service $node:/etc/systemd/system/
  scp /etc/crictl.yaml  $node:/etc/
	ssh -o StrictHostKeyChecking=no  $node  'systemctl restart cri-docker'
done

2.12.3.4: Installing the CRI-O runtime:

Reference: cri-o/install.md

# add the sources
OS=xUbuntu_20.04
VERSION=1.24
echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list


# import the keys
mkdir -p /usr/share/keyrings
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg


# install
apt-get update
apt-get install cri-o cri-o-runc


# enable at boot
systemctl enable crio.service --now


# test
crictl pull busybox

Change the default subnet

sed -i  's@10.85.0.0@10.244.0.0@g'  /etc/cni/net.d/100-crio-bridge.conf

Adjust the basic configuration

root@master1:~# grep -Env '^#|^$|[[:space:]]*#|^\[' /etc/crio/crio.conf

169:cgroup_manager = "systemd"
451:pause_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"

# restart the service so the configuration takes effect
systemctl restart crio

Verify:

root@master1:~# curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info

(screenshot)

root@master1:~# cat << EOF | tee /etc/crictl.yaml
runtime-endpoint: 'unix:///var/run/crio/crio.sock'
image-endpoint: 'unix:///var/run/crio/crio.sock'
timeout: 10
debug: false
pull-image-on-create: true
disable-pull-on-run: false
EOF

2.12.3.5: Initialize k8s

With containerd as the CRI, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf needs no changes.

With docker as the CRI:

kubelet must be modified:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
...
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7 --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --containerd=unix:///var/run/cri-dockerd.sock

# restart kubelet
systemctl daemon-reload && systemctl restart kubelet

(screenshot)

If the CRI is CRI-O:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
...
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS  --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint=unix:///var/run/crio/crio.sock --runtime-request-timeout=5m

# restart kubelet
systemctl daemon-reload && systemctl restart kubelet

Run the cluster initialization:

kubeadm  init --kubernetes-version=1.24.3 --apiserver-advertise-address=172.31.3.101  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --cri-socket=unix:///run/containerd/containerd.sock  --control-plane-endpoint=172.31.3.188


# containerd: --cri-socket=unix:///run/containerd/containerd.sock  (the path can be found in the CRI configuration file, e.g. /etc/containerd/config.toml)
# docker:     --cri-socket=unix:///var/run/cri-dockerd.sock
# cri-o:      --cri-socket=unix:///var/run/crio/crio.sock
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.31.3.188:6443 --token nkspov.xjrxo1zn9v6tv3mt \
	--discovery-token-ca-cert-hash sha256:0c9b7a2d8fe0b351f4b9268d70970ff02ff14cdece62807580bd8e01ae74c4e4 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.3.188:6443 --token nkspov.xjrxo1zn9v6tv3mt \
	--discovery-token-ca-cert-hash sha256:0c9b7a2d8fe0b351f4b9268d70970ff02ff14cdece62807580bd8e01ae74c4e4 
2.12.3.6: Installing the node nodes

After installing the runtime on each node, join it:

kubeadm join 172.31.3.188:6443 --token nkspov.xjrxo1zn9v6tv3mt \
	--discovery-token-ca-cert-hash sha256:0c9b7a2d8fe0b351f4b9268d70970ff02ff14cdece62807580bd8e01ae74c4e4 

Errors when joining the master

With docker as the runtime, resolving the error reported when a master or node joins:

root@master2:~# kubeadm join 172.31.3.188:6443 --token do5c32.gwd53cqdktz4aik2 --discovery-token-ca-cert-hash sha256:1f4e47d7c2c156af0118ebc7cc9381b2be19e52c5a88e51688d272a2a53bcd41 --control-plane
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher


# the key addition is the --cri-socket=unix:///var/run/cri-dockerd.sock parameter
# master node join
kubeadm join 172.31.3.188:6443 --token do5c32.gwd53cqdktz4aik2 --discovery-token-ca-cert-hash sha256:1f4e47d7c2c156af0118ebc7cc9381b2be19e52c5a88e51688d272a2a53bcd41 --control-plane --cri-socket=unix:///var/run/cri-dockerd.sock

# node join
kubeadm join 172.31.3.188:6443 --token do5c32.gwd53cqdktz4aik2 \
	--discovery-token-ca-cert-hash sha256:1f4e47d7c2c156af0118ebc7cc9381b2be19e52c5a88e51688d272a2a53bcd41  --cri-socket=unix:///var/run/cri-dockerd.sock

Copying master1's certificates over to the new master node resolves the following problem:

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

[failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory, failure loading key for service account: couldn't load the private key file /etc/kubernetes/pki/sa.key: open /etc/kubernetes/pki/sa.key: no such file or directory, failure loading certificate for front-proxy CA: couldn't load the certificate file /etc/kubernetes/pki/front-proxy-ca.crt: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory, failure loading certificate for etcd CA: couldn't load the certificate file /etc/kubernetes/pki/etcd/ca.crt: open /etc/kubernetes/pki/etcd/ca.crt: no such file or directory]

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher



# fix


mkdir -p /etc/kubernetes/pki/etcd   # create the certificate directory on the new master node
scp  /etc/kubernetes/pki/ca*  172.31.3.103:/etc/kubernetes/pki/
scp  /etc/kubernetes/pki/sa*  172.31.3.103:/etc/kubernetes/pki/

scp  /etc/kubernetes/pki/front-proxy-ca.*  172.31.3.103:/etc/kubernetes/pki/
scp  /etc/kubernetes/pki/etcd/ca.*  172.31.3.103:/etc/kubernetes/pki/etcd/

scp  /etc/kubernetes/pki/front-proxy-client.*  172.31.3.103:/etc/kubernetes/pki/


root@master1:~# cat > copy.sh << 'EOF'
#!/bin/bash
#target host list
IP="
172.31.3.102
172.31.3.103
"
for node in ${IP};do
    ssh -o StrictHostKeyChecking=no  $node  'mkdir -p /etc/kubernetes/pki/etcd/'
	scp  /etc/kubernetes/pki/ca*  $node:/etc/kubernetes/pki/
	scp  /etc/kubernetes/pki/sa*  $node:/etc/kubernetes/pki/
    scp  /etc/kubernetes/pki/front-proxy-ca.*  $node:/etc/kubernetes/pki/
    scp  /etc/kubernetes/pki/etcd/ca.*  $node:/etc/kubernetes/pki/etcd/
    scp  /etc/kubernetes/pki/front-proxy-client.*  $node:/etc/kubernetes/pki/

done
EOF

3: Deploying the dashboard:

https://github.com/kubernetes/dashboard

3.1: Deploy dashboard v2.4.0:

root@k8s-master1:/opt# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

root@k8s-master1:/opt# mv recommended.yaml dashboard-2.4.yaml
vim dashboard-2.4.yaml
# modify the yaml so the service is reachable from outside.
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32002
  selector:
    k8s-app: kubernetes-dashboard

root@kubeadm-master1:~# kubectl apply -f dashboard-2.4.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

(screenshot)

root@kubeadm-master1:~# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.200.186.245   <none>        8000/TCP        2m11s
kubernetes-dashboard        NodePort    10.200.247.34    <none>        443:32002/TCP   2m11s
root@k8s-node3:~# ss -tnl # verify the port number on a node

Create an admin user for dashboard login:

cat admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f admin-user.yaml

3.2: Accessing the dashboard:

(screenshot)

3.3: Obtaining the login token:

root@kubeadm-master1:~# kubectl get secret -A | grep admin    # on newer versions the token secret is no longer created automatically
kubernetes-dashboard   admin-user-token-bvfz6                           kubernetes.io/service-account-token   3      2m13s

root@k8s-master1:~# kubectl describe secret  admin-user-token-bvfz6 -n kubernetes-dashboard
Name:         admin-user-token-bvfz6
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 896a6fee-3f77-43e9-818f-4d1e0d7368d5

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImNjdENmS2JaemZmMU1xdDF3WjdyUkdKcXJ4bnpHbnROVGhHT0VsSTJNbTgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWJ2Zno2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4OTZhNmZlZS0zZjc3LTQzZTktODE4Zi00ZDFlMGQ3MzY4ZDUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.HslB4gaNpM7u2fj0saa0--ybpmwhDa6nBzb1XJFLXxmo99vhHbuDX0MoEFkEEyja7xVI2WXixOPiJv7-98uOLfBytEx-AbOsFqVuEFtpwa2U7TDOMgD8_1W6fewvvpI_9k3VCzHwS8UMuxbvGlTLWZvx1QpIAX3999IU-E2u9XAimOQgJi45LI1Kl_2INoECyxbphZlSqsFojIzsx2S001IYLwXdfYUU0pfI4SMOV2iK6QE3HGphYMUR1BXe1IiLt9sdFJT_jA7Jr2QhGTMpoyTHyI5bbOggV8IsFzxXrP4gQQoDQp9VmCF-PPL8r-vhoVEOTqL6gf3Kun8-61-ipg
ca.crt:     1066 bytes

# how to get a token on newer versions
root@master1:~# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IktNNmw4SmxOa2dSSDE3ZWU2bFBWZS1Ld0JST0RyRmdmVHdfNVQ5MTNyb1kifQ.eyJhdWQiOlsiYXBpIiwiaXN0aW8tY2EiXSwiZXhwIjoxNjU4ODU0ODA4LCJpYXQiOjE2NTg4NTEyMDgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2YyIsImt1YmVybmV0ZXMuaW8iOnsibmFtZXNwYWNlIjoia3ViZXJuZXRlcy1kYXNoYm9hcmQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiYWRtaW4tdXNlciIsInVpZCI6IjQ4ODA4MmExLTM3NmItNDc2My05NWM4LTE1NTIzMGMzMDE4MyJ9fSwibmJmIjoxNjU4ODUxMjA4LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.EsX3sWNU2VqlKvEV0Nam3Z2pXTzRxUOnuCi6AQrbZIPoDNQDzd1ZHDKfGmSwcMcaA7-1h7tC7LFHWCEfp1ifIFys37ZOYax2AvkQZCUZvgtsXsT08oD9yLGRw3J6NNaLtkb_J4MDBFyEnZdan1eGQSCOa1MMLmKxhqIWOfIp-Pg9Ry_GciyfvaTQv7ukmGyIqgiW4tDMkKr9FIRQVZo5nIQ0IcMKFcs6HjwWyqCjGnnjoeIUrywCyeIJh24uDbHWZE_Bg_Q9BksvxZn1o0XS-PpUq2nE77hzxx_ScZRTNG8zIhVpL3ArX_RmLhHHLuFCNeAhryFSaCyufvxf_J_bOw

3.4: Dashboard UI:

(screenshot)

3.5: Add a token expiry time for the kubernetes-dashboard

There are many ways to do this; the simplest is to log in, open Deployments and find the kubernetes-dashboard deployment in the right-hand pane (if it does not appear, select all namespaces in the namespace selector), then edit it:

ports:
- containerPort: 8443
  protocol: TCP
args:
  - --auto-generate-certificates
  - --token-ttl=43200

# add the args entries. The unit is seconds; I set 12 hours, since logging in once a day is generally enough. (A CLI equivalent follows.)
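The same change can be made from the CLI instead of the web UI (a sketch; the deployment recreates its pod after saving):

kubectl -n kubernetes-dashboard edit deployment kubernetes-dashboard
# add "- --token-ttl=43200" under the container's args and save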

4: Test running Nginx+Tomcat:

https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/

Test-run nginx and ultimately implement dynamic/static separation

4.1: Running Nginx:

# pwd
/opt/kubdadm-yaml
root@kubeadm-master1:~# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80	# container port

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-nginx-service-label
  name: test-nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004	# host port
  selector:
    app: nginx


root@kubeadm-master1:~# kubectl apply -f nginx.yaml 
deployment.apps/nginx-deployment created
service/test-nginx-service created

root@kubeadm-master1:~# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
net-test1                           1/1     Running   1          40h
net-test2                           1/1     Running   1          40h
nginx-deployment-67dfd6c8f9-kbzg4   1/1     Running   0          44s


root@kubeadm-master1:~# kubectl get svc
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.200.0.1      <none>        443/TCP        40h
test-nginx-service   NodePort    10.200.103.35   <none>        80:30004/TCP   90s

4.2: Running tomcat:

root@kubeadm-master1:~# cat tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-tomcat-service-label
  name: test-tomcat-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30005
  selector:
    app: tomcat 

root@kubeadm-master1:~# kubectl apply -f tomcat.yaml
deployment.apps/tomcat-deployment created
service/test-tomcat-service created

Verify the tomcat web UI:

(screenshot)

4.3: Verify the pods in the dashboard:

(screenshot)

(screenshot)

4.4: Exec into a container from the dashboard:

(screenshot)

(screenshot)

Verify pod communication:

4.5: Exec into the tomcat pod and generate an app

4.5.1: Generate the app:

root@kubeadm-master1:~# kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1                            1/1     Running   2          2d16h
net-test2                            1/1     Running   2          2d16h
nginx-deployment-67dfd6c8f9-kbzg4    1/1     Running   1          24h
tomcat-deployment-6c44f58b47-pjvwg   1/1     Running   0          14m
tomcat-deployment-6c44f58b47-rks4q   1/1     Running   1          24h

4.5.2: Verify the app:

(screenshot)

4.6: Dynamic/static separation with Nginx:

4.6.1: Nginx configuration:

root@kubeadm-master1:~# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1                            1/1     Running   2          2d16h
net-test2                            1/1     Running   2          2d16h
nginx-deployment-67dfd6c8f9-kbzg4    1/1     Running   1          24h
tomcat-deployment-6c44f58b47-pjvwg   1/1     Running   0          17m
tomcat-deployment-6c44f58b47-rks4q   1/1     Running   1          24h



root@kubeadm-master1:~# kubectl get svc
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes            ClusterIP   10.200.0.1      <none>        443/TCP        2d17h
test-nginx-service    NodePort    10.200.103.35   <none>        80:30004/TCP   24h
test-tomcat-service   NodePort    10.200.22.167   <none>        80:30005/TCP   24h


Exec into the nginx Pod:
# kubectl exec -it nginx-deployment-67dfd6c8f9-kbzg4 bash
root@nginx-deployment-67dfd6c8f9-kbzg4:/# cat /etc/issue
Debian GNU/Linux 10 \n \l

Update the package sources and install basic commands
# apt update
# apt install procps vim iputils-ping net-tools curl

Test service name resolution
# kubectl exec -it nginx-deployment-67dfd6c8f9-kbzg4 bash

root@nginx-deployment-67dfd6c8f9-kbzg4:/# ping test-tomcat-service
PING test-tomcat-service.default.svc.j12345.local (10.200.22.167) 56(84) bytes of data.


Test access from the nginx Pod to the tomcat Pod via its service domain name:
root@nginx-deployment-67dfd6c8f9-kbzg4:/# curl test-tomcat-service/m43/index.jsp
tomcat m43 app v2 

Modify the Nginx configuration to implement the dynamic/static split: as soon as Nginx receives a URI under /m43, it forwards the request to tomcat
root@nginx-deployment-67dfd6c8f9-kbzg4:/# vim /etc/nginx/conf.d/default.conf
location /m43 {
	proxy_pass http://test-tomcat-service;
}

Test the nginx configuration
root@nginx-deployment-67dfd6c8f9-kbzg4:/# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reload the configuration
root@nginx-deployment-67dfd6c8f9-kbzg4:/# nginx -s reload
2020/09/17 07:27:53 [notice] 559#559: signal process started

4.6.2: Test the web page:

(screenshot)

4.7: High-availability reverse proxying with HAProxy:

Implement a highly available reverse proxy based on haproxy and keepalived, and reach the business Pods running in the kubernetes cluster through it. The k8s reverse-proxy environment can be reused for this; production environments need dedicated reverse-proxy servers.

4.7.1: keepalived VIP configuration:

Configure a dedicated VIP for the services in k8s

vrrp_instance VI_1 {
	state MASTER
	interface eth0
	garp_master_delay 10
	smtp_alert
	virtual_router_id 51
	priority 100
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass 1111
	}
	virtual_ipaddress {
		172.31.9.188 dev eth0 label eth0:1 
		172.31.9.189 dev eth0 label eth0:2  # a dedicated VIP for the business traffic
	}
}

4.7.2: HAProxy configuration:

listen k8s-apiserver-6443
	bind 172.31.3.188:6443
	mode tcp
	balance source
	server 172.31.3.201 172.31.3.201:6443 check inter 3s fall 3 rise 5
	server 172.31.3.202 172.31.3.202:6443 check inter 3s fall 3 rise 5
	server 172.31.3.203 172.31.3.203:6443 check inter 3s fall 3 rise 5


listen 12345-m43-80
    bind 172.31.3.188:80  # the dedicated VIP configured above can be used here.
    mode tcp
    balance source
    server 172.31.3.201 172.31.3.201:30004 check inter 3s fall 3 rise 5
    server 172.31.3.202 172.31.3.202:30004 check inter 3s fall 3 rise 5
    server 172.31.3.203 172.31.3.203:30004 check inter 3s fall 3 rise 5

4.7.3: Test access through the VIP:

(screenshot)

(screenshot)

5: k8s cluster management:

5.1: Token management:

# kubeadm   token  --help
  create   #create a token, valid for 24 hours by default
  delete   #delete a token
  generate #generate and print a token without creating it on the server, i.e. for use in other operations
  list     #list all tokens on the server

5.2: The reset command:

# kubeadm   reset  # revert what kubeadm did to the host (cleanup sketch below)
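kubeadm reset does not clean up everything; as its own output notes, the CNI configuration and iptables rules have to be removed by hand for a truly clean host (a sketch, verify the paths before running):

# kubeadm reset -f
# rm -rf /etc/cni/net.d $HOME/.kube/config
# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X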

5.3: Check certificate expiry:

root@kubeadm-master1:~# kubeadm alpha certs check-expiration  # older versions used this command
root@kubeadm-master1:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 19, 2023 13:16 UTC   362d            ca                      no  
apiserver                  Jan 19, 2023 13:16 UTC   362d            ca                      no  
apiserver-etcd-client      Jan 19, 2023 13:16 UTC   362d            etcd-ca                 no  
apiserver-kubelet-client   Jan 19, 2023 13:16 UTC   362d            ca                      no  
controller-manager.conf    Jan 19, 2023 13:16 UTC   362d            ca                      no  
etcd-healthcheck-client    Jan 19, 2023 13:16 UTC   362d            etcd-ca                 no  
etcd-peer                  Jan 19, 2023 13:16 UTC   362d            etcd-ca                 no  
etcd-server                Jan 19, 2023 13:16 UTC   362d            etcd-ca                 no  
front-proxy-client         Jan 19, 2023 13:16 UTC   362d            front-proxy-ca          no  
scheduler.conf             Jan 19, 2023 13:16 UTC   362d            ca                      no  

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 17, 2032 13:16 UTC   9y              no  
etcd-ca                 Jan 17, 2032 13:16 UTC   9y              no  
front-proxy-ca          Jan 17, 2032 13:16 UTC   9y              no   

5.4: Renew the certificates:

root@k8s-master1:~# kubeadm alpha certs renew --help  # old-version command
root@k8s-master1:~# kubeadm alpha certs renew all  # old-version command

root@kubeadm-master1:~# kubeadm  certs renew --help 
root@kubeadm-master1:~# kubeadm  certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
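One hedged way to restart those static-pod components is to move their manifests out of the watched directory and back; kubelet stops and recreates the containers:

mkdir -p /tmp/manifests
mv /etc/kubernetes/manifests/*.yaml /tmp/manifests/
sleep 30    # give kubelet time to stop the control-plane containers
mv /tmp/manifests/*.yaml /etc/kubernetes/manifests/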

6: k8s upgrade:

To upgrade a k8s cluster you must first upgrade kubeadm itself to the target k8s version; in other words, kubeadm is the admission ticket for a k8s upgrade

6.1: Upgrade preparation:

Upgrade the components on all k8s master nodes, bringing the management-side services kube-controller-manager, kube-apiserver, kube-scheduler and kube-proxy to the new version.

6.1.1: Verify the current k8s master version:

root@kubeadm-master1:~# kubeadm  version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.14", GitCommit:"57a3aa3f13699cf3db9c52d228c18db94fa81876", GitTreeState:"clean", BuildDate:"2021-12-15T14:51:22Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

6.1.2: Verify the current k8s node versions:

root@kubeadm-master1:~# kubectl  get node
NAME                            STATUS   ROLES                  AGE     VERSION
kubeadm-master1.example.local   Ready    control-plane,master   2d17h   v1.20.14
kubeadm-master2.example.local   Ready    control-plane,master   2d17h   v1.20.14
kubeadm-master3.example.local   Ready    control-plane,master   2d17h   v1.20.14
node1.example.local             Ready    <none>                 2d17h   v1.20.14
node2.example.local             Ready    <none>                 2d17h   v1.20.14
node3.example.local             Ready    <none>                 2d17h   v1.20.14

6.2: Upgrade the k8s master node version:

Upgrade the version on each k8s master node.

6.2.1: Install the specified new kubeadm version on each master:

root@k8s-master1:~# apt-cache madison kubeadm  # list the available k8s versions
root@k8s-master1:~# apt-get install kubeadm=1.21.9-00  # install the new kubeadm version
root@k8s-master2:~# apt-get install kubeadm=1.21.9-00  # install the new kubeadm version
root@k8s-master3:~# apt-get install kubeadm=1.21.9-00  # install the new kubeadm version
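
If the packages were pinned with apt-mark hold (a common practice, though not shown earlier in this document), unhold kubeadm before the upgrade and re-hold it afterwards:

root@k8s-master1:~# apt-mark unhold kubeadm
root@k8s-master1:~# apt-get install kubeadm=1.21.9-00
root@k8s-master1:~# apt-mark hold kubeadm  # keep unattended upgrades from moving kubeadm again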

 
root@kubeadm-master1:~# kubeadm version  # verify the kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9", GitCommit:"b631974d68ac5045e076c86a5c66fba6f128dc72", GitTreeState:"clean", BuildDate:"2022-01-19T17:50:04Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}

6.2.2: kubeadm upgrade command help:

root@kubeadm-master1:~# kubeadm  upgrade --help
Upgrade your cluster smoothly to a newer version with this command

Usage:
  kubeadm upgrade [flags]
  kubeadm upgrade [command]

Available Commands:
  apply       # Upgrade the Kubernetes cluster to the specified version
  diff        # Show what differences would be applied to the existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        # Upgrade commands for a node in the cluster
  plan        # Check which versions are available to upgrade to and validate whether the current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter

Flags:
  -h, --help   help for upgrade

Global Flags:
      --add-dir-header           If true, adds the file directory to the header of the log messages
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --one-output               If true, only write logs to their native severity level (vs also writing to each lower severity level)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm upgrade [command] --help" for more information about a command.

6.2.3: Review the upgrade plan:

# kubeadm upgrade plan  # review the upgrade plan

image-20220122153745540

6.2.4: Perform the version upgrade:

kubeadm upgrade apply v1.21.9

root@k8s-master1:~# kubeadm upgrade apply v1.21.9
root@k8s-master2:~# kubeadm upgrade apply v1.21.9
root@k8s-master3:~# kubeadm upgrade apply v1.21.9
Figure 1: confirming the upgrade

image-20220122154150399

Figure 2: upgrade complete

image-20220122154342974
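
For reference, the upstream kubeadm procedure runs kubeadm upgrade apply only on the first control-plane node and uses kubeadm upgrade node on the remaining masters; running apply on every master, as above, also works and simply re-validates the same target version on each node:

root@k8s-master2:~# kubeadm upgrade node
root@k8s-master3:~# kubeadm upgrade node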

6.2.5: Verify the images:

image-20220122154424107
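
The image list that the new version expects can also be printed with kubeadm itself (the exact tags shown will depend on the configured image repository):

root@kubeadm-master1:~# kubeadm config images list --kubernetes-version v1.21.9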

6.3: Upgrade the k8s node version:

Upgrade the client components kubectl and kubelet.

6.3.1: Verify the current node version information:

The worker nodes are still on the old 1.20.14 version:

root@kubeadm-master1:~# kubectl get node
NAME                            STATUS   ROLES                  AGE     VERSION
kubeadm-master1.example.local   Ready    control-plane,master   2d18h   v1.20.14
kubeadm-master2.example.local   Ready    control-plane,master   2d18h   v1.20.14
kubeadm-master3.example.local   Ready    control-plane,master   2d18h   v1.20.14
node1.example.local             Ready    <none>                 2d18h   v1.20.14
node2.example.local             Ready    <none>                 2d18h   v1.20.14
node3.example.local             Ready    <none>                 2d18h   v1.20.14
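
Before upgrading each worker node (sections 6.3.2 and 6.3.3 below), the upstream procedure drains it so its workloads are rescheduled elsewhere, and uncordons it once the upgrade is done; a minimal sketch using this cluster's node names:

root@kubeadm-master1:~# kubectl drain node1.example.local --ignore-daemonsets  # evict pods before upgrading; add --delete-emptydir-data if pods use emptyDir volumes
# ...upgrade the node packages as shown below...
root@kubeadm-master1:~# kubectl uncordon node1.example.local  # allow scheduling on the node again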

6.3.2: Upgrade each node's configuration:

root@k8s-master1:~# kubeadm upgrade node --kubelet-version 1.21.14  # earlier versions required this command; it is no longer needed (the sample output below is from an older v1.19 run)
Flag --kubelet-version has been deprecated, This flag is deprecated and will be removed in a future version.
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.19.2"...
Static pod: kube-apiserver-k8s-master1.example.local hash: bd718897c3e739aa215bbf106b99af0b
Static pod: kube-controller-manager-k8s-master1.example.local hash: 76df66f8d67a49bceeef05e116f41eb9
Static pod: kube-scheduler-k8s-master1.example.local hash: 8ac5a17f7182152ddfa543b14ad116ab
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version "3.4.13-0" is not newer than the currently installed "3.4.13-0". Skipping etcd upgrade
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests871480007"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade] The control plane instance for this node was successfully updated!
[upgrade] Using kubelet config version 1.19.2, while kubernetes-version is v1.19.2
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.19" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

# on newer versions, the k8s master nodes can be upgraded directly via apt
root@kubeadm-master1:~# apt install kubelet=1.21.9-00 kubeadm=1.21.9-00 kubectl=1.21.9-00
root@kubeadm-master2:~# apt install kubelet=1.21.9-00 kubeadm=1.21.9-00 kubectl=1.21.9-00
root@kubeadm-master3:~# apt install kubelet=1.21.9-00 kubeadm=1.21.9-00 kubectl=1.21.9-00
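
After the packages are upgraded, restart kubelet on each master so the new binary takes effect:

root@kubeadm-master1:~# systemctl daemon-reload && systemctl restart kubelet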

6.3.3: Upgrade the kubelet packages on each node:

root@k8s-node1:~# apt install kubelet=1.21.9-00 kubeadm=1.21.9-00 kubectl=1.21.9-00
root@k8s-node2:~# apt install kubelet=1.21.9-00 kubeadm=1.21.9-00 kubectl=1.21.9-00
root@k8s-node3:~# apt install kubelet=1.21.9-00 kubeadm=1.21.9-00 kubectl=1.21.9-00
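
Likewise restart kubelet on each node after the package upgrade, and uncordon the node if it was drained beforehand (see the note in 6.3.1):

root@k8s-node1:~# systemctl daemon-reload && systemctl restart kubelet
root@kubeadm-master1:~# kubectl uncordon node1.example.local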

6.3.4: Verify the current k8s version:

root@kubeadm-master1:~# kubectl get nodes
NAME                            STATUS   ROLES                  AGE     VERSION
kubeadm-master1.example.local   Ready    control-plane,master   2d18h   v1.21.9
kubeadm-master2.example.local   Ready    control-plane,master   2d18h   v1.21.9
kubeadm-master3.example.local   Ready    control-plane,master   2d18h   v1.21.9
node1.example.local             Ready    <none>                 2d18h   v1.21.9
node2.example.local             Ready    <none>                 2d18h   v1.21.9
node3.example.local             Ready    <none>                 2d18h   v1.21.9
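
As a final sanity check, the client and server versions can be compared as well (the --short flag is available in this kubectl release):

root@kubeadm-master1:~# kubectl version --short
Client Version: v1.21.9
Server Version: v1.21.9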

