Kubernetes 1.20.4 HA Cluster Setup on Ubuntu & Rancher HA Deployment


This article walks through building a Kubernetes 1.20.4 HA cluster on Ubuntu and then deploying Rancher in HA mode on top of it.

环境

Ubuntu: 20.04

Docker: 19.03

Kubernetes: v1.20.4

Rancher: v2.5.8

Version selection

k8s version selection

If you plan to use Rancher, first check the official support matrix for which Kubernetes versions each Rancher release supports:

https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.5.8/


As the matrix shows, the newest Kubernetes version supported by Rancher v2.5.8 is v1.20.4.

Docker version selection

https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG

Open the changelog for the chosen k8s version and check which Docker versions it supports.

System initialization

Set the hostname on each machine and add mutual resolution entries to the hosts file

hostnamectl  set-hostname  k8s-01
vim /etc/hosts


192.168.70.20 k8s-01
192.168.70.21 k8s-02
192.168.70.22 k8s-03

Add an entry for each node's IP, on every machine.

Disable the firewall. The original used firewalld, but stock Ubuntu ships ufw instead:

sudo ufw disable

Install required tools (ipvsadm is needed for kube-proxy's IPVS mode; see the module setup below)

sudo apt-get install -y  ipvsadm  iptables  wget  vim  net-tools
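kube-proxy's IPVS mode (enabled later in kubeadm-config.yaml) and the bridge-nf sysctls below both rely on kernel modules that may not be loaded by default. A minimal sketch of loading them now and on every boot; the module list follows the upstream kube-proxy IPVS README (on kernels older than 4.19, use nf_conntrack_ipv4 instead of nf_conntrack):

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# load them immediately for the current boot
modprobe -a br_netfilter ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack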

Disable swap

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Disable SELinux. Note that stock Ubuntu uses AppArmor rather than SELinux, so on a default install this step can be skipped; it only applies if SELinux was installed:

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Tune kernel parameters for Kubernetes

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0 # removed in Linux 4.12+; delete this line if sysctl reports an unknown key
vm.swappiness=0 # forbid swap usage; allow it only when the system is about to OOM
vm.overcommit_memory=1 # do not check whether enough physical memory is available
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf

Adjust the system time zone

Keep the clocks on all machines consistent.

# set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# keep the hardware clock in UTC
timedatectl set-local-rtc 0
# restart services that depend on the system time
systemctl restart rsyslog
systemctl restart cron

Configure rsyslogd and systemd-journald

mkdir -p /etc/systemd

cat > journald.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent

# compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# cap total journal disk usage at 10G
SystemMaxUse=10G

# cap each journal file at 200M
SystemMaxFileSize=200M

# retain logs for 2 weeks
MaxRetentionSec=2week

# do not forward logs to syslog
ForwardToSyslog=no
EOF

mv journald.conf /etc/systemd/
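For Storage=persistent to take effect immediately, the journal directory must exist and journald must be restarted; a small follow-up, assuming the default /var/log/journal location:

mkdir -p /var/log/journal
systemctl restart systemd-journald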

Docker installation

Update the apt package index

sudo apt-get update

Install the packages that let apt fetch repositories over HTTPS:

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

Add Docker's official GPG key:

Without this step, apt update will fail for this repository.

curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

Set up the stable repository with the following command

sudo add-apt-repository \
   "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/ \
  $(lsb_release -cs) \
  stable"

Update the apt package index again.

sudo apt-get update

Install the latest version

sudo apt-get install docker-ce docker-ce-cli containerd.io

Install a specific version

apt-cache madison docker-ce


In the output, a string like 5:20.10.7~3-0~ubuntu-focal is the <VERSION_STRING>

sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io
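kubeadm's kubelet defaults to the systemd cgroup driver while Docker defaults to cgroupfs, which kubeadm's preflight checks warn about. A minimal sketch of aligning Docker with the kubelet; the registry-mirrors entry is an optional assumption for faster pulls from mainland China:

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker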

Kubernetes cluster setup

High-availability architecture

[Diagram: HA topology with a load balancer in front of the control-plane nodes]

The load balancer uses the officially recommended HAProxy + Keepalived approach:

https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing

HAProxy installation

HAProxy is deployed with Docker and must run on every master node.

Browse the haproxy tags on Docker Hub and pull a suitable image:

https://registry.hub.docker.com/_/haproxy?tab=tags&page=1&ordering=last_updated

Create the haproxy.cfg configuration file

# /opt/k8s/haproxy/haproxy.cfg (mounted into the container by the startup script below)
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

# stats page, served at /dbs
listen  admin_stats
    bind  0.0.0.0:7050
    mode        http
    stats uri   /dbs
    stats realm     Global\ statistics
    stats auth  admin:123456
    
#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:${APISERVER_DEST_PORT}
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
# note: while bootstrapping, keep only the first master's server line here; add the others back once all control-plane nodes have joined
#---------------------------------------------------------------------
backend apiserver
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server  k8s-01 ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check
        server  k8s-02 ${HOST2_ADDRESS}:${APISERVER_SRC_PORT} check
        server  k8s-03 ${HOST3_ADDRESS}:${APISERVER_SRC_PORT} check

Write a startup script

The script assumes haproxy.cfg is in the /opt/k8s/haproxy directory.

vim start-haproxy.sh 


#!/bin/bash
HOST1_ADDRESS=192.168.70.20
HOST2_ADDRESS=192.168.70.21
HOST3_ADDRESS=192.168.70.22
APISERVER_SRC_PORT=6443
APISERVER_DEST_PORT=6444

docker run -d --restart=always --name HAProxy-K8S -p 6444:6444  -p 7050:7050 \
        -e HOST1_ADDRESS=$HOST1_ADDRESS \
        -e HOST2_ADDRESS=$HOST2_ADDRESS \
        -e HOST3_ADDRESS=$HOST3_ADDRESS \
        -e APISERVER_DEST_PORT=$APISERVER_DEST_PORT \
        -e APISERVER_SRC_PORT=$APISERVER_SRC_PORT \
        -v /opt/k8s/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg \
        haproxy

Start

chmod +x start-haproxy.sh

sudo ./start-haproxy.sh

Verify

In a browser, open http://<node-ip>:7050/dbs (matching the stats uri above); username admin, password 123456.
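Before the apiservers exist, the stats page is the main check; you can also confirm the container is running and the ports are listening:

docker ps | grep HAProxy-K8S
ss -lntp | grep -E '6444|7050'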

Useful links

Configuration guide: https://blog.csdn.net/get_set/article/details/107998958

Keepalived installation

Keepalived must be installed on every master node.

Install

sudo apt install -y keepalived

Create the configuration file

cat >/etc/keepalived/keepalived.conf<<"EOF"
! Configuration File for keepalived
global_defs {
   # you may use this node's hostname here
   router_id LVS_DEVEL
   script_user root
   enable_script_security
}
vrrp_script chk_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   # check interval in seconds
   interval 3
   # priority penalty when the check fails
   weight -2
   # consecutive failures before the node is marked down
   fall 3
}
vrrp_instance VI_1 {
   # exactly one master node is MASTER; all the others are BACKUP
   state MASTER
   # your NIC name, as shown by ifconfig / ip a
   interface eth0
   # this node's own IP address
   mcast_src_ip 192.168.70.20
   # must be identical on every node of this VRRP instance,
   # and unique among VRRP groups on the same network segment
   virtual_router_id 51
   # use a lower priority on the BACKUP nodes
   priority 101
   advert_int 2
   authentication {
       auth_type PASS
       auth_pass 123456
   }
   # an unused IP in the same subnet as the nodes; this becomes the apiserver VIP
   virtual_ipaddress {
       192.168.70.100
   }
   # comment this block out until the cluster is up
   track_script {
      chk_apiserver
   }
}
EOF
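The config above references /etc/keepalived/check_apiserver.sh, which the original never shows. Here is a sketch adapted from the kubeadm HA-considerations guide, assuming the HAProxy port 6444 and the VIP 192.168.70.100 used elsewhere in this article:

cat >/etc/keepalived/check_apiserver.sh<<"EOF"
#!/bin/sh
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}
# fail if the local HAProxy front end does not answer
curl --silent --max-time 2 --insecure https://localhost:6444/ -o /dev/null || errorExit "Error GET https://localhost:6444/"
# if this node currently holds the VIP, also check through the VIP
if ip addr | grep -q 192.168.70.100; then
    curl --silent --max-time 2 --insecure https://192.168.70.100:6444/ -o /dev/null || errorExit "Error GET https://192.168.70.100:6444/"
fi
EOF
chmod +x /etc/keepalived/check_apiserver.sh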

Start

systemctl enable keepalived --now

Verify

ip a


Look for the configured VIP under your NIC; if it appears, the configuration has taken effect, otherwise start troubleshooting.
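A quick failover sanity check, assuming k8s-01 currently holds the VIP:

# on the current MASTER node
sudo systemctl stop keepalived
# on a BACKUP node, the VIP should appear within a few seconds
ip a | grep 192.168.70.100
# restore the original state afterwards
sudo systemctl start keepalived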

Useful links

Configuration: https://www.keepalived.org/manpage.html

How it works: https://www.cnblogs.com/rexcheny/p/10778567.html

k8s cluster installation

Add the Aliyun apt source

sudo apt-add-repository "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg   | sudo apt-key add -

sudo apt-get update

Install kubeadm, kubectl and kubelet

Install the latest version

sudo apt  -y install  kubeadm kubectl kubelet

systemctl enable kubelet.service

Install a specific version

apt-cache madison kubeadm

sudo apt -y install  kubeadm=<VERSION_STRING> kubectl=<VERSION_STRING> kubelet=<VERSION_STRING>

systemctl enable kubelet.service
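For this article's v1.20.4 target, the pinned install would look like the line below; the 1.20.4-00 revision string is an assumption, so confirm the exact string with apt-cache madison first:

sudo apt -y install kubeadm=1.20.4-00 kubectl=1.20.4-00 kubelet=1.20.4-00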

List the images required by the target k8s version

kubeadm config images list --kubernetes-version=v1.20.4


Download the images

k8s.gcr.io is unreachable from mainland China without a proxy, so pull from the Aliyun mirror instead; all master nodes need these images.

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.4
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.4
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.4
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.20.4
docker pull registry.aliyuncs.com/google_containers/pause:3.2
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull coredns/coredns:1.7.0

CoreDNS is pulled from Docker Hub because the Aliyun mirror does not carry it; afterwards, retag the coredns image:

docker images|grep  coredns

docker tag <coredns-image-id> registry.aliyuncs.com/google_containers/coredns:1.7.0

Generate the kubeadm configuration file

kubeadm config print init-defaults > kubeadm-config.yaml

Edit the generated kubeadm-config.yaml as follows; the comments mark the lines to change:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # change to this node's IP address
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # change to this node's hostname
  name: k8s-02
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
# not present in the default output; add these two lines manually for an HA cluster
# the IP is the keepalived VIP and the port is the one HAProxy listens on
controlPlaneEndpoint: 192.168.70.100:6444
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.4
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # add this line manually; it is the pod subnet Flannel expects
  podSubnet: 10.244.0.0/16
scheduler: {}
---
# the block below enables kube-proxy's ipvs mode; its schema varies between versions
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

IPVS configuration reference

https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md

Initialize the cluster

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

On success, kubeadm prints the admin setup instructions and the join commands used below.


The first master node is now up. Start by running:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, as the root user:

 export KUBECONFIG=/etc/kubernetes/admin.conf

To join the remaining masters, run the command below on each of them. Before doing so, make sure the HAProxy backend on the first master lists only that first server (see the note in haproxy.cfg above):

 kubeadm join 192.168.70.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:db5964c163eaf5a280616ab0736dae7056293855a6d1811d78496e070bc94202 \
    --control-plane --certificate-key 7f819f2d00cfdf75e8318a1dc40bc41dbf2540084742a5fa31c68d49316ff390

To join worker nodes, run:

kubeadm join 192.168.70.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:db5964c163eaf5a280616ab0736dae7056293855a6d1811d78496e070bc94202 
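The bootstrap token expires after 24 hours and the certificates uploaded by --upload-certs after 2 hours, so nodes joined later need fresh credentials; the standard kubeadm commands for regenerating them:

# print a fresh worker join command with a new token
kubeadm token create --print-join-command
# re-upload the control-plane certificates and print a new key for --certificate-key
kubeadm init phase upload-certs --upload-certs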

Once every node has joined, run

kubectl get nodes

The nodes will show NotReady until a CNI plugin is deployed, so deploy Flannel next.

Flannel deployment

Project page: https://github.com/flannel-io/flannel

Deploy

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the network is unreliable, download kube-flannel.yml first and apply the local copy:

kubectl apply -f  kube-flannel.yml

After the deployment finishes:

kubectl get nodes
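All nodes should now report Ready; if some do not, inspecting the flannel and coredns pods is the usual first step:

kubectl get pods -n kube-system -o wide
# watch until every node flips to Ready
kubectl get nodes -w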

Rancher deployment

helm installation

https://helm.sh/zh/docs/intro/install/

ingress-controller installation

Docs: https://kubernetes.github.io/ingress-nginx/deploy/

GitHub: https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/index.md

Run the installation command from the Bare-metal section, as sketched below.
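For reference, at the time of writing the Bare-metal install was a single manifest apply along these lines; the controller version in the URL is an assumption, so take the current command from the deploy page:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml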

Rancher installation

Chinese docs: http://docs.rancher.cn/rancher2.5/

Install guide: http://docs.rancher.cn/docs/rancher2.5/installation/install-rancher-on-k8s/_index

  • latest: recommended for trying out new features.
  • stable: recommended for production. (preferred)
  • alpha: an experimental preview of upcoming releases.

Pick a channel and add the corresponding chart repository (a concrete example follows the command):

helm repo add rancher-<CHART_REPO> http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/<CHART_REPO>
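For example, with the stable channel (the one used by the helm install later in this article):

helm repo add rancher-stable http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/stable
helm repo update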

Create a namespace for Rancher

kubectl create namespace cattle-system

Create certificates

Create a self-signed certificate using the script from the Rancher docs:

http://docs.rancher.cn/docs/rancher2.5/installation/resources/advanced/self-signed-ssl/_index/

#!/bin/bash -e

help ()
{
    echo  ' ================================================================ '
    echo  ' --ssl-domain: the main domain for the certificate; defaults to www.rancher.local if unset; can be omitted when the server is accessed by IP;'
    echo  ' --ssl-trusted-ip: certificates normally only trust domain names; to access the server by IP, add extension IPs, comma separated;'
    echo  ' --ssl-trusted-domain: to allow access from additional domains, add extension domains (SSL_TRUSTED_DOMAIN), comma separated;'
    echo  ' --ssl-size: key size in bits, default 2048;'
    echo  ' --ssl-cn: country code (2-letter code), default CN;'
    echo  ' usage example:'
    echo  ' ./create_self-signed-cert.sh --ssl-domain=www.test.com --ssl-trusted-domain=www.test2.com \ '
    echo  ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=3650'
    echo  ' ================================================================'
}

case "$1" in
    -h|--help) help; exit;;
esac

if [[ $1 == '' ]];then
    help;
    exit;
fi

CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
    key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
    value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
    case "$key" in
        --ssl-domain) SSL_DOMAIN=$value ;;
        --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
        --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
        --ssl-size) SSL_SIZE=$value ;;
        --ssl-date) SSL_DATE=$value ;;
        --ca-date) CA_DATE=$value ;;
        --ssl-cn) CN=$value ;;
    esac
done

# CA settings
CA_DATE=${CA_DATE:-3650}
CA_KEY=${CA_KEY:-cakey.pem}
CA_CERT=${CA_CERT:-cacerts.pem}
CA_DOMAIN=cattle-ca

# SSL settings
SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
SSL_DOMAIN=${SSL_DOMAIN:-'www.rancher.local'}
SSL_DATE=${SSL_DATE:-3650}
SSL_SIZE=${SSL_SIZE:-2048}

## country code (2-letter code), default CN;
CN=${CN:-CN}

SSL_KEY=$SSL_DOMAIN.key
SSL_CSR=$SSL_DOMAIN.csr
SSL_CERT=$SSL_DOMAIN.crt

echo -e "\033[32m ---------------------------- \033[0m"
echo -e "\033[32m       | 生成 SSL Cert |       \033[0m"
echo -e "\033[32m ---------------------------- \033[0m"

if [[ -e ./${CA_KEY} ]]; then
    echo -e "\033[32m ====> 1. 发现已存在CA私钥,备份"${CA_KEY}"为"${CA_KEY}"-bak,然后重新创建 \033[0m"
    mv ${CA_KEY} "${CA_KEY}"-bak
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
else
    echo -e "\033[32m ====> 1. 生成新的CA私钥 ${CA_KEY} \033[0m"
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
fi

if [[ -e ./${CA_CERT} ]]; then
    echo -e "\033[32m ====> 2. 发现已存在CA证书,先备份"${CA_CERT}"为"${CA_CERT}"-bak,然后重新创建 \033[0m"
    mv ${CA_CERT} "${CA_CERT}"-bak
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
else
    echo -e "\033[32m ====> 2. 生成新的CA证书 ${CA_CERT} \033[0m"
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
fi

echo -e "\033[32m ====> 3. 生成Openssl配置文件 ${SSL_CONFIG} \033[0m"
cat > ${SSL_CONFIG} <<EOM
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOM

if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} ]]; then
    cat >> ${SSL_CONFIG} <<EOM
subjectAltName = @alt_names
[alt_names]
EOM
    IFS=","
    dns=(${SSL_TRUSTED_DOMAIN})
    dns+=(${SSL_DOMAIN})
    for i in "${!dns[@]}"; do
      echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
    done

    if [[ -n ${SSL_TRUSTED_IP} ]]; then
        ip=(${SSL_TRUSTED_IP})
        for i in "${!ip[@]}"; do
          echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
        done
    fi
fi

echo -e "\033[32m ====> 4. 生成服务SSL KEY ${SSL_KEY} \033[0m"
openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}

echo -e "\033[32m ====> 5. 生成服务SSL CSR ${SSL_CSR} \033[0m"
openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}

echo -e "\033[32m ====> 6. 生成服务SSL CERT ${SSL_CERT} \033[0m"
openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
    -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
    -days ${SSL_DATE} -extensions v3_req \
    -extfile ${SSL_CONFIG}

echo -e "\033[32m ====> 7. 证书制作完成 \033[0m"
echo
echo -e "\033[32m ====> 8. 以YAML格式输出结果 \033[0m"
echo "----------------------------------------------------------"
echo "ca_key: |"
cat $CA_KEY | sed 's/^/  /'
echo
echo "ca_cert: |"
cat $CA_CERT | sed 's/^/  /'
echo
echo "ssl_key: |"
cat $SSL_KEY | sed 's/^/  /'
echo
echo "ssl_csr: |"
cat $SSL_CSR | sed 's/^/  /'
echo
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo

echo -e "\033[32m ====> 9. 附加CA证书到Cert文件 \033[0m"
cat ${CA_CERT} >> ${SSL_CERT}
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo

echo -e "\033[32m ====> 10. 重命名服务证书 \033[0m"
echo "cp ${SSL_DOMAIN}.key tls.key"
cp ${SSL_DOMAIN}.key tls.key
echo "cp ${SSL_DOMAIN}.crt tls.crt"
cp ${SSL_DOMAIN}.crt tls.crt

Generate the certificates (with the script above saved as create_self-signed-cert.sh):

./create_self-signed-cert.sh --ssl-domain=rancher.test.cn --ssl-trusted-domain=rancher.test.cn --ssl-trusted-ip=192.168.20.101,192.168.21.30,192.168.21.31 --ssl-size=2048 --ssl-date=3650

Create the Kubernetes secrets from the generated files:

kubectl -n cattle-system create secret tls tls-rancher-ingress   --cert=tls.crt   --key=tls.key
kubectl -n cattle-system create secret generic tls-ca   --from-file=cacerts.pem=./cacerts.pem

Install Rancher with helm

The hostname must match the certificate domain generated above:

helm install rancher rancher-stable/rancher   --namespace cattle-system   --set hostname=rancher.test.cn   --set ingress.tls.source=secret   --set privateCA=true

Verify

kubectl -n cattle-system rollout status deploy/rancher

If the message below appears (successfully rolled out), the rollout succeeded; otherwise keep waiting:

deployment "rancher" successfully rolled out

Check the deployment status:

kubectl -n cattle-system get deploy rancher

Output like the following indicates a successful installation:

NAME      READY   UP-TO-DATE   AVAILABLE   AGE
rancher   3/3     3            3           2d1h

UI access

Normally you just map the domain to a node IP in your local hosts file and open the domain in a browser:

https://<domain>
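For example, assuming you point the domain at any cluster node (or at the keepalived VIP):

# on the machine running the browser; on Windows edit C:\Windows\System32\drivers\etc\hosts instead
echo "192.168.70.20 rancher.test.cn" | sudo tee -a /etc/hosts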

If that does not work:

kubectl get svc -n cattle-system

[Output: kubectl get svc -n cattle-system, showing the rancher service as type ClusterIP on ports 80/443]

As the output shows, the rancher service listens on ports 80 and 443, but its type is ClusterIP: that IP and those ports are virtual and cannot be reached from outside the cluster, so we expose a NodePort through the ingress controller instead.

Expose a NodePort

vim service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Apply it and check the result:

kubectl apply -f  service-nodeport.yaml
kubectl get svc -n ingress-nginx

[Output: kubectl get svc -n ingress-nginx, showing the ingress-nginx service as type NodePort]

As the output shows, the service type is now NodePort; here port 80 maps to node port 30001 and port 443 to 30002 (your assigned ports will differ). Access the UI at:

https://<domain>:30002


