
Rocky Linux + kubeadm: A Deep-Dive Practice for a Highly Available Kubernetes 1.35.0 Architecture

  • 2026-02-19 11:08:17

Kubernetes High-Availability Cluster Installation Guide

This article walks through deploying a highly available Kubernetes cluster on Rocky Linux, covering system configuration, component installation, and network setup.

1. Installation Notes

Deployment environment

Versions of the system and components used in this deployment:

Item          Version
OS            Rocky Linux 10.1
Kernel        6.12.0
Kubernetes    v1.35.0
containerd    2.2.1
CNI plugins   v1.9.0
crictl        1.35.0
etcd          3.6.6-0

Offline installation package: https://pan.baidu.com/s/19CjX1ImiwQTWqDleWiBwgg (extraction code: 8888)

For a binary installation, see: https://blog.csdn.net/qq_39965541/article/details/157136178?spm=1011.2415.3001.5331

2. Before You Begin

  • A Linux host: a Debian/RedHat-family distribution, or another distribution without a package manager.
  • A minimal Rocky Linux install may be missing common commands; install them as you need them.
  • If you are not on a Rocky-like system, make sure the kernel version is ≥ v5.13.
  • ≥ 4 GB RAM per machine; ≥ 4 CPUs recommended for control-plane nodes.
  • Full network connectivity between all machines in the cluster.
  • Unique MAC address and product_uuid on every node.
  • Swap disabled.
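The checklist above can be partially automated. A minimal sketch that each node could run before proceeding; the helper names (dup_lines, swap_total_kb) are my own, not part of any tool:

```shell
#!/bin/bash
# Hypothetical pre-flight helper for the per-node requirements above.

dup_lines() {            # prints duplicated lines from stdin (e.g. MACs gathered from all nodes)
  sort | uniq -d
}

swap_total_kb() {        # prints SwapTotal from /proc/meminfo-style input; 0 means swap is off
  awk '/^SwapTotal:/ {print $2}'
}

# On each node, collect the values to compare across the cluster:
echo "uuid: $(cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo unavailable)"
echo "swap: $(swap_total_kb < /proc/meminfo) kB"
ip link 2>/dev/null | awk '/link\/ether/ {print $2}' | dup_lines
```

Feed the MAC lists from all nodes into dup_lines to spot duplicates cluster-wide; empty output means all addresses are unique.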

2.1 Cluster Node Plan

Role     Hostname   IP Address      Components
Master1  master01   192.168.1.11    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
Master2  master02   192.168.1.12    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
Master3  master03   192.168.1.13    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
Worker1  node01     192.168.1.14    kubelet, kube-proxy, container runtime
Worker2  node02     192.168.1.15    kubelet, kube-proxy, container runtime
VIP      -          192.168.1.100   Keepalived VIP

  • Service CIDR: 10.96.0.0/12
  • Pod CIDR: 10.244.0.0/16

2.2 System Configuration (all nodes)

2.2.1 Confirm the OS version

cat /etc/redhat-release
# Expected: Rocky Linux release 10.1 (Red Quartz)

2.2.2 Edit /etc/hosts

On all nodes (the hostnames must match the node plan above):

echo '192.168.1.11 master01
192.168.1.12 master02
192.168.1.13 master03
192.168.1.14 node01
192.168.1.15 node02
192.168.1.100 master-lb' >> /etc/hosts

2.2.3 Disable the firewall and SELinux

systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

2.2.4 Disable swap

swapoff -a
sed -i.bak '/swap/s/^/#/' /etc/fstab

2.2.5 Time synchronization

  • Install ntp, or
  • use chronyd (present by default on Rocky Linux)

dnf install -y ntp
# or
systemctl status chronyd
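Beyond checking that chronyd is running, it is worth confirming the clock is actually synchronized. A small sketch that parses `chronyc tracking` output; the leap_status helper name is my own:

```shell
#!/bin/bash
# Sketch: extract the leap status from `chronyc tracking` output.
# "Normal" means the clock is synchronized; anything else needs attention.

leap_status() {
  awk -F': *' '/^Leap status/ {print $2}'
}

if command -v chronyc >/dev/null; then
  chronyc tracking | leap_status
fi
```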

2.2.6 System limits

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

2.2.7 Passwordless SSH (Master01 -> all nodes)

On the control host Master01:

ssh-keygen -t rsa   # accept all defaults
for i in master01 master02 master03 node01 node02; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub $i
done

2.3 Kernel and ipvs Configuration

2.3.1 Install ipvsadm and related modules

dnf install -y ipvsadm ipset sysstat conntrack libseccomp
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack

2.3.2 Load the ipvs modules at boot

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl enable --now systemd-modules-load.service

Verify that they are loaded:

lsmod | grep -e ip_vs -e nf_conntrack
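The grep above can be turned into an explicit pass/fail check. A sketch that reads `lsmod` output and reports each required module; check_modules is a name I made up for illustration:

```shell
#!/bin/bash
# Sketch: verify every required ipvs module appears in `lsmod` output.
# check_modules reads lsmod-style output on stdin; arguments are module names.

check_modules() {
  loaded=$(awk 'NR>1 {print $1}')
  for m in "$@"; do
    if printf '%s\n' "$loaded" | grep -qx "$m"; then
      echo "ok $m"
    else
      echo "MISSING $m"
    fi
  done
}

if command -v lsmod >/dev/null; then
  lsmod | check_modules ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
fi
```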

2.3.3 Configure kernel parameters

On all nodes, create /etc/sysctl.d/k8s.conf:

cat << EOF > /etc/sysctl.d/k8s.conf
# Network tuning: enable IPv4 forwarding (CNI plugins such as Calico/Cilium depend on it)
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 2
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.conf.all.route_localnet = 1
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_conntrack_max = 65536
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65536
# Increase the SYN half-open queue length
net.ipv4.tcp_max_syn_backlog = 65536
net.ipv4.tcp_rmem = 4096 12582912 16777216
net.ipv4.tcp_wmem = 4096 12582912 16777216
net.netfilter.nf_conntrack_max = 1048576
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
# File system
fs.file-max = 2097152
fs.nr_open = 52706963
fs.may_detach_mounts = 1
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 524288
# Memory management
vm.swappiness = 0
vm.max_map_count = 262144
vm.overcommit_memory = 1
vm.panic_on_oom = 0
kernel.panic = 10
# Container support
kernel.pid_max = 4194304
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
# Kubernetes requirements
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
kernel.softlockup_panic = 1
EOF
sysctl --system

Confirm the kernel modules are still loaded:

lsmod | grep --color=auto -e ip_vs -e nf_conntrack

3. Container Runtime Installation

3.1 Install containerd + CRI tools

Enable IPv4 forwarding via kernel parameters and let iptables see bridged traffic:

cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Apply the sysctl parameters without rebooting
sudo sysctl --system

Confirm that the br_netfilter and overlay modules are loaded:

lsmod | grep br_netfilter
lsmod | grep overlay

Check that these kernel parameters are set to 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
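Rather than eyeballing the sysctl output, the expected values can be asserted. A sketch; check_sysctl is an illustrative helper name, not a real tool:

```shell
#!/bin/bash
# Sketch: compare "key = value" lines (as printed by `sysctl <key>` or `sysctl -a`)
# against expected values given as key=value arguments.

check_sysctl() {
  input=$(cat)
  for pair in "$@"; do
    key=${pair%%=*}
    want=${pair#*=}
    got=$(printf '%s\n' "$input" | awk -F' = ' -v k="$key" '$1==k {print $2}')
    if [ "$got" = "$want" ]; then
      echo "ok $key"
    else
      echo "BAD $key (got '${got:-unset}', want $want)"
    fi
  done
}

if command -v sysctl >/dev/null; then
  sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables 2>/dev/null \
    | check_sysctl net.ipv4.ip_forward=1 net.bridge.bridge-nf-call-iptables=1
fi
```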

3.1.1 Download and install containerd

wget https://github.com/containerd/containerd/releases/download/v2.2.1/containerd-2.2.1-linux-amd64.tar.gz
tar xvf containerd-2.2.1-linux-amd64.tar.gz
mv bin/* /usr/local/bin/
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

3.1.2 containerd systemd unit

cat > /usr/lib/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target dbus.service

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now containerd

3.1.3 Install runc

Download: https://github.com/opencontainers/runc/releases/download/v1.4.0/runc.amd64

install -m 755 runc.amd64 /usr/local/sbin/runc

3.1.4 Install the CNI plugins

Download: https://github.com/containernetworking/plugins/releases/download/v1.9.0/cni-plugins-linux-amd64-v1.9.0.tgz

mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.9.0.tgz

3.1.5 Install crictl

Download: https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.35.0/crictl-v1.35.0-linux-amd64.tar.gz

tar -xf crictl-v1.35.0-linux-amd64.tar.gz -C /usr/local/bin
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 30
debug: false
pull-image-on-create: false
EOF

3.1.6 Enable the systemd cgroup driver

For details on cgroups, see the official documentation.

Edit the corresponding section of /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    ShimCgroup = ''
    # Add the following line below ShimCgroup (it is not present by default):
    SystemdCgroup = true

Restart containerd:

systemctl restart containerd
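Before trusting the restart, it can help to confirm the flag actually landed in the file. A sketch; has_systemd_cgroup is my own helper name:

```shell
#!/bin/bash
# Sketch: confirm that SystemdCgroup = true is really present in config.toml.

has_systemd_cgroup() {   # reads a containerd config.toml on stdin
  if grep -Eq '^[[:space:]]*SystemdCgroup[[:space:]]*=[[:space:]]*true'; then
    echo yes
  else
    echo no
  fi
}

[ -f /etc/containerd/config.toml ] && has_systemd_cgroup < /etc/containerd/config.toml || true
```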

4. High-Availability Components

4.1 Install HAProxy + Keepalived

On all master nodes:

dnf install -y haproxy keepalived

4.1.1 Configure HAProxy

All master nodes share the same configuration file, /etc/haproxy/haproxy.cfg:

cat > /etc/haproxy/haproxy.cfg << EOF
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  balance roundrobin
  option httpchk GET /healthz
  http-check expect status 200
  option tcp-check
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master01 192.168.1.11:6443 check
  server master02 192.168.1.12:6443 check
  server master03 192.168.1.13:6443 check
EOF

4.1.2 Configure Keepalived (slightly different per node)

Master01:

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.1.11
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        chk_apiserver
    }
}
EOF

Master02:

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.1.12
    virtual_router_id 51
    priority 99
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        chk_apiserver
    }
}
EOF

Master03:

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.1.13
    virtual_router_id 51
    priority 98
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        chk_apiserver
    }
}
EOF

Health-check script /etc/keepalived/check_apiserver.sh (note the quoted heredoc delimiter, so the variables are not expanded while writing the file):

cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_apiserver.sh

Start the services:

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
systemctl status keepalived haproxy

Check that the VIP answers pings:

ping 192.168.1.100

5. Installing the Core Kubernetes Components

5.1 Install kubeadm, kubelet, kubectl

5.1.1 Configure the yum repository

cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.35/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.35/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

5.1.2 Install and enable the services

dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet

5.2 Initialize Master01 (first control-plane node)

5.2.1 List and pre-pull the required images

kubeadm config images list

Required images (for v1.35.0):

  • registry.k8s.io/kube-apiserver:v1.35.0
  • registry.k8s.io/kube-controller-manager:v1.35.0
  • registry.k8s.io/kube-scheduler:v1.35.0
  • registry.k8s.io/kube-proxy:v1.35.0
  • registry.k8s.io/coredns/coredns:v1.13.1
  • registry.k8s.io/pause:3.10.1
  • registry.k8s.io/etcd:3.6.6-0 (the command prints this tag as 3.6.6; if pulling that tag fails, append -0; this will presumably be fixed in later releases)
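For mirroring images onto an offline host, the image list can be converted mechanically into pull commands. A sketch; to_pull_cmds is an illustrative helper name:

```shell
#!/bin/bash
# Sketch: turn `kubeadm config images list` output into ctr pull commands,
# one per image, for use on a host with registry access.

to_pull_cmds() {
  while IFS= read -r img; do
    if [ -n "$img" ]; then
      echo "ctr -n k8s.io images pull $img"
    fi
  done
}

if command -v kubeadm >/dev/null; then
  kubeadm config images list 2>/dev/null | to_pull_cmds
fi
```

Pipe the generated commands into a shell (or review them first) to pull everything in one pass.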

Importing images, for example:

ctr -n k8s.io image import <image-tarball>   # or push to your own registry and pull from there
# After importing, list the images with crictl (ctr works too, but its output is less readable)
crictl images
# ctr is namespace-aware; the k8s.io namespace is the one kubelet uses.
# If that feels cumbersome, you can also install a Docker client tool to manage containerd.
ctr -n k8s.io images ls

5.2.2 Generate and edit the init configuration

kubeadm config print init-defaults > kubeadm-init.yaml

Edit the generated kubeadm-init.yaml; an example follows.

This configuration uses stacked etcd, that is, etcd runs inside the cluster. It can be switched to external etcd, such as a binary-installed etcd cluster.

cat > ./kubeadm-init.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta4
# Bootstrap tokens (defaults are fine)
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
# Local API endpoint
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.11
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: master01
  taints: null
# Timeouts (defaults are fine)
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.35.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
# Remove this line if you are not building a highly available cluster
controlPlaneEndpoint: "192.168.1.100:8443"
proxy: {}
scheduler: {}
EOF

External etcd configuration: replace

etcd:
  local:
    dataDir: /var/lib/etcd

with an external block, filling in your etcd cluster endpoints and certificate paths:

etcd:
  external:
    endpoints:
    - https://etcd-node1.example.com:2379
    - https://etcd-node2.example.com:2379
    - https://etcd-node3.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

Notes:

  • endpoints: the addresses of the external etcd cluster members (at least three members for real HA).
  • caFile: the etcd CA certificate, used for TLS client verification.
  • certFile / keyFile: the client certificate and key the apiserver uses to talk to etcd.
  • local and external are mutually exclusive; when using external, remove the local etcd block from the same file.

5.2.3 Run the initialization

  • Initialization generates the certificates and config files under /etc/kubernetes; the other master nodes can then join via Master01.
  • Append --v=5 to the command to see verbose logs during initialization.
kubeadm init --config kubeadm-init.yaml --upload-certs

If initialization fails, reset and try again:

kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube

After a successful init, configure kubeconfig:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# Or, as root:
export KUBECONFIG=/etc/kubernetes/admin.conf

  • On success, kubeadm prints output like the following.

To start using your cluster, run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, if you are the root user, run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You can now join any number of control-plane nodes by running the following as root on each:

kubeadm join 192.168.1.100:8443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:3fcd0d0ac88c9a4f1321f6d15cb484b8f67b1492c10282f5faa3070b5741635f \
  --control-plane --certificate-key bf521ccd59a5d33a2d8370e0ae9f10b7f00db3412f1c066aafd0e516c80664ae

Note that the certificate key grants access to sensitive cluster data; keep it secret! As a safeguard, the uploaded certificates are deleted after two hours; if needed, you can re-upload them later with "kubeadm init phase upload-certs --upload-certs".

Then you can join any number of worker nodes by running the following as root on each:

kubeadm join 192.168.1.100:8443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:3fcd0d0ac88c9a4f1321f6d15cb484b8f67b1492c10282f5faa3070b5741635f

5.3 Deploy the Network Plugin (Calico)

Download https://github.com/projectcalico/calico/blob/v3.31.3/manifests/calico-etcd.yaml, then edit the configuration:

# Set the etcd endpoints
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379"#g' calico-etcd.yaml
# Embed the etcd certificates
ETCD_CA=$(cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n')
ETCD_CERT=$(cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n')
ETCD_KEY=$(cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n')
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
# Point Calico at the mounted certificate paths
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
# Set the Pod CIDR
POD_SUBNET="10.244.0.0/16"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

Once everything checks out, deploy it:

kubectl create -f calico-etcd.yaml

After a successful deployment, the cluster status looks healthy:

[root@master01 ~]# kubectl get node
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane   25m   v1.35.0
master02   Ready    control-plane   24m   v1.35.0
master03   Ready    control-plane   23m   v1.35.0
node01     Ready    <none>          24m   v1.35.0
node02     Ready    <none>          23m   v1.35.0

5.4 Deploy Metrics Server

Remove the control-plane taint before installing:

kubectl taint node --all   node-role.kubernetes.io/control-plane:NoSchedule-

The official manifest is at https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml; used as-is, it will complain about a missing certificate.

The manifest below adds the certificate path and the corresponding volume mount. The certificate file is /etc/kubernetes/pki/front-proxy-ca.crt, generated automatically when the cluster was deployed.

Now install Metrics Server:

cat > ./components.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.k8s.io/metrics-server/metrics-server:v0.8.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 10250
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - mountPath: /etc/kubernetes/pki
          name: k8s-certs
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - hostPath:
          path: /etc/kubernetes/pki
        name: k8s-certs
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF
kubectl create -f components.yaml

5.5 Switch kube-proxy to ipvs Mode

kubectl edit cm kube-proxy -n kube-system
# change mode to "ipvs"

Roll the kube-proxy Pods:

kubectl patch daemonset kube-proxy -n kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +%s)\"}}}}}"

Verify the mode:

curl 127.0.0.1:10249/proxyMode
# Should print: ipvs

5.6 Other Tooling: Ingress, Storage, Gateway API, etc.

Note: some of the components below are versions from over a year ago. If you need newer releases, download the latest from the official sites; the installation steps in my tutorials still apply.

Back-end storage installation: NFS shared storage, Ceph storage

Ingress controller installation, Gateway API

5.7 Shell Completion

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
# reload bash-completion
source /etc/profile.d/bash_completion.sh

6. Notes

  • Certificates issued by kubeadm are valid for one year by default; in production, consider extending them or setting up automatic renewal.
  • The control-plane components (kube-apiserver, controller-manager, scheduler, etcd) run as static Pods; their manifests live in /etc/kubernetes/manifests, and kubelet automatically restarts the corresponding Pod after a change.
  • kubelet configuration lives in /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml.
  • By default, control-plane/master nodes carry a taint that keeps ordinary Pods off them; to schedule Pods on the masters, remove the taint:

# Check taints
kubectl describe node | grep Taint
# Remove the taint
kubectl taint node --all node-role.kubernetes.io/control-plane:NoSchedule-
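To see at a glance which nodes still carry the control-plane taint, a jsonpath query can be parsed. A sketch; tainted_nodes is my own helper name:

```shell
#!/bin/bash
# Sketch: list nodes whose taints include the control-plane key, from
# "name taint-keys" lines produced by the kubectl query below.

tainted_nodes() {
  awk '$2 ~ /node-role.kubernetes.io\/control-plane/ {print $1}'
}

if command -v kubectl >/dev/null; then
  kubectl get nodes \
    -o jsonpath='{range .items[*]}{.metadata.name} {.spec.taints[*].key}{"\n"}{end}' \
    2>/dev/null | tainted_nodes
fi
```

Empty output means no node still has the taint.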

7. Install Kuboard (Optional Management UI)

  • Official site: Kuboard (supports both online and offline installation)
  • Install command:
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml

8. Verifying the Cluster

8.1 Check node status

kubectl get nodes

Make sure every node is Ready and the role assignments match expectations (control-plane, worker, and so on).

8.2 Check system component Pods

kubectl get pods -n kube-system

Core services (CoreDNS, kube-proxy, calico-node, etc.) should be Running.

8.3 Check control-plane component health

kubectl get componentstatuses

(Note: kubectl get cs/componentstatuses is deprecated in recent Kubernetes versions, but still works for basic diagnostics.)

8.4 Cluster info

kubectl cluster-info

This prints the addresses of the API server, DNS, and other services; make sure they are all reachable.

8.5 API server health check

  • Query the readiness endpoint:

    kubectl get --raw='/readyz?verbose'

    A response of ok means the API server is ready to handle requests.

8.6 CNI network check

  • Can CoreDNS resolve names? Deploy a temporary Pod (such as dnsutils or busybox), then run nslookup:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
kubectl exec -ti busybox -- nslookup kubernetes.default

A successful lookup means the network is working.

8.7 Application test

  • Deploy a test Pod / Deployment

    kubectl apply -f https://k8s.io/examples/application/deployment.yaml
    kubectl get pods

    Check that the Pods are created and running.

  • Expose a Service

    kubectl expose deployment nginx-deployment --port=80 --type=NodePort

    Access the node IP plus the assigned NodePort and confirm the service responds.

8.8 Resource metrics test (optional)

If Metrics Server is installed:

kubectl top nodes
kubectl top pods -n kube-system

These commands return CPU/memory usage for nodes and Pods, confirming the Metrics API is working.

8.9 Events and debugging

  • Inspect the event log to catch scheduling or service-startup failures early:

    kubectl get events --sort-by='.metadata.creationTimestamp'
  • You can also dump cluster state for diagnosis:

    kubectl cluster-info dump

9. Summary

This article covered deploying a highly available Kubernetes cluster on Rocky Linux:

  1. System preparation: networking, firewall, time synchronization
  2. Kernel tuning: ipvs configuration, kernel parameter adjustments
  3. Container runtime: containerd installation and configuration
  4. High-availability components: HAProxy + Keepalived
  5. Kubernetes components: kubeadm initialization, certificate management
  6. Network plugin: Calico deployment and configuration
  7. Monitoring: Metrics Server installation
  8. Cluster verification: node status, network connectivity, application tests

Following these steps gives you a complete, highly available Kubernetes cluster that provides a stable, reliable runtime for your applications.

If you hit any problems during installation, leave a comment and I will do my best to help. Experience reports and optimization suggestions are equally welcome!


Tip: back up the cluster configuration and etcd data regularly, so the cluster can be restored quickly after a failure.
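A minimal sketch of such an etcd backup, assuming the kubeadm default certificate paths for a stacked etcd; etcd_backup_cmd is my own helper, and the printed command should be run on a master node:

```shell
#!/bin/bash
# Sketch: build an `etcdctl snapshot save` command for a kubeadm stacked etcd.
# The endpoint and certificate paths are the kubeadm defaults on a master node.

etcd_backup_cmd() {   # $1 = snapshot destination path
  printf 'etcdctl snapshot save %s' "$1"
  printf ' --endpoints=https://127.0.0.1:2379'
  printf ' --cacert=/etc/kubernetes/pki/etcd/ca.crt'
  printf ' --cert=/etc/kubernetes/pki/etcd/server.crt'
  printf ' --key=/etc/kubernetes/pki/etcd/server.key\n'
}

etcd_backup_cmd "/backup/etcd-$(date +%F).db"
# Execute it with: eval "$(etcd_backup_cmd /backup/etcd.db)"
```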

  97. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/route/Rule.php ( 26.95 KB )
  98. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/route/RuleItem.php ( 9.78 KB )
  99. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/route/app.php ( 1.72 KB )
  100. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/facade/Route.php ( 4.70 KB )
  101. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/route/dispatch/Controller.php ( 4.74 KB )
  102. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/route/Dispatch.php ( 10.44 KB )
  103. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/app/controller/Index.php ( 4.81 KB )
  104. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/app/BaseController.php ( 2.05 KB )
  105. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/facade/Db.php ( 0.93 KB )
  106. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/connector/Mysql.php ( 5.44 KB )
  107. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/PDOConnection.php ( 52.47 KB )
  108. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/Connection.php ( 8.39 KB )
  109. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/ConnectionInterface.php ( 4.57 KB )
  110. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/builder/Mysql.php ( 16.58 KB )
  111. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/Builder.php ( 24.06 KB )
  112. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/BaseBuilder.php ( 27.50 KB )
  113. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/Query.php ( 15.71 KB )
  114. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/BaseQuery.php ( 45.13 KB )
  115. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/TimeFieldQuery.php ( 7.43 KB )
  116. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/AggregateQuery.php ( 3.26 KB )
  117. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/ModelRelationQuery.php ( 20.07 KB )
  118. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/ParamsBind.php ( 3.66 KB )
  119. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/ResultOperation.php ( 7.01 KB )
  120. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/WhereQuery.php ( 19.37 KB )
  121. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/JoinAndViewQuery.php ( 7.11 KB )
  122. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/TableFieldInfo.php ( 2.63 KB )
  123. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-orm/src/db/concern/Transaction.php ( 2.77 KB )
  124. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/log/driver/File.php ( 5.96 KB )
  125. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/contract/LogHandlerInterface.php ( 0.86 KB )
  126. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/log/Channel.php ( 3.89 KB )
  127. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/event/LogRecord.php ( 1.02 KB )
  128. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-helper/src/Collection.php ( 16.47 KB )
  129. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/facade/View.php ( 1.70 KB )
  130. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/View.php ( 4.39 KB )
  131. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/Response.php ( 8.81 KB )
  132. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/response/View.php ( 3.29 KB )
  133. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/Cookie.php ( 6.06 KB )
  134. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-view/src/Think.php ( 8.38 KB )
  135. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/framework/src/think/contract/TemplateHandlerInterface.php ( 1.60 KB )
  136. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-template/src/Template.php ( 46.61 KB )
  137. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-template/src/template/driver/File.php ( 2.41 KB )
  138. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-template/src/template/contract/DriverInterface.php ( 0.86 KB )
  139. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/runtime/temp/067d451b9a0c665040f3f1bdd3293d68.php ( 11.98 KB )
  140. /yingpanguazai/ssd/ssd1/www/f.mffb.com.cn/vendor/topthink/think-trace/src/Html.php ( 4.42 KB )
  1. CONNECT:[ UseTime:0.000468s ] mysql:host=127.0.0.1;port=3306;dbname=f_mffb;charset=utf8mb4
  2. SHOW FULL COLUMNS FROM `fenlei` [ RunTime:0.000731s ]
  3. SELECT * FROM `fenlei` WHERE `fid` = 0 [ RunTime:0.000305s ]
  4. SELECT * FROM `fenlei` WHERE `fid` = 63 [ RunTime:0.000251s ]
  5. SHOW FULL COLUMNS FROM `set` [ RunTime:0.000480s ]
  6. SELECT * FROM `set` [ RunTime:0.000197s ]
  7. SHOW FULL COLUMNS FROM `article` [ RunTime:0.000515s ]
  8. SELECT * FROM `article` WHERE `id` = 475403 LIMIT 1 [ RunTime:0.000453s ]
  9. UPDATE `article` SET `lasttime` = 1772298568 WHERE `id` = 475403 [ RunTime:0.000675s ]
  10. SELECT * FROM `fenlei` WHERE `id` = 67 LIMIT 1 [ RunTime:0.000216s ]
  11. SELECT * FROM `article` WHERE `id` < 475403 ORDER BY `id` DESC LIMIT 1 [ RunTime:0.000415s ]
  12. SELECT * FROM `article` WHERE `id` > 475403 ORDER BY `id` ASC LIMIT 1 [ RunTime:0.000509s ]
  13. SELECT * FROM `article` WHERE `id` < 475403 ORDER BY `id` DESC LIMIT 10 [ RunTime:0.001163s ]
  14. SELECT * FROM `article` WHERE `id` < 475403 ORDER BY `id` DESC LIMIT 10,10 [ RunTime:0.000921s ]
  15. SELECT * FROM `article` WHERE `id` < 475403 ORDER BY `id` DESC LIMIT 20,10 [ RunTime:0.006487s ]
0.080772s