
Build your K8s. Step-By-Step. [With HA].


This page should not have existed, but I unfortunately typed kubeadm reset on my single master node while planning to expand the cluster with more worker nodes. That reminded me that the master nodes should have been designed for HA from the very beginning.

So this is a brand-new start: with KubeSphere, S3 as storage, Calico, and so on...

1. Design for Kubernetes.

Ansible

Use Ansible to manage all the nodes from a single control machine.

Save time. Save life.
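
A minimal sketch of what this could look like (the inventory file name and group names are just examples; the IPs follow the address plan in section 2):

cat > inventory.ini <<'EOF'
[ha]
172.16.0.2
172.16.0.3

[masters]
172.16.1.1
172.16.1.2
172.16.1.3

[workers]
172.16.2.1
172.16.2.2
EOF

# Quick reachability check against every node
ansible -i inventory.ini all -m ping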

Multi-Master Nodes and HA

This way, data, workloads, and your time will not be lost even if one master node gets reset.

Put a load balancer in front of the API servers, following the external etcd topology.

Deployment

Use KubeKey to set up Kubernetes.

Network

Use Calico as the base network infrastructure (CNI).

Load Balancing for the Kubernetes Network

  • Use MetalLB for external IP / LoadBalancer Service management.

Storage

Use csi-s3 as storage.

Use [FreeNAS NFS](https://github.com/democratic-csi/democratic-csi/blob/master/examples/freenas-api-nfs.yaml) (via democratic-csi) as storage.

Management

KubeSphere provides a comprehensive interface for monitoring, controlling, and displaying resources such as Secrets, Pods, Deployments, and more.

Software

RabbitMQ, Elasticsearch, ...

2. Preparation

2.1 Network Preparation

I am planning to use 172.16.0.0/22 for this K8s network.

  1. 172.16.0.2-172.16.0.32 is reserved for the HA nodes, and 172.16.0.33 is the virtual master IP.
  2. Master nodes use 172.16.1.1-172.16.1.32.
  3. Worker nodes use 172.16.2.1-172.16.2.254, which leaves room for future expansion.

2.2 HA Node for Keepalived and HAProxy

  • Node 1:
    • Hostname: HA-1
    • IP Address: 172.16.0.2/22
  • Node 2:
    • Hostname: HA-2
    • IP Address: 172.16.0.3/22
  • Shared Keepalived virtual IP: 172.16.0.33

2.2.1 Install HAProxy and Keepalived

apt install -y keepalived haproxy psmisc
  • Edit HAProxy Config at /etc/haproxy/haproxy.cfg
global
    log /dev/log    local0 warning
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log    global
    mode    http
    option    httplog
    option    dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend kube-apiserver
  bind *:6443
  mode tcp
  option tcplog
  default_backend kube-apiserver


backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 172.16.1.1:6443 check
    server kube-apiserver-2 172.16.1.2:6443 check
    server kube-apiserver-3 172.16.1.3:6443 check

Don't forget to apply the same configuration on the other HA node as well.
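
Before restarting, the configuration syntax can be validated with HAProxy's check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg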

  • Restart and enable start on boot
systemctl restart haproxy
systemctl enable haproxy
  • Edit Keepalived Config at /etc/keepalived/keepalived.conf
global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance haproxy-vip {
  state BACKUP
  priority 100
  interface eth1                       # Network card
  virtual_router_id 60
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111                #PASSWORD FOR Keepalived
  }
  unicast_src_ip 172.16.0.2
  unicast_peer {
    172.16.0.3
  }

  virtual_ipaddress {
    172.16.0.33/22
  }

  track_script {
    chk_haproxy
  }
}

Don't forget to apply the same configuration on the other HA node as well (with its own unicast_src_ip and unicast_peer).

  • Restart and enable start on boot
systemctl restart keepalived
systemctl enable keepalived
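
To verify the failover setup, check which HA node currently holds the virtual IP (eth1 is the interface used in the Keepalived config above); stopping HAProxy on that node should move the VIP to the other HA node:

ip addr show eth1 | grep 172.16.0.33
systemctl stop haproxy   # run on the node holding the VIP; the VIP should move to the other HA node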

2.3 Setup Worker Node and Master Node

Setup Node With Kubekey

The cluster layout is described in config.yaml, which is passed to ./kk create cluster in the final step below:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 172.16.1.1, internalAddress: 172.16.1.1,  privateKeyPath: "~/.ssh/id_rsa"}
  - {name: master2, address: 172.16.1.2, internalAddress: 172.16.1.2,  privateKeyPath: "~/.ssh/id_rsa"}
  - {name: master3, address: 172.16.1.3, internalAddress: 172.16.1.3,  privateKeyPath: "~/.ssh/id_rsa"}
  - {name: worker1, address: 172.16.2.1, internalAddress: 172.16.2.1,  privateKeyPath: "~/.ssh/id_rsa"}
  - {name: worker2, address: 172.16.2.2, internalAddress: 172.16.2.2,  privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane:
    - master1
    - master2
    - master3
    worker:
    - worker1
    - worker2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: 172.16.0.33
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.0.0/16
    kubeServiceCIDR: 10.234.0.0/16
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry:
    namespaceOverride:
    registryMirrors: []
    insecureRegistries: []
  addons: []

Setup Worker/Master Node

On every master and worker node, install the dependencies KubeKey requires:

sudo apt install -y socat conntrack
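
The KubeKey documentation also lists ebtables and ipset as recommended optional dependencies; they can be installed in the same step if desired:

sudo apt install -y ebtables ipset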

Setup Cluster Node

kk create cluster -f  config.yaml --with-kubesphere

Wait for the installation to finish, and you're done!
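
Once KubeKey reports success, a quick sanity check from one of the master nodes (assuming kubectl has been configured) confirms that all nodes joined and the system pods are running:

kubectl get nodes -o wide
kubectl get pods -A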

Reset command

If something went wrong and a master/worker node needs to be reset, use the commands below to reset the node, then reboot it.

kubeadm reset -f

rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/*

rm -f /etc/etcd.env

iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X && iptables -t raw -F && iptables -t raw -X && iptables -t mangle -F && iptables -t mangle -X

systemctl restart containerd


3. Setup LoadBalancer

3.1 Use MetalLB as LoadBalancer

Reference: https://metallb.universe.tf/installation/

Preparation

If you’re using kube-proxy in IPVS mode, since Kubernetes v1.14.2 you have to enable strict ARP mode.

Note, you don’t need this if you’re using kube-router as service-proxy because it is enabling strict ARP by default.

You can achieve this by editing kube-proxy config in current cluster:

kubectl edit configmap -n kube-system kube-proxy

and set:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
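
The MetalLB documentation also shows a non-interactive way to apply the same change:

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system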

Installation By Manifest

To install MetalLB, apply the manifest:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml

If you want to deploy MetalLB using the FRR mode, apply the manifests:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-frr.yaml

Note that the manifests above install a pinned release (v0.13.11). For production environments, always deploy a stable released version of MetalLB rather than the main development branch.

Network Definition

  • AddressPool

Notice

Please follow the official MetalLB AddressPool configuration guide at https://metallb.universe.tf/configuration/_advanced_ipaddresspool_configuration when configuring the pool.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cheap
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.0/24
  • Define Layer2 Configuration

Notice

Please refer to the MetalLB documentation linked above if you need BGP mode or other special circumstances.

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - cheap                 # must match the IPAddressPool name defined above
  nodeSelectors:
  - matchLabels:
      kubernetes.io/hostname: NodeA
  - matchLabels:
      kubernetes.io/hostname: NodeB
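
Once the IPAddressPool and L2Advertisement above have been applied with kubectl apply -f, a quick way to test MetalLB is to expose a trivial workload as a LoadBalancer Service and check that it receives an address from the pool (nginx is only an example workload):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get svc nginx   # EXTERNAL-IP should come from the pool defined above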

4. Setup CSI-S3 for Storage

4.1 Create a secret with your S3 credentials

apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  # Namespace depends on the configuration in the storageclass.yaml
  namespace: kube-system
stringData:
  accessKeyID: <YOUR_ACCESS_KEY_ID>
  secretAccessKey: <YOUR_SECRET_ACCESS_KEY>
  # For AWS set it to "https://s3.<region>.amazonaws.com"
  endpoint: <S3_ENDPOINT_URL>
  # If not on AWS, it can be left empty
  region: <S3_REGION>
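
Save the manifest (e.g. as secret.yaml) and apply it before deploying the driver:

kubectl create -f secret.yaml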

4.2 Deploy the driver

cd deploy/kubernetes
kubectl create -f provisioner.yaml
kubectl create -f attacher.yaml
kubectl create -f csi-s3.yaml

4.3 Create the storage class

kubectl create -f examples/storageclass.yaml

4.4 Test the S3 driver

  1. Create a pvc using the new storage class:

    kubectl create -f examples/pvc.yaml
  2. Check if the PVC has been bound:

    $ kubectl get pvc csi-s3-pvc
    NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    csi-s3-pvc   Bound     pvc-c5d4634f-8507-11e8-9f33-0e243832354b   5Gi        RWO            csi-s3         9s
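
Optionally, the csi-s3 repository also ships an example pod (examples/pod.yaml) that mounts this PVC; if the pod reaches Running, the mount works:

kubectl create -f examples/pod.yaml
kubectl get pods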

5. Using freenas-api-nfs as a storage class

Edit csi-nfs.yaml:

csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.nfs"

storageClasses:
- name: freenas-nfs-csi
  defaultClass: false
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs

  mountOptions:
  - noatime
  - nfsvers=4
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:

volumeSnapshotClasses: []

driver:
  config:
    driver: freenas-api-nfs
    instance_id:
    httpConnection:
      protocol: https
      host: YOURFREENAS_IP
      port: 443
  # use only 1 of apiKey or username/password
  # if both are present, apiKey is preferred
  # apiKey is only available starting in TrueNAS-12
      apiKey: YOURAPIKEY
      allowInsecure: true
      #apiVersion: 2
    zfs:
  # can be used to override defaults if necessary
  # the example below is useful for TrueNAS 12
  #cli:
  #  sudoEnabled: true
  #
  #  leave paths unset for auto-detection
  #  paths:
  #    zfs: /usr/local/sbin/zfs
  #    zpool: /usr/local/sbin/zpool
  #    sudo: /usr/local/bin/sudo
  #    chroot: /usr/sbin/chroot

  # can be used to set arbitrary values on the dataset/zvol
  # can use handlebars templates with the parameters from the storage class/CO
  #datasetProperties:
  #  "org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
  #  "org.freenas:test": "{{ parameters.foo }}"
  #  "org.freenas:test2": "some value"

      datasetParentName: NVME-Sto/k8s/nfs
  # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
  # they may be siblings, but neither should be nested in the other
  # do NOT comment this option out even if you don't plan to use snapshots, just leave it with dummy value
      detachedSnapshotsDatasetParentName: NVME-Sto/k8s/nfs-snapshot
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0

  # not supported yet
  #datasetPermissionsAcls:
  #- "-m everyone@:full_set:allow"
  #- "-m u:kube:full_set:allow"

    nfs:
  #shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      shareHost: YOURFREENAS_IP
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: "root"
      shareMaprootGroup: k8s-group
      shareMapallUser: ""
      shareMapallGroup: ""
Then install (or upgrade) the chart:

helm upgrade \
  --install \
  --create-namespace \
  --values ./csi-nfs.yaml \
  --namespace democratic-csi \
  zfs-nfs democratic-csi/democratic-csi
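
Once the chart is installed, the controller and node pods should come up in the democratic-csi namespace, and the new storage class can be exercised with a small test PVC (the PVC name below is just an example):

kubectl get pods -n democratic-csi
kubectl get storageclass freenas-nfs-csi

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: freenas-nfs-csi
EOF

kubectl get pvc test-nfs-pvc   # should become Bound
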
Last Modified: September 20, 2023