A total newbie to K8s. The intention was to bring up a multi-master cluster – 3 master nodes, 3 worker nodes, and 1 HAProxy node. After reading articles on the pros and cons of deploying on bare-metal servers versus VMs on hypervisors, I finally decided to try bringing up the cluster with nodes hosted on KVM.
The lab includes a UTM device (which provides a DHCP server), a managed switch, and 3 Dell servers with a good amount of resources.
I have no real idea about sizing; the need is just to bring up a lab. Given the resources available, I am planning to go with the following allocation. MariaDB will be used for my application DB needs (ArangoDB will be added later – it is not required in the initial stages of application development).
Master/API Nodes : 4 Core / 8 vCPU / 24 Gig RAM / 200 G disk space
Worker Nodes : 16 Core / 32 vCPU / 96 Gig RAM / 200 G disk space plus 1 TB for persistent volumes
HA Proxy : 2 Core / 4 vCPU / 12 Gig RAM / 100 G disk space
DB Nodes : 4 Core / 8 vCPU / 24 Gig RAM / 300 G disk space
Part 1 : Install host OS, Configure NIC Bonding, Install KVM
DHCP server entries (A records – domain information masked)
[ For VMs hosted on Server 1 ]
10.1.1.32 db1.<domain>
10.1.1.33 api1.<domain>
10.1.1.34 worker1.<domain>
10.1.1.35 proxy.<domain>

[ For VMs hosted on Server 2 ]
10.1.1.42 db2.<domain>
10.1.1.43 api2.<domain>
10.1.1.44 worker2.<domain>

[ For VMs hosted on Server 3 ]
10.1.1.52 db3.<domain>
10.1.1.53 api3.<domain>
10.1.1.54 worker3.<domain>
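If the UTM's DNS is ever unavailable, the same mapping can be kept in /etc/hosts on each node. A minimal sketch that emits the entries – `lab.local` is a placeholder standing in for the masked domain:

```shell
# Emit /etc/hosts-style entries for all lab nodes.
# DOMAIN is a placeholder for the masked domain; adjust before use.
DOMAIN="lab.local"
HOSTS_OUT="$(
while read -r ip host; do
  printf '%s %s.%s\n' "$ip" "$host" "$DOMAIN"
done <<'EOF'
10.1.1.32 db1
10.1.1.33 api1
10.1.1.34 worker1
10.1.1.35 proxy
10.1.1.42 db2
10.1.1.43 api2
10.1.1.44 worker2
10.1.1.52 db3
10.1.1.53 api3
10.1.1.54 worker3
EOF
)"
printf '%s\n' "$HOSTS_OUT"
```

Append the output to /etc/hosts on each node if you want name resolution independent of the DHCP/DNS box.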
Gateway / DHCP server : 10.1.1.1
Install Debian Buster 10.7 – server installation, selecting only SSH server and standard system utilities during package selection.
Edit /etc/apt/sources.list and comment out the CD-ROM source. Optionally add 'non-free' after 'buster main' so that non-free packages can be installed.
Optional requirement for my lab :
apt install -y firmware-linux
Since this is a private lab sitting behind the UTM device, AppArmor can be disabled:
systemctl stop apparmor
systemctl disable apparmor
Also permit root login via SSH – uncomment and update the following configuration in /etc/ssh/sshd_config:
PermitRootLogin yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
Restart sshd service
systemctl restart sshd.service
It is a standard practice of mine to add the following lines at the end of /etc/security/limits.conf on all installations (note the item name is 'nofile', singular):
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
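Since this gets repeated on every node, the append can be made idempotent so re-runs don't duplicate lines. A small sketch – it writes to a temp file here for illustration; on a real host, point LIMITS at /etc/security/limits.conf:

```shell
# Append the limits idempotently: each line is added only if not already present.
# LIMITS defaults to a temp path for illustration; override with the real file.
LIMITS="${LIMITS:-/tmp/limits.conf.demo}"
touch "$LIMITS"
for line in \
  '* soft nofile 65536' \
  '* hard nofile 65536' \
  '* soft nproc 65536' \
  '* hard nproc 65536'
do
  grep -qxF "$line" "$LIMITS" || echo "$line" >> "$LIMITS"
done
```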
Set the timezone (the CD-ROM source in /etc/apt/sources.list was already commented out earlier)
# timedatectl set-timezone Asia/Kolkata
Install chrony (NTP) – add 'allow 10.1.1.0/24' at the end of /etc/chrony/chrony.conf so that the guest VMs can sync time with the host.
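On the guest side (once the VMs exist), each VM can be pointed back at its host. A hedged fragment, assuming the Server 1 host bridge IP of 10.1.1.30 – adjust per host:

```
# /etc/chrony/chrony.conf on a guest VM (sketch; 10.1.1.30 assumed to be the host)
server 10.1.1.30 iburst
```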
# apt install chrony -y
Install ifenslave, required for bonding
# apt install ifenslave -y
Before configuring NIC bonding, let us install the packages required for KVM and for bridging interfaces. One interface (eno1) will be bridged for KVM needs.
# apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virtinst libvirt-daemon virt-manager -y
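With virtinst installed, a worker VM matching the sizing plan from earlier could later be created along these lines. A sketch only – the command is built and printed rather than run, and the disk layout, install ISO path, and os-variant are assumptions; 96 Gig RAM is 98304 MiB:

```shell
# Build (and print) a virt-install command for worker1 using the planned sizing:
# 32 vCPU, 96 Gig RAM, 200 G root disk plus 1 TB disk for persistent volumes.
# The ISO path and os-variant are assumptions; verify before running on the host.
CMD="virt-install --name worker1 \
 --vcpus 32 --memory 98304 \
 --disk size=200 --disk size=1024 \
 --network bridge=br1 \
 --os-variant debian10 \
 --cdrom /var/lib/libvirt/images/debian-10.7.0-amd64-netinst.iso"
echo "$CMD"
```

The `--network bridge=br1` part assumes the br1 bridge configured later in this post.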
Make the default network for KVM VMs autostart across reboots
# virsh net-start default
# virsh net-autostart default
Enable vhost-net to offload virtio-net packet processing into the kernel and improve KVM network performance. Load the "vhost_net" kernel module and make it persistent:
# modprobe vhost_net
# echo "vhost_net" | tee -a /etc/modules
# lsmod | grep vhost
vhost_net              24576  0
tun                    49152  2 vhost_net
vhost                  49152  1 vhost_net
tap                    28672  1 vhost_net
Each server has 4 x 1 Gig interfaces (eno1, eno2, eno3, and eno4). The plan is to use eno1 in bridged mode for VM management and eno2 for database clustering. eno3 and eno4 will be bonded as bond0, which will be used for the K8s cluster. Note that we need to configure a LAG at the switch end; in my case the switch is a Cisco SG300 52-port, and I have configured the LAG there.
# modprobe bonding
# echo "bonding" >> /etc/modules
Update /etc/network/interfaces with the required bonding configuration.
Contents of /etc/network/interfaces from server 1
# This file describes the network interfaces available on your
# system and how to activate them. For more information, see
# interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto br1
iface br1 inet static
    address 10.1.1.30/24
    gateway 10.1.1.1
    dns-nameservers 10.1.1.1 220.127.116.11
    dns-search <domain>
    bridge_stp off
    bridge_ports eno1

auto bond0
iface bond0 inet static
    address 10.1.3.3
    netmask 255.255.255.0
    broadcast 10.1.3.255
    bond-slaves eno3 eno4
    bond-mode 6
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200

auto eno2
iface eno2 inet static
    address 10.1.2.3/24

auto eno3
iface eno3 inet manual
    bond-master bond0

auto eno4
iface eno4 inet manual
    bond-master bond0
Added the following to /etc/sysctl.conf

net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.ipv4.ip_forward = 1
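An alternative to editing /etc/sysctl.conf directly is a drop-in file under /etc/sysctl.d, which keeps the lab settings separate from the distribution defaults. A sketch – it writes to a temp directory here for illustration; the file name is my own choice:

```shell
# Write the bridge/forwarding settings as a sysctl drop-in.
# SYSCTL_DIR defaults to a temp path for illustration; on the real host
# use /etc/sysctl.d and apply with: sysctl --system
SYSCTL_DIR="${SYSCTL_DIR:-/tmp/sysctl.d-demo}"
mkdir -p "$SYSCTL_DIR"
cat > "$SYSCTL_DIR/99-kvm-lab.conf" <<'EOF'
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.ipv4.ip_forward = 1
EOF
```

Either way, the settings only take effect after `sysctl --system` (or the reboot below).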
Reboot the server once – a standard practice of mine after base configuration.