Kubernetes Highly Available Cluster
Containers are valuable on their own, but Kubernetes goes further by orchestrating workloads and ensuring they keep running as desired. Beyond that, our journey required moving to a Highly Available (HA) Kubernetes cluster.
Start with a single-node cluster
A single node can serve as both the control plane and the worker that runs the workloads. The first logical expansion is to separate the control node from the worker node; further expansion adds one or more worker nodes managed by the same control plane.
K3s has been a great Kubernetes distribution for working locally. It provides the core of Kubernetes without the cloud-provider integrations for AWS, GCP, or Azure that aren't needed in a local environment.
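As a sketch, a single-node K3s cluster can be stood up with the upstream install script, and a worker can later join it as an agent. This assumes root access and internet connectivity; the addresses and token value are illustrative:

```shell
# On the first (and so far only) node: install K3s as a single-node
# cluster. This node runs the control plane and also schedules workloads.
curl -sfL https://get.k3s.io | sh -

# Verify the node reports Ready.
sudo k3s kubectl get nodes

# Later, on an additional machine: join it as a worker (agent) node.
# 192.168.1.101 is an illustrative address for the control node; the
# token can be read from /var/lib/rancher/k3s/server/node-token there.
curl -sfL https://get.k3s.io | \
    K3S_URL=https://192.168.1.101:6443 \
    K3S_TOKEN=my-node-token sh -
```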
Perform some maintenance on the control plane, though, and you may discover dependencies on it that you hadn't counted on. That creates the demand for an HA control plane, provided by scaling from one control node to three.
Creating a multi-node cluster and hardware needs
Moving from a single control node to multiple control nodes requires a control plane backed by an etcd database. The etcd members replicate the current state of the nodes and workloads to each other, which generates heavy file system churn. That churn is hard on an SD card, so it calls for upgrading each control node to an SSD.
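K3s ships an embedded etcd mode for exactly this: the first server initializes the cluster, and the other servers join it. A minimal sketch, assuming three control nodes at illustrative addresses and a shared secret you choose yourself:

```shell
# On the first control node: initialize a new cluster with embedded etcd.
# K3S_TOKEN is a shared secret used by the other servers to join.
curl -sfL https://get.k3s.io | K3S_TOKEN=my-shared-secret sh -s - server \
    --cluster-init

# On the second and third control nodes: join the existing cluster
# (192.168.1.101 is the illustrative address of the first server).
curl -sfL https://get.k3s.io | K3S_TOKEN=my-shared-secret sh -s - server \
    --server https://192.168.1.101:6443

# Confirm all three etcd-backed control nodes report Ready.
sudo k3s kubectl get nodes
```

With three members, etcd keeps quorum while any single control node is down for maintenance, which is the failure mode described above.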
Software needs of rolling out updates with automation
At some point, moving from 1 to 2, 3, 4 or more nodes becomes time-intensive and benefits from automation. Ansible playbooks are a great way to configure the software and apply Infrastructure as Code (IaC) best practices.
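A minimal sketch of that workflow, with a hypothetical inventory and playbook (hostnames, addresses, and file names are illustrative; this assumes Debian-family nodes reachable over SSH with Ansible installed on the workstation):

```shell
# Hypothetical inventory: one group for control nodes, one for workers.
cat > inventory.ini <<'EOF'
[control]
cp1 ansible_host=192.168.1.101
cp2 ansible_host=192.168.1.102
cp3 ansible_host=192.168.1.103

[workers]
worker1 ansible_host=192.168.1.111
EOF

# Minimal playbook that keeps every node's packages current; a fuller
# playbook would also template the K3s install and its configuration.
cat > site.yml <<'EOF'
- hosts: all
  become: true
  tasks:
    - name: Apply pending package updates
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
EOF

# One command rolls the change across every node in the inventory,
# instead of repeating the steps by hand on each machine.
ansible-playbook -i inventory.ini site.yml
```

Because the inventory and playbook are plain text files, they can live in version control alongside the rest of the cluster configuration, which is the IaC practice the paragraph above refers to.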
Recent lab experimentation (since 2020)
Lab descriptions and details are not currently available online; please reach out to discuss.