00 - Kubernetes - Getting Started
Introduction
Read the Kubernetes Official Docs for a deeper explanation.
Kubernetes is a container orchestration engine: it manages all the infrastructure resources involved in a microservice deployment, such as networking, the number of replicas, access to their endpoints, load balancing of requests, and so on. Its main purpose is to automate the deployment, scaling and management of containerized applications.
It can feel daunting and complex due to the breadth of its features, so we’ll try to approach it in the simplest and most maintainable way.
Main concepts
Kubernetes components: A Kubernetes cluster is mainly composed of the following components:
- Control Plane components: these make global decisions about the cluster (e.g. scheduling pods) as well as detecting and responding to cluster events
- They can be run on any machine in the cluster; however, for simplicity, setup scripts typically start all control plane components on the same machine and do not run user containers there
- Some of the common control plane components are: kube-apiserver, etcd, kube-scheduler, kube-controller-manager and cloud-controller-manager
- Node components: these run on every node, maintaining running pods and providing the Kubernetes runtime environment
- Some of the common node components are: kubelet, kube-proxy and the container runtime
- Addons: they use Kubernetes resources (DaemonSet, Deployment, etc.) to implement cluster features
- Because these are providing cluster-level features, namespaced resources for addons belong within the kube-system namespace
- Selected addons are DNS, Web UI (Dashboard), Container Resource Monitoring and Cluster-level Logging
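Once a cluster is running, many of these pieces can be inspected directly. Note that in k3s (which we deploy below) the control plane components are compiled into a single binary rather than running as separate pods, so on our cluster this will mostly list addons such as CoreDNS:

```sh
# System components and addons live in the kube-system namespace
kubectl get pods --namespace kube-system -o wide
```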
As described above, all control plane components are usually deployed onto the same machine or node. It is also common to isolate the control plane nodes to prevent them from acting as workers; this ensures that their critical mission of overseeing the whole cluster isn’t compromised by a highly demanding user container.
When dealing with high availability, we’ll configure more than one control plane node to ensure redundancy.
Requirements
To easily deploy our first Kubernetes cluster, we’ll rely on TechnoTim’s guide to deploy k3s with Ansible playbooks. The guide promises a fully automated k3s deployment, so we’ll give it a try.
For this guide, we’ll need the following resources:
A central machine from where we’ll launch Ansible playbooks and access our k3s cluster:
- Ansible on Debian/Ubuntu distros: Installation process
- Ansible on Alpine: Installation process
We’ll be using an LXC with Alpine installed (128MB of RAM and 2GB of disk)
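For reference, creating such a container from the Proxmox shell could look roughly like this (VM ID, template name, storage and bridge are placeholders for your own setup):

```sh
pct create 200 local:vztmpl/alpine-3.17-default_20221129_amd64.tar.xz \
  --hostname ansible-control \
  --memory 128 \
  --rootfs local-lvm:2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --ssh-public-keys ~/.ssh/id_rsa.pub
```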
Accessing an LXC through serial console attachment is a pain. When creating it in Proxmox, we had the chance to add our SSH key, but OpenSSH isn’t installed by default in Alpine images. Attach to the LXC and do the following:
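A minimal sketch, assuming a stock Alpine image with networking already working:

```sh
# Install and enable the OpenSSH server
apk update
apk add openssh
rc-update add sshd default   # start sshd on every boot
rc-service sshd start        # start it right now
```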
Check that your public key is already shared with the container:
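For instance (Proxmox drops the key provided at creation time into root’s authorized_keys):

```sh
cat /root/.ssh/authorized_keys
# You should see the public key you pasted in the Proxmox creation wizard
```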
- At least 5 VMs for our cluster (2 CPUs, 2GB of RAM and at least 5GB of disk each):
- 3 for control plane:
- k3s-control-1
- k3s-control-2
- k3s-control-3
- 2 for workers:
- k3s-worker-1
- k3s-worker-2
Ansible configuration
Follow the guide’s instructions: clone its git repository, then make a copy of the inventory/sample directory and name it inventory/my-cluster.
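A sketch of those steps (repository URL taken from TechnoTim’s guide):

```sh
git clone https://github.com/techno-tim/k3s-ansible.git
cd k3s-ansible
# Create our own inventory from the provided sample
cp -R inventory/sample inventory/my-cluster
```

After that, modify the following files according to your servers: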
inventory/my-cluster/hosts.ini
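A sketch of its contents, using illustrative IPs for the five VMs listed above and the sample’s group layout:

```ini
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node
```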
Remember to exchange your SSH public key with the rest of the servers, otherwise Ansible will fail to run its commands on them.
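One way to do that exchange from the central machine (the IPs are the illustrative ones above; if ssh-copy-id isn’t available, appending the key to each server’s ~/.ssh/authorized_keys works too):

```sh
# Push our public key to every node listed in hosts.ini
for host in 192.168.30.38 192.168.30.39 192.168.30.40 192.168.30.41 192.168.30.42; do
  ssh-copy-id common-user-to-all-servers@"$host"
done
```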
inventory/my-cluster/group_vars/all.yml
```yaml
# this is the user that has SSH access to these machines
# (you have to exchange your public SSH key with them)
ansible_user: "common-user-to-all-servers"

# apiserver_endpoint is the virtual IP address which will be configured on each master
# Make sure this IP is within your network range and is reachable
apiserver_endpoint: "192.168.30.222"

# k3s_token is required so that masters can talk together securely
# this token should be alphanumeric only
k3s_token: "some-SUPER-DEDEUPER-secret-password"

# MetalLB IP range for the load balancer
# Make sure it doesn't overlap with addresses already in use on your network
metal_lb_ip_range: "192.168.30.80-192.168.30.90"
```
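Since the token should be alphanumeric, one quick way to generate a strong one (hex output is alphanumeric by construction):

```sh
# 48 random hex characters, suitable for k3s_token
openssl rand -hex 24
```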
Installation
Ansible playbook launch
Now launch the following command from the repository root:
Ansible command
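Assuming the repository’s standard layout and the inventory we created above:

```sh
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
```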
Check for any errors during the ansible-playbook execution. If everything is OK, proceed to copy the k3s cluster config onto the central machine:
Copying cluster config and testing
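A sketch of that step, using the ansible_user and the first control plane IP from our inventory (both illustrative, adjust them to your setup):

```sh
mkdir -p ~/.kube
# Fetch the kubeconfig the playbook left on the first control plane node
scp common-user-to-all-servers@192.168.30.38:~/.kube/config ~/.kube/config
# Sanity check against the new cluster
kubectl get nodes
```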
TODO: check what’s wrong with the Ansible steps when they reach the workers.
After restarting the nodes and relaunching the playbook several times, all nodes now appear in the k3s cluster:
kubectl check of nodes
```console
~/k3s-ansible # kubectl get nodes
NAME            STATUS   ROLES                       AGE     VERSION
k3s-control-1   Ready    control-plane,etcd,master   22m     v1.24.10+k3s1
k3s-control-2   Ready    control-plane,etcd,master   22m     v1.24.10+k3s1
k3s-control-3   Ready    control-plane,etcd,master   22m     v1.24.10+k3s1
k3s-worker-1    Ready    <none>                      5m14s   v1.24.10+k3s1
k3s-worker-2    Ready    <none>                      5m2s    v1.24.10+k3s1
```
Deployment example
The git repository includes a deployment example: nginx web servers configured with 3 replicas, plus a load balancer service in front of them.
Once deployed with the following commands:
Deploying our first application
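Assuming the example manifests shipped with the repository (paths taken from its example folder; adjust them if yours differ):

```sh
kubectl apply -f example/deployment.yml
kubectl apply -f example/service.yml
```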
We should see 3 nginx pods and a service:
Pods deployed on different nodes
```console
~/k3s-ansible # kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP          NODE           NOMINATED NODE   READINESS GATES
nginx-6fb79bc456-79v99   1/1     Running   0          5m47s   10.42.4.3   k3s-worker-2   <none>           <none>
nginx-6fb79bc456-gj6zh   1/1     Running   0          5m47s   10.42.3.3   k3s-worker-1   <none>           <none>
nginx-6fb79bc456-qj472   1/1     Running   0          5m47s   10.42.4.2   k3s-worker-2   <none>           <none>
```
Load-balancer service for the Nginx replicas
```console
~/k3s-ansible # kubectl get service -o wide
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP      10.43.0.1      <none>        443/TCP        34m     <none>
nginx        LoadBalancer   10.43.170.67   10.0.0.11     80:30466/TCP   6m16s   app=nginx

~/k3s-ansible # kubectl describe service nginx
Name:                     nginx
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx
Type:                     LoadBalancer    # the service type is a load balancer between all the available nginx replicas
IP Family Policy:         PreferDualStack
IP Families:              IPv4
IP:                       10.43.170.67    # the Cluster IP assigned to the service
IPs:                      10.43.170.67
LoadBalancer Ingress:     10.0.0.11       # the IP assigned by the MetalLB controller as an Ingress access point
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30466/TCP
Endpoints:                10.42.3.3:80,10.42.4.2:80,10.42.4.3:80   # the endpoints of the nginx replicas
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age    From                Message
  ----    ------        ----   ----                -------
  Normal  IPAllocated   6m34s  metallb-controller  Assigned IP ["10.0.0.11"]
  Normal  nodeAssigned  6m34s  metallb-speaker     announcing from node "k3s-control-2" with protocol "layer2"
```
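To verify the whole chain end to end, we can hit the MetalLB-assigned IP from the output above and expect nginx’s default page:

```sh
# 10.0.0.11 is the EXTERNAL-IP MetalLB assigned to the nginx service
curl -I http://10.0.0.11
# Expect an "HTTP/1.1 200 OK" with a "Server: nginx" header
```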