00 - Kubernetes - Getting Started

Kubernetes
Ansible
k3s
Building our first Kubernetes Cluster
Author

ProtossGP32

Published

February 18, 2023

Introduction

Read the Kubernetes Official Docs for a deeper explanation.

Kubernetes is a container orchestration engine: it manages the infrastructure resources involved in a microservice deployment, such as networking, the number of replicas, access to their endpoints and load balancing of requests. Its main purpose is to automate the deployment, scaling and management of containerized applications.

It can be daunting due to the sheer breadth of its features, so we'll try to approach it in the simplest and most maintainable way.

Main concepts

Kubernetes components: A Kubernetes cluster is mainly composed of the following components:

  • Control Plane components: these make global decisions about the cluster (for example, scheduling pods) and detect and respond to cluster events
  • Node components: these run on every node, maintaining running pods and providing the Kubernetes runtime environment
  • Addons: they use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features
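
Once a cluster is running (we'll build one below), the addons are easy to spot because they are just regular Kubernetes resources; on a default k3s installation this typically lists CoreDNS, the local-path provisioner and metrics-server:

Listing addons in the kube-system namespace
kubectl -n kube-system get deployments,daemonsets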

As described above, all control plane components are usually deployed on the same machine or node. It is also common to isolate the control plane nodes so they don't act as workers; this ensures that their critical mission of overseeing the whole cluster isn't compromised by a highly demanding user container.
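
The k3s cluster we build below leaves its control plane nodes schedulable, so user pods can land on them. If you do want to keep workloads off those nodes, tainting them is the usual mechanism; a minimal sketch, using the node names from this guide:

Optional: keeping workloads off the control plane nodes
# Tell the scheduler not to place regular pods on this control plane node
kubectl taint nodes k3s-control-1 node-role.kubernetes.io/control-plane:NoSchedule
# Repeat for k3s-control-2 and k3s-control-3; append '-' to the taint to remove it again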

When dealing with High-Availability, we’ll configure more than one control plane node to ensure redundancy.

Requirements

To easily deploy our first Kubernetes cluster, we'll rely on TechnoTim's guide to deploying k3s with Ansible playbooks. The guide promises a fully automated k3s deployment, so we'll give it a try.

For this guide, we’ll need the following resources:

  • A central machine from where we’ll launch Ansible playbooks and access our k3s cluster:

    We’ll be using an LXC with Alpine installed (128MB of RAM and 2GB of disk)

    Installing kubectl and ansible on alpine
    # Installing kubectl
    apk add curl
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    mv kubectl /sbin/
    chmod u+x /sbin/kubectl
    
    # Installing ansible
    apk add ansible
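
    A quick sanity check that both tools are installed and on the PATH:

    Checking the installed versions
    kubectl version --client
    ansible --version
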
Enable SSH in LXC container

Accessing the LXC through a serial console attachment is a pain. When creating it in Proxmox we had the chance to add our SSH key, but OpenSSH isn't installed by default in Alpine images. Attach to the LXC and do the following:

Alpine: install OpenSSH, enable it on boot and start it
apk add openssh
rc-update add sshd     # start sshd on every boot
rc-service sshd start  # start it now without rebooting

Check that your public key is already shared with the container:

Alpine: authorized keys
cat ~/.ssh/authorized_keys
  • At least 5 VMs for our cluster (2 CPU, 2GB of RAM and at least 5GB of disk):
    • 3 for control plane:
      • k3s-control-1
      • k3s-control-2
      • k3s-control-3
    • 2 for workers:
      • k3s-worker-1
      • k3s-worker-2

Ansible configuration

Follow the guide's instructions and clone its git repository. Make a copy of the inventory/sample directory and name it inventory/my-cluster, then modify the following files according to your servers:

inventory/my-cluster/hosts.ini
[master]
k3s-control-1
k3s-control-2
k3s-control-3

[node]
k3s-worker-1
k3s-worker-2

# Keep the combined group from the sample file; the playbook targets it
[k3s_cluster:children]
master
node

SSH keys

Remember to exchange your SSH public key with all of the servers, otherwise Ansible will fail to run its commands.
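
A minimal sketch of doing that with ssh-copy-id, assuming the key pair already exists on the Ansible machine and using the host names from the inventory above (if ssh-copy-id isn't available, append the public key to ~/.ssh/authorized_keys on each node by hand):

Distributing the SSH public key
for host in k3s-control-1 k3s-control-2 k3s-control-3 k3s-worker-1 k3s-worker-2; do
    ssh-copy-id common-user-to-all-servers@"$host"
done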

inventory/my-cluster/group_vars/all.yml
# this is the user that has ssh access to these machines
# You have to exchange your public SSH key with them
ansible_user: "common-user-to-all-servers"

# apiserver_endpoint is the virtual IP address that will be configured on each master
# Make sure this IP is within the network range and is reachable
apiserver_endpoint: "192.168.30.222"

# k3s_token is required so that masters can talk to each other securely
# this token should be alpha numeric only
k3s_token: "some-SUPER-DEDEUPER-secret-password"

# metallb ip range for load balancer
# Make sure this range is reachable and not already in use on your network
metal_lb_ip_range: "192.168.30.80-192.168.30.90"
Using Alpine server for Ansible

You may need to launch the following commands:

# For cluster servers in different subnets
apk add py3-netaddr

# For installing ansible-galaxy requirements
# Regenerate SSL certificates
# Guide: https://wiki.alpinelinux.org/wiki/Generating_SSL_certs_with_ACF
/sbin/setup-acf
apk add acf-openssl
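
Before launching the playbook, it's worth confirming that Ansible can reach every node in the inventory with the configured user (run this from the repository root; the ping module needs Python on the targets, which standard server images include):

Testing connectivity to all nodes
ansible all -i ./inventory/my-cluster/hosts.ini -m ping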

Installation

Ansible playbook launch

Now launch the following commands from the repository root:

Ansible command
# Install ansible requirements
ansible-galaxy install -r ./collections/requirements.yml

# Launch the installation playbook
ansible-playbook ./site.yml -i ./inventory/my-cluster/hosts.ini

Check for any errors during the ansible-playbook execution. If everything is OK, copy the k3s cluster config to the central server:

Copying cluster config and testing
# Make sure the '.kube' folder exists on the central server
mkdir -p ~/.kube

# Copy the config from one of the master nodes
scp common-user-to-all-servers@k3s-control-1:~/.kube/config ~/.kube/config

# Check that we can access the cluster
sudo kubectl get nodes
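
If ~/.kube/config doesn't exist on the master, the same information lives in k3s's default kubeconfig location, /etc/rancher/k3s/k3s.yaml. A hedged sketch of building the config by hand (the file is owned by root with restrictive permissions, so you may need elevated privileges on the node; 192.168.30.222 is the apiserver_endpoint we configured earlier):

Fallback: building the kubeconfig from the k3s default location
# Pull the kubeconfig straight from a master node
scp common-user-to-all-servers@k3s-control-1:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Point it at the virtual API server address instead of localhost
sed -i 's/127.0.0.1/192.168.30.222/' ~/.kube/config
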
Worker nodes not showing in the cluster on the first attempt

We still need to check what goes wrong in the Ansible steps when they reach the workers.

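If a failed run leaves a node in a half-configured state, the repository also ships a reset playbook that removes k3s from every host so the installation can be relaunched from a clean slate (check that reset.yml exists in the copy you cloned):

Resetting the nodes with Ansible
ansible-playbook ./reset.yml -i ./inventory/my-cluster/hosts.ini
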
After restarting the nodes and launching the playbook several times, all nodes now appear in the k3s cluster:

kubectl check of nodes
~/k3s-ansible # kubectl get nodes
NAME            STATUS   ROLES                       AGE     VERSION
k3s-control-1   Ready    control-plane,etcd,master   22m     v1.24.10+k3s1
k3s-control-2   Ready    control-plane,etcd,master   22m     v1.24.10+k3s1
k3s-control-3   Ready    control-plane,etcd,master   22m     v1.24.10+k3s1
k3s-worker-1    Ready    <none>                      5m14s   v1.24.10+k3s1
k3s-worker-2    Ready    <none>                      5m2s    v1.24.10+k3s1
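
The <none> under ROLES for the workers is only cosmetic: kubectl fills that column from node-role.kubernetes.io/* labels, which k3s sets on the control plane nodes but not on the agents. If you want the workers to display a role as well, you can label them yourself; scheduling is not affected:

Optional: giving the workers a visible role
kubectl label nodes k3s-worker-1 node-role.kubernetes.io/worker=worker
kubectl label nodes k3s-worker-2 node-role.kubernetes.io/worker=worker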

Deployment example

The git repository includes a deployment example: an nginx web server configured with 3 replicas, plus a load-balancer service in front of it.

Deploy it with the following commands:

Deploying our first application
kubectl apply -f example/deployment.yml
kubectl apply -f example/service.yml
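
If you want to wait until all replicas are up before checking, kubectl can block on the rollout (assuming the Deployment defined in example/deployment.yml is named nginx, as the pod names below suggest):

Waiting for the rollout to finish
kubectl rollout status deployment/nginx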

We should see 3 nginx pods and a service:

Pods deployed on different nodes
~/k3s-ansible # kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP          NODE           NOMINATED NODE   READINESS GATES
nginx-6fb79bc456-79v99   1/1     Running   0          5m47s   10.42.4.3   k3s-worker-2   <none>           <none>
nginx-6fb79bc456-gj6zh   1/1     Running   0          5m47s   10.42.3.3   k3s-worker-1   <none>           <none>
nginx-6fb79bc456-qj472   1/1     Running   0          5m47s   10.42.4.2   k3s-worker-2   <none>           <none>
Load-balancer service for the Nginx replicas
~/k3s-ansible # kubectl get service -o wide
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP      10.43.0.1      <none>        443/TCP        34m     <none>
nginx        LoadBalancer   10.43.170.67   10.0.0.11     80:30466/TCP   6m16s   app=nginx

~/k3s-ansible # kubectl describe service nginx
Name:                     nginx
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx
Type:                     LoadBalancer # Here we see that the service type is a load balancer between all the available nginx replicas
IP Family Policy:         PreferDualStack
IP Families:              IPv4
IP:                       10.43.170.67 # This is the Cluster IP assigned to the service
IPs:                      10.43.170.67
LoadBalancer Ingress:     10.0.0.11 # This is the IP assigned by the MetalLB controller as an Ingress access point
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30466/TCP
Endpoints:                10.42.3.3:80,10.42.4.2:80,10.42.4.3:80 # Here we can see the endpoints of the nginx replicas
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age    From                Message
  ----    ------        ----   ----                -------
  Normal  IPAllocated   6m34s  metallb-controller  Assigned IP ["10.0.0.11"]
  Normal  nodeAssigned  6m34s  metallb-speaker     announcing from node "k3s-control-2" with protocol "layer2"
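
To confirm that the load balancer actually answers, we can hit the LoadBalancer Ingress address shown above from the central machine; it should return the default nginx welcome page. Afterwards, the example can be removed with the same manifests:

Testing and cleaning up the example
# The external IP assigned by MetalLB (see the service output above)
curl http://10.0.0.11

# Remove the example once we're done
kubectl delete -f example/service.yml
kubectl delete -f example/deployment.yml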