Set up your personal Kubernetes cluster with k3s and k3d

Suren Raju
7 min read · Apr 18, 2021

There are a lot of reasons why you might want your own personal Kubernetes cluster. Running a Kubernetes cluster on your development machine gives you fast iteration times in a production-like environment.

There are several options to set up a Kubernetes cluster on a local machine for development or testing purposes. But with a full-blown Kubernetes cluster running on your local machine, you will soon hit a wall if you want to play with a multi-node cluster or multiple clusters on the same machine.

To address this, we will look at how to set up a lightweight Kubernetes cluster using k3s and k3d on our local machine.

What is k3s?

K3s is a lightweight, easy-to-use, CNCF-certified Kubernetes distribution created at Rancher Labs. Designed for low-resource environments, K3s is distributed as a single binary of less than 40MB that uses under 512MB of RAM.
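
In this article we will run K3s inside Docker via k3d, but if you want to try K3s directly on a Linux host, the upstream install script is usually run like this (a minimal sketch; check the official K3s documentation for options):

curl -sfL https://get.k3s.io | sh -
# Installs K3s as a service and writes a kubeconfig to /etc/rancher/k3s/k3s.yaml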

If you are interested in what makes k3s so light, you can watch the talk on k3s under the hood.

What is k3d?

k3d is a lightweight wrapper for running a K3s cluster in Docker. k3d makes it very easy to create single and multi-node k3s clusters in docker, e.g. for local development on Kubernetes.

k3d uses a Docker image built from the K3s repository to spin up multiple K3s nodes in Docker containers on any machine with Docker installed. That way, a single physical (or virtual) machine (let’s call it Docker Host) can run multiple K3s clusters, with multiple server and agent nodes each, simultaneously.

If you are interested in understanding more about k3d, you can watch the talk "Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d".

Installation

Installation is very easy and available through many installers such as wget, curl, Homebrew, AUR, etc. It supports all well-known operating systems (Linux, macOS, Windows) and processor architectures (386, amd64).

I am installing with the following commands on Ubuntu 16.04. Please refer to the instructions in the official documentation for your environment.

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
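
The same install script can also be fetched with curl, and Homebrew users can install k3d as a package (both are alternatives to the wget line above; the installed version may differ from the one used in this article):

curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
# or, with Homebrew
brew install k3d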

Let us verify the installed version

k3d --version
k3d version v4.4.1
k3s version v1.20.5-k3s1 (default)

List existing clusters

k3d cluster list
NAME SERVERS AGENTS LOADBALANCER

Of course, there are no existing clusters.

Cluster Creation — The “Simple” Way

Let's create a first k3d cluster by running the following command:

k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0001] Starting cluster 'k3s-default'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-k3s-default-server-0'
INFO[0009] Starting agents...
INFO[0009] Starting helpers...
INFO[0009] Starting Node 'k3d-k3s-default-serverlb'
INFO[0009] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0013] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0013] Cluster 'k3s-default' created successfully!
INFO[0013] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0014] You can now use it like this:
kubectl config use-context k3d-k3s-default
kubectl cluster-info

By default, the k3d cluster create command creates a single-node cluster with a default name.

Use the --help flag to see all possible parameters of the cluster create command:

k3d cluster create --help
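
For example, you can pass a cluster name, pin the K3s version via the --image flag, and add worker nodes with --agents (my-cluster and the image tag below are illustrative values; any rancher/k3s tag should work):

k3d cluster create my-cluster --image rancher/k3s:v1.20.5-k3s1 --agents 2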

Let us list the created cluster

k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
k3s-default 1/1 0/0 true

Now we can see a cluster created with the default name k3s-default.

By default, the cluster create command also configures the kubeconfig file under ~/.kube/config.

cat ~/.kube/config
# The above command displays the content of the kubeconfig file

kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:41218
CoreDNS is running at https://0.0.0.0:41218/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:41218/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-k3s-default-server-0 Ready control-plane,master 8m29s v1.20.5+k3s1
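
If you prefer to manage kubeconfig files explicitly instead of letting k3d touch ~/.kube/config, k3d provides dedicated subcommands (flag names as of k3d v4; see k3d kubeconfig --help for details):

k3d kubeconfig get k3s-default
# prints the cluster's kubeconfig to stdout
k3d kubeconfig merge k3s-default --kubeconfig-merge-default
# merges it into the default kubeconfig (~/.kube/config)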

docker ps will show the underlying containers created by the cluster create command

docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
119d0da8c77f rancher/k3d-proxy:v4.4.1 "/bin/sh -c nginx-pr…" 10 minutes ago Up 1 minutes 80/tcp, 0.0.0.0:41218->6443/tcp k3d-k3s-default-serverlb
20c9a0606ee6 rancher/k3s:v1.20.5-k3s1 "/bin/k3s server --t…" 10 minutes ago Up 1 minutes k3d-k3s-default-server-0

Let us clean up the created resources

k3d cluster delete
INFO[0000] Deleting cluster 'k3s-default'
INFO[0000] Deleted k3d-k3s-default-serverlb
INFO[0001] Deleted k3d-k3s-default-server-0
INFO[0001] Deleting cluster network 'k3d-k3s-default'
INFO[0001] Deleting image volume 'k3d-k3s-default-images'
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster k3s-default!

You can use the following command to create a single-node cluster named dev-cluster with the following port mappings:

  • add a mapping of local host port 8080 to loadbalancer port 80, which will proxy requests to port 80 on all agent nodes
  • add a mapping of local host port 8443 to loadbalancer port 443, which will proxy requests to port 443 on all agent nodes
k3d cluster create dev-cluster --port 8080:80@loadbalancer --port 8443:443@loadbalancer
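
After creation, kubectl should already point at the new cluster (a quick sanity check, assuming k3d was allowed to update the default kubeconfig):

kubectl config current-context
k3d-dev-cluster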

Let us test the cluster by deploying and exposing a simple nginx container application

kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

kubectl create service clusterip nginx --tcp=80:80
service/nginx created

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.networking.k8s.io/nginx created
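
As the warning suggests, the v1beta1 Ingress API is removed in Kubernetes v1.22+. On newer clusters, a roughly equivalent manifest using networking.k8s.io/v1 would look like this (a sketch; the ssl-redirect annotation depends on the Traefik version bundled with your K3s image):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF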

Now you can access the nginx application from your local machine at http://localhost:8080.
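
A quick check from the Docker host (the HTML body may vary slightly with the nginx image version):

curl -s http://localhost:8080 | grep title
<title>Welcome to nginx!</title>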

Cluster Creation — The “Simple but Sophisticated” Way

Let us create a k3s cluster named dev-cluster with three server (master) and three agent (worker) nodes.
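
If the single-node dev-cluster from the previous section is still around, the name will clash, so delete it first (assuming you created it above and have not already removed it):

k3d cluster delete dev-cluster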

Port Mapping Requirements

  • add a mapping of local host port 8080 to loadbalancer port 80, which will proxy requests to port 80 on all agent nodes
  • add a mapping of local host port 8443 to loadbalancer port 443, which will proxy requests to port 443 on all agent nodes
  • add a mapping of local host port 6443 to loadbalancer port 6443, so that the load balancer becomes the access point to the Kubernetes API; even for multi-server clusters, you only need to expose a single API port. The load balancer will then take care of proxying your requests to the appropriate server node
  • You may as well expose a NodePort range (if you want to avoid the Ingress Controller) with -p "32000-32767:32000-32767@loadbalancer"
k3d cluster create dev-cluster --port 8080:80@loadbalancer --port 8443:443@loadbalancer --api-port 6443 --servers 3 --agents 3
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-dev-cluster'
INFO[0000] Created volume 'k3d-dev-cluster-images'
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-dev-cluster-server-0'
INFO[0001] Creating node 'k3d-dev-cluster-server-1'
INFO[0002] Creating node 'k3d-dev-cluster-server-2'
INFO[0002] Creating node 'k3d-dev-cluster-agent-0'
INFO[0002] Creating node 'k3d-dev-cluster-agent-1'
INFO[0002] Creating node 'k3d-dev-cluster-agent-2'
INFO[0002] Creating LoadBalancer 'k3d-dev-cluster-serverlb'
INFO[0002] Starting cluster 'dev-cluster'
INFO[0002] Starting the initializing server...
INFO[0002] Starting Node 'k3d-dev-cluster-server-0'
INFO[0004] Starting servers...
INFO[0004] Starting Node 'k3d-dev-cluster-server-1'
INFO[0029] Starting Node 'k3d-dev-cluster-server-2'
INFO[0042] Starting agents...
INFO[0042] Starting Node 'k3d-dev-cluster-agent-0'
INFO[0051] Starting Node 'k3d-dev-cluster-agent-1'
INFO[0059] Starting Node 'k3d-dev-cluster-agent-2'
INFO[0067] Starting helpers...
INFO[0067] Starting Node 'k3d-dev-cluster-serverlb'
INFO[0069] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0078] Successfully added host record to /etc/hosts in 7/7 nodes and to the CoreDNS ConfigMap
INFO[0078] Cluster 'dev-cluster' created successfully!
INFO[0078] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0078] You can now use it like this:
kubectl config use-context k3d-dev-cluster
kubectl cluster-info

Let us verify the k3d nodes

k3d node list
NAME ROLE CLUSTER STATUS
k3d-dev-cluster-agent-0 agent dev-cluster running
k3d-dev-cluster-agent-1 agent dev-cluster running
k3d-dev-cluster-agent-2 agent dev-cluster running
k3d-dev-cluster-server-0 server dev-cluster running
k3d-dev-cluster-server-1 server dev-cluster running
k3d-dev-cluster-server-2 server dev-cluster running
k3d-dev-cluster-serverlb loadbalancer dev-cluster running
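
Because every node is just a Docker container, the whole cluster can be stopped and started again, and extra nodes can be added on the fly (illustrative commands; the extra-agent name is arbitrary):

k3d cluster stop dev-cluster
k3d cluster start dev-cluster
k3d node create extra-agent --cluster dev-cluster --role agent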

Let us verify the k3s cluster info and node details

kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:6443
CoreDNS is running at https://0.0.0.0:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-dev-cluster-agent-0 Ready <none> 2m51s v1.20.5+k3s1
k3d-dev-cluster-agent-1 Ready <none> 2m43s v1.20.5+k3s1
k3d-dev-cluster-agent-2 Ready <none> 2m35s v1.20.5+k3s1
k3d-dev-cluster-server-0 Ready control-plane,etcd,master 3m25s v1.20.5+k3s1
k3d-dev-cluster-server-1 Ready control-plane,etcd,master 3m12s v1.20.5+k3s1
k3d-dev-cluster-server-2 Ready control-plane,etcd,master 2m58s v1.20.5+k3s1
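
For local development, locally built images can be imported straight into the cluster nodes without pushing to a registry (my-app:dev is just a hypothetical image tag):

docker build -t my-app:dev .
k3d image import my-app:dev -c dev-cluster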

