I recently set up a K3s cluster on my local machine using Incus VMs for the nodes. The process was surprisingly smooth, thanks to a great tool called k3sup. This post is a quick walkthrough of the steps I took, based on my shell history.
First, I needed a few virtual machines to act as my Kubernetes nodes. I decided on one manager and three worker nodes. I used the incus command to launch Debian 13 (Trixie) VMs.
I started by creating the three worker nodes:
for x in 1 2 3; do \
incus launch images:debian/trixie/amd64 worker-$x --config limits.cpu=2 --config limits.memory=2GiB --vm; \
done
And then the manager node:
incus launch images:debian/trixie/amd64 manager --config limits.cpu=2 --config limits.memory=2GiB --vm
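The VMs take a few seconds to boot, and incus exec on a VM relies on the guest agent inside it, which needs a moment to come up. So before going further, it's worth confirming everything is running:

# All four VMs should show RUNNING and have picked up an IPv4 address
incus ls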
For k3sup to connect to the nodes over SSH, I needed a user on each of them with key-based SSH access and passwordless sudo.
I created a user named sumar on all four machines:
# For workers
for x in 1 2 3; do incus exec worker-$x -- useradd sumar; done
# For manager
incus exec manager -- useradd sumar
Next, I set up the SSH directory and authorized_keys file for the sumar user on each node.
# Create .ssh directory for workers
for x in 1 2 3; do \
incus exec worker-$x -- bash -c "mkdir -p /home/sumar/.ssh && chown sumar:sumar /home/sumar/.ssh"; \
done
# Create .ssh directory for manager
incus exec manager -- bash -c "mkdir -p /home/sumar/.ssh && chown sumar:sumar /home/sumar/.ssh"
# Add my public key to all nodes
for x in 1 2 3; do \
incus exec worker-$x -- bash -c "echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG2afn36BgoIRz3s54F2oHoypudzZkYeoG/8nQyf+4Dj for-incus' > /home/sumar/.ssh/authorized_keys"; \
done
incus exec manager -- bash -c "echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG2afn36BgoIRz3s54F2oHoypudzZkYeoG/8nQyf+4Dj for-incus' > /home/sumar/.ssh/authorized_keys"
I also needed to install an SSH server and grant sudo rights.
# Install OpenSSH server and curl (the && must run inside the guest, hence bash -c)
for x in 1 2 3; do incus exec worker-$x -- bash -c "apt update && apt install -y openssh-server curl"; done
incus exec manager -- bash -c "apt update && apt install -y openssh-server curl"
# Grant passwordless sudo
for x in 1 2 3; do \
incus exec worker-$x -- bash -c "echo 'sumar ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/sumar"; \
done
incus exec manager -- bash -c "echo 'sumar ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/sumar"
k3sup is a utility that bootstraps K3s clusters over SSH. Follow this guide to install it: https://github.com/alexellis/k3sup?tab=readme-ov-file#download-k3sup-tldr
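At the time of writing, the TL;DR from that guide amounts to running the project's installer script and moving the binary into your PATH:

# From the k3sup README (run the installer, then install the binary)
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
k3sup --help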
With the nodes and k3sup ready, it was time to create the cluster.
First, I installed the K3s control plane on the manager node. I made sure to grab the IP address of the manager from incus ls.
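The lookup can also be scripted rather than eyeballed; a sketch, assuming the VM reports a single IPv4 address (incus list prints the cell as "ADDRESS (interface)", hence the cut):

# Pull the manager's IPv4 address directly (column code 4 is IPv4)
export IP=$(incus list manager --format csv --columns 4 | cut -d' ' -f1)

In my case I just copied it by hand: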
# Get manager IP from `incus ls`
export IP=10.50.147.174
k3sup install \
--ip $IP \
--user sumar \
--ssh-key /home/sumar/.ssh/incus \
--context k3s \
--no-extras \
--k3s-extra-args '--flannel-backend wireguard-native'
This command installs K3s with Flannel's wireguard-native backend for the CNI, skips the bundled extras (--no-extras disables Traefik and the service load balancer), and writes a kubeconfig file to my local machine.
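Since the WireGuard backend is the one non-default choice here, it's worth confirming it actually came up. A quick check, assuming the wireguard-native backend names its interface flannel-wg (which is what Flannel uses for this backend):

# Flannel's wireguard-native backend creates a flannel-wg interface on each node
incus exec manager -- ip -brief link show flannel-wg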
I then set my KUBECONFIG environment variable and checked the node status:
export KUBECONFIG=$(pwd)/kubeconfig
kubectl get node -o wide
At this point, only the manager node was visible.
Finally, I joined the three worker nodes to the cluster. I got their IP addresses from incus ls.
# IPs from `incus ls`
WORKER_IPS="10.50.147.205 10.50.147.200 10.50.147.191"
SERVER_IP="10.50.147.174"
for WORKER_IP in $WORKER_IPS; do \
k3sup join \
--ip $WORKER_IP \
--server-ip $SERVER_IP \
--user sumar \
--ssh-key /home/sumar/.ssh/incus; \
done
After a few moments, I checked the nodes again:
kubectl get node -o wide
NAME       STATUS   ROLES                  AGE     VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION        CONTAINER-RUNTIME
manager    Ready    control-plane,master   10m     v1.32.6+k3s1   10.50.147.174   <none>        Debian GNU/Linux 13 (trixie)   6.12.35+deb13-amd64   containerd://2.0.5-k3s1.32
worker-1   Ready    <none>                 4m42s   v1.32.6+k3s1   10.50.147.205   <none>        Debian GNU/Linux 13 (trixie)   6.12.35+deb13-amd64   containerd://2.0.5-k3s1.32
worker-2   Ready    <none>                 3m41s   v1.32.6+k3s1   10.50.147.200   <none>        Debian GNU/Linux 13 (trixie)   6.12.35+deb13-amd64   containerd://2.0.5-k3s1.32
worker-3   Ready    <none>                 2m45s   v1.32.6+k3s1   10.50.147.191   <none>        Debian GNU/Linux 13 (trixie)   6.12.35+deb13-amd64   containerd://2.0.5-k3s1.32
Success! All four nodes (1 manager, 3 workers) were listed as Ready. The cluster was up and running.
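As a final smoke test, it's nice to see pods actually land on the workers. A quick, disposable check (the deployment name web is arbitrary):

# Spread a few nginx replicas across the cluster, then see where they were scheduled
kubectl create deployment web --image=nginx --replicas=3
kubectl get pods -o wide
# Clean up afterwards
kubectl delete deployment web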