Setting Up a K3s Kubernetes Cluster for Home Lab Development

04 Dec 2025

Introduction

As part of my DevOps learning journey, I set up a production-grade Kubernetes environment in my home lab. Instead of using Minikube (which is only for local development), I chose K3s - a lightweight Kubernetes distribution that's actually used in production environments for edge computing and IoT applications.

This tutorial covers the complete setup: installing K3s on an Ubuntu server and configuring remote access from my workstation, enabling a professional workflow similar to what DevOps engineers use daily.

My Setup

Server Specifications:

Workstation:

Why K3s?

Before diving in, let me explain why I chose K3s over other options:

• Lightweight: a single binary with a far smaller memory footprint than a full kubeadm cluster
• Conformant: K3s is a CNCF-certified Kubernetes distribution, so everything you learn transfers directly to standard K8s
• Batteries included: containerd, a Traefik ingress controller, and local-path storage all ship out of the box
• Production-ready: it's used in real edge and IoT deployments, not just as a learning tool

K3s is perfect for home labs because it leaves resources for other applications while providing a real Kubernetes experience that's 100% compatible with standard K8s.

Part 1: Installing K3s on the Server

Step 1: Connect to Your Server

From my workstation, I connected via SSH:

ssh username@server-ip

Step 2: Install K3s

The installation is surprisingly simple - just one command:

curl -sfL https://get.k3s.io | sh -

This script downloads and installs K3s, sets it up as a systemd service, and starts it automatically. The installation takes about 1 minute.
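The one-liner above always installs the latest stable release. If you want reproducible installs across machines, the script also honors an INSTALL_K3S_VERSION environment variable. A minimal sketch, wrapped in a function (my own convention, not part of K3s) so the pinned version is explicit:

```shell
# install_k3s_pinned <version>: install K3s at a specific release
# instead of the latest stable. INSTALL_K3S_VERSION is a documented
# option of the get.k3s.io script; the version is whatever you pass in.
install_k3s_pinned() {
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="$1" sh -
}
# usage: install_k3s_pinned "v1.33.6+k3s1"
```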

Step 3: Verify Installation

Check that K3s is running:

sudo systemctl status k3s

You should see "active (running)" in green.

Step 4: Verify the Cluster

Check that your node is ready:

sudo k3s kubectl get nodes

Output should show:

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   73s   v1.33.6+k3s1
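While you're working on the server itself, typing sudo k3s kubectl before every command gets old fast. A small shell function shortens it to k; this is just a personal convenience (add it to ~/.bashrc to keep it), not a K3s feature:

```shell
# Forward any arguments to the kubectl that ships bundled with k3s.
k() {
    sudo k3s kubectl "$@"
}
# usage: k get nodes
```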

Part 2: Remote Access from Your Workstation

Running kubectl commands via SSH every time is inefficient. Here's how to control your cluster directly from your local machine.

Step 1: Export Cluster Configuration

On the server, display the K3s configuration:

sudo cat /etc/rancher/k3s/k3s.yaml

Copy the entire output.
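As an alternative to copy-pasting by hand, you can fetch the file and rewrite the server address in one pipeline from the workstation. A sketch as a shell function; the username and IP are placeholders for your own setup, and the sed call performs the address change described in the next step:

```shell
# fetch_kubeconfig <user> <server-ip>: pull the server's k3s.yaml and
# rewrite the loopback address so kubectl targets the remote machine.
fetch_kubeconfig() {
    user="$1"; ip="$2"
    ssh "$user@$ip" "sudo cat /etc/rancher/k3s/k3s.yaml" \
        | sed "s/127.0.0.1/$ip/" > ~/.kube/config
    chmod 600 ~/.kube/config   # newer kubectl versions warn if others can read it
}
# usage: fetch_kubeconfig username 192.168.1.50
```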

Step 2: Configure kubectl on Your Workstation

On your local machine (not the server), create the kubectl config directory:

mkdir -p ~/.kube

Create the config file:

nano ~/.kube/config

Paste the content you copied. One change is critical: update this line:

server: https://127.0.0.1:6443

To:

server: https://YOUR-SERVER-IP:6443

Save the file (Ctrl+O, Enter, Ctrl+X).

Step 3: Install kubectl

If you don't have kubectl installed on your workstation:

sudo snap install kubectl --classic

Step 4: Verify Remote Connection

From your workstation, test the connection:

kubectl get nodes

You should see your server node listed as Ready. Congratulations! You can now control your Kubernetes cluster from your workstation.

Step 5: Understand the Connection

Let me clarify how this works, as it's crucial to understand:

  1. Your workstation: Acts as your "control console"

    • You write YAML files here
    • You execute kubectl commands here
  2. When you run kubectl apply:

    • The command executes on your workstation
    • kubectl connects to the server (via network, port 6443)
    • It sends instructions to the cluster
    • The server receives orders and creates pods/deployments
  3. The server: Where containers actually run

    • Kubernetes lives here
    • Pods and applications execute here

Analogy: Your workstation is the remote control, the server is the TV. You press buttons on the remote (kubectl), but the action happens on the TV (server).

Verifying Your Context

To see which cluster kubectl is connected to:

kubectl config get-contexts

To verify the server IP:

kubectl config view | grep server

This should show server: https://YOUR-SERVER-IP:6443

Key Concepts Learned

Namespaces

Namespaces are like folders in Kubernetes for organizing and isolating resources. Benefits include:

• Isolation: the same resource name (say, a service called web) can exist in different namespaces without colliding
• Organization: related resources are grouped together, which makes them easier to find and clean up
• Scoping: RBAC permissions and resource quotas can be applied per namespace

Create a namespace:

kubectl create namespace my-app

List namespaces:

kubectl get namespaces
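Putting this together, here is a hypothetical lifecycle: deploy something into its own namespace, then clean up by deleting the namespace, which removes everything inside it. The names demo and web are made up for illustration, and it's sketched as a function so the steps read as one unit:

```shell
# demo_namespace_lifecycle: create a namespace, deploy into it, then
# tear everything down by deleting the namespace itself.
demo_namespace_lifecycle() {
    kubectl create namespace demo
    kubectl create deployment web --image=nginx -n demo
    kubectl get pods -n demo
    kubectl delete namespace demo   # also removes the deployment and its pods
}
```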

The kubectl Configuration File

The ~/.kube/config file tells kubectl:

• Which cluster to connect to (the API server URL)
• How to authenticate (the client certificate and key embedded in the file)
• Which context (a named cluster-and-user pairing) is currently active

This file enables the remote management we set up.
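For reference, here is the overall shape of the file K3s generates, with the certificate data abridged to placeholders. In k3s.yaml the cluster, user, and context are all literally named default:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://YOUR-SERVER-IP:6443       # which API server to talk to
    certificate-authority-data: <base64 CA>   # how to trust it
users:
- name: default
  user:
    client-certificate-data: <base64 cert>    # how kubectl authenticates
    client-key-data: <base64 key>
contexts:
- name: default
  context:
    cluster: default
    user: default
current-context: default                      # the context currently in use
```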

Common Commands for Daily Use

# View all resources in a namespace
kubectl get all -n namespace-name

# View pods with detailed information (including which node they're on)
kubectl get pods -n namespace-name -o wide

# View logs from a pod
kubectl logs pod-name -n namespace-name

# Describe a resource (useful for debugging)
kubectl describe pod pod-name -n namespace-name

# Apply configuration files
kubectl apply -f file.yaml -n namespace-name

# Apply all files in a directory
kubectl apply -f . -n namespace-name

# Delete resources
kubectl delete -f file.yaml -n namespace-name

What's Next?

Now that you have a working Kubernetes cluster with remote access configured, you're ready to deploy applications. In my next post, I'll walk through deploying a multi-container voting application to put these concepts into practice.

Conclusion

Setting up K3s provided me with a production-like Kubernetes environment that uses minimal resources while teaching real-world DevOps practices. The remote access configuration enables a professional workflow where I can manage the cluster from my workstation, just like DevOps engineers manage production clusters.

This setup is now the foundation for my learning projects, all of which I document on this blog and publish to my GitHub.
