04 Dec 2025
As part of my DevOps learning journey, I set up a production-grade Kubernetes environment in my home lab. Instead of using Minikube (which is only for local development), I chose K3s - a lightweight Kubernetes distribution that's actually used in production environments for edge computing and IoT applications.
This tutorial covers the complete setup: installing K3s on an Ubuntu server and configuring remote access from my workstation, enabling a professional workflow similar to what DevOps engineers use daily.
Server Specifications:
Workstation:
Before diving in, let me explain why I chose K3s over other options:
K3s is perfect for home labs because it leaves resources for other applications while providing a real Kubernetes experience: it's a CNCF-certified distribution, fully conformant with standard Kubernetes.
From my workstation, I connected via SSH:
ssh username@server-ip
The installation is surprisingly simple - just one command:
curl -sfL https://get.k3s.io | sh -
This script downloads and installs K3s, sets it up as a systemd service, and starts it automatically. The installation takes about 1 minute.
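If you want reproducible installs rather than whatever release is current, the official install script reads environment variables such as INSTALL_K3S_VERSION (documented by the K3s project). A hedged sketch, pinning the version shown later in this post:

```shell
# Optional: pin the installer to a specific K3s release instead of the latest.
# INSTALL_K3S_VERSION is read by the official get.k3s.io script; the version
# string here is just an example - pick the release you actually want.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.33.6+k3s1" sh -
```

The installer also drops an uninstall script on the server, so a pinned install is easy to redo if you want to test upgrades.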
Check that K3s is running:
sudo systemctl status k3s
You should see "active (running)" in green.
Check that your node is ready:
sudo k3s kubectl get nodes
Output should show:
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 73s v1.33.6+k3s1
Running kubectl commands via SSH every time is inefficient. Here's how to control your cluster directly from your local machine.
On the server, display the K3s configuration:
sudo cat /etc/rancher/k3s/k3s.yaml
Copy the entire output.
On your local machine (not the server), create the kubectl config directory:
mkdir -p ~/.kube
Create the config file:
nano ~/.kube/config
Paste the content you copied, then make one critical change. Change this line:
server: https://127.0.0.1:6443
To:
server: https://YOUR-SERVER-IP:6443
Save the file (Ctrl+O, Enter, Ctrl+X).
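The manual copy-and-edit above can also be scripted with sed. Here's a minimal sketch of the edit step, demonstrated against a sample file so you can see exactly what changes (192.0.2.10 is a placeholder IP; substitute your server's address):

```shell
# Create a sample file shaped like the relevant part of /etc/rancher/k3s/k3s.yaml
cat > /tmp/k3s-config-sample.yaml <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Rewrite the loopback address to the K3s server's IP (the manual nano edit, automated)
sed -i 's|https://127.0.0.1:6443|https://192.0.2.10:6443|' /tmp/k3s-config-sample.yaml

# Confirm the server line now points at the remote host
grep 'server:' /tmp/k3s-config-sample.yaml
```

In practice you would run the same sed command against ~/.kube/config after pasting in the copied content; it's also sensible to `chmod 600 ~/.kube/config`, since the file contains credentials.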
If you don't have kubectl installed on your workstation:
sudo snap install kubectl --classic
From your workstation, test the connection:
kubectl get nodes
You should see your server node listed as Ready. Congratulations! You can now control your Kubernetes cluster from your workstation.
Let me clarify how this works, as it's crucial to understand:
Your workstation: acts as your "control console" - you run kubectl commands here
The server: where containers actually run
When you run kubectl apply, kubectl connects to the server (via the network, on port 6443) and the server carries out the change.
Analogy: Your workstation is the remote control, the server is the TV. You press buttons on the remote (kubectl), but the action happens on the TV (server).
To see which cluster kubectl is connected to:
kubectl config get-contexts
To verify the server IP:
kubectl config view | grep server
This should show server: https://YOUR-SERVER-IP:6443
Namespaces are like folders in Kubernetes for organizing and isolating resources: they keep different projects' workloads separate, prevent naming collisions, and make cleanup as easy as deleting the namespace.
Create a namespace:
kubectl create namespace my-app
List namespaces:
kubectl get namespaces
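The imperative command above also has a declarative equivalent, which fits the file-based workflow shown later with kubectl apply. A minimal manifest sketch:

```yaml
# namespace.yaml - declarative equivalent of `kubectl create namespace my-app`
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
```

Applying it with `kubectl apply -f namespace.yaml` is idempotent, so it's safe to keep the file in version control and re-apply it.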
The ~/.kube/config file tells kubectl which API server to connect to and which credentials to authenticate with. This file is what enables the remote management we set up.
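For orientation, a trimmed kubeconfig looks roughly like this. The field names are standard; the values here are placeholders standing in for what K3s generates in /etc/rancher/k3s/k3s.yaml:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://192.0.2.10:6443        # the line we edited to point at the K3s host
    certificate-authority-data: <base64>   # CA the client trusts for TLS
users:
- name: default
  user:
    client-certificate-data: <base64>      # client credentials for authentication
    client-key-data: <base64>
contexts:
- name: default
  context:
    cluster: default
    user: default
current-context: default
```

The context ties a cluster to a user, which is why `kubectl config get-contexts` is the quickest way to see where your commands will land.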
# View all resources in a namespace
kubectl get all -n namespace-name
# View pods with detailed information (including which node they're on)
kubectl get pods -n namespace-name -o wide
# View logs from a pod
kubectl logs pod-name -n namespace-name
# Describe a resource (useful for debugging)
kubectl describe pod pod-name -n namespace-name
# Apply configuration files
kubectl apply -f file.yaml -n namespace-name
# Apply all files in a directory
kubectl apply -f . -n namespace-name
# Delete resources
kubectl delete -f file.yaml -n namespace-name
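One more habit worth adopting alongside these commands: generating manifests instead of writing them from scratch. The `--dry-run=client -o yaml` flags print what kubectl would create without touching the cluster. A sketch (the deployment name and image are just examples):

```shell
# Print a Deployment manifest without creating anything on the cluster,
# then save it as a starting point for version-controlled YAML
kubectl create deployment demo --image=nginx --dry-run=client -o yaml > deployment.yaml
```

This keeps you in the apply-from-files workflow shown above rather than accumulating one-off imperative changes.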
Now that you have a working Kubernetes cluster with remote access configured, you're ready to deploy applications. In my next post, I'll walk through deploying a multi-container voting application to demonstrate these concepts in practice.
Setting up K3s provided me with a production-like Kubernetes environment that uses minimal resources while teaching real-world DevOps practices. The remote access configuration enables a professional workflow where I can manage the cluster from my workstation, just like DevOps engineers manage production clusters.
This setup is now the foundation for my learning projects, all of which I document on this blog and publish to my GitHub.