k3s is a lightweight, highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances. Originally developed by Rancher Labs (now part of SUSE), k3s strips away many optional and legacy features to achieve a binary size of less than 100MB, making it well suited to edge computing scenarios.
What Makes k3s Different from Standard Kubernetes
Unlike traditional Kubernetes distributions that can be complex to install and manage, k3s provides a simplified experience while maintaining full Kubernetes API compatibility. Here are the key differences:
- Single Binary: Everything packaged in a single ~100MB binary
- Simplified Installation: Single command installation process
- Lower Resource Requirements: Runs efficiently on systems with as little as 512MB RAM
- Built-in Components: Includes essential components like storage, networking, and ingress out of the box
- Edge-Optimized: Designed specifically for edge computing and IoT scenarios
System Requirements
Before installing k3s, ensure your Linux system meets these minimum requirements:
Minimum System Requirements:
- RAM: 512MB (1GB+ recommended)
- CPU: 1 core (2+ cores recommended)
- Storage: 2GB available disk space
- OS: Linux kernel 3.10+ (Ubuntu 16.04+, CentOS 7+, RHEL 7+)
- Network: Outbound internet access for initial setup
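The requirements above can be sanity-checked before installing. Below is an illustrative pre-flight sketch (the function names and warning thresholds are my own, not part of k3s); adjust it to your target environment:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check mirroring the minimums listed above.

# Succeed if version $1 is >= version $2 (e.g. "5.15" >= "3.10").
kernel_at_least() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

check_host() {
  local kernel mem_mb disk_gb
  kernel=$(uname -r | cut -d- -f1)
  mem_mb=$(awk '/MemTotal/ {printf "%d", $2/1024}' /proc/meminfo)
  disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

  kernel_at_least "$kernel" "3.10" || echo "WARN: kernel $kernel < 3.10"
  [ "$mem_mb" -ge 512 ]            || echo "WARN: only ${mem_mb}MB RAM"
  [ "$disk_gb" -ge 2 ]             || echo "WARN: only ${disk_gb}GB free on /"
  echo "pre-flight check complete"
}
```

Run `check_host` on each prospective node; any WARN line flags a minimum that is not met.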
Installing k3s on Linux
Quick Installation (Server Node)
The simplest way to install k3s is using the official installation script:
# Download and install k3s server
curl -sfL https://get.k3s.io | sh -
# Check installation status
sudo systemctl status k3s
Expected Output:
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2025-08-26 08:35:12 IST; 2min ago
Docs: https://k3s.io
Process: 1234 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Main PID: 1245 (k3s-server)
Tasks: 89
Memory: 512.5M
CPU: 15.234s
Custom Installation with Options
For production environments, you’ll often need custom configurations:
# Install with custom data directory and disable Traefik
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--data-dir /opt/k3s --disable traefik" sh -
# Install specific version
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.28.2+k3s1" sh -
# Install with a custom cluster token (the older K3S_CLUSTER_SECRET variable is deprecated)
curl -sfL https://get.k3s.io | K3S_TOKEN="mysecrettoken" sh -
Configuring kubectl Access
After installation, configure kubectl to interact with your k3s cluster:
# Copy k3s kubeconfig to default location
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
# Or export KUBECONFIG (note: /etc/rancher/k3s/k3s.yaml is readable only by root by default)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# Test cluster access
kubectl get nodes
Expected Output:
NAME        STATUS   ROLES                  AGE   VERSION
ubuntu-vm   Ready    control-plane,master   5m    v1.28.2+k3s1
Adding Worker Nodes
To create a multi-node cluster, you’ll need to add worker nodes. First, retrieve the node token from the server:
# On the server node, get the node token
sudo cat /var/lib/rancher/k3s/server/node-token
Then on each worker node, run:
# Replace SERVER_IP and NODE_TOKEN with actual values
curl -sfL https://get.k3s.io | K3S_URL=https://SERVER_IP:6443 K3S_TOKEN=NODE_TOKEN sh -
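When joining many workers, it can help to assemble that command once and reuse it. A small illustrative helper (the function name is my own, not part of k3s):

```shell
#!/usr/bin/env bash
# Build the agent install command from a server address and node token.
# Sketch only; verify the generated command before piping it to sh.
build_join_cmd() {
  local server_ip="$1" token="$2"
  printf 'curl -sfL https://get.k3s.io | K3S_URL=https://%s:6443 K3S_TOKEN=%s sh -' \
    "$server_ip" "$token"
}

# Example (placeholder values):
# build_join_cmd 192.168.1.10 "K10abc...::server:xyz"
```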
Deploying Your First Application
Let’s deploy a simple nginx application to test our k3s cluster:
# Create a deployment
kubectl create deployment nginx --image=nginx:latest
# Expose the deployment
kubectl expose deployment nginx --type=NodePort --port=80
# Check the deployment status
kubectl get deployments
kubectl get pods
kubectl get services
Expected Output:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           2m

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7854ff8877-k2x9p   1/1     Running   0          2m

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP        10m
nginx        NodePort    10.43.200.123   <none>        80:32456/TCP   1m
Using YAML Manifests
For more complex deployments, create YAML manifest files:
# Create a deployment manifest
cat <<EOF > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  type: NodePort
EOF
# Apply the manifest
kubectl apply -f nginx-deployment.yaml
k3s Configuration and Customization
Configuration File
Instead of passing command-line flags, you can use a configuration file:
# Create k3s config directory
sudo mkdir -p /etc/rancher/k3s
# Create configuration file (use tee: with "sudo cat <<EOF > file" the redirection runs as your user, not root)
sudo tee /etc/rancher/k3s/config.yaml > /dev/null <<EOF
# Cluster configuration
cluster-init: true
token: "my-shared-secret"

# Networking
flannel-backend: "vxlan"
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"

# Storage
default-local-storage-path: "/opt/local-path-provisioner"

# Disable components
disable:
  - traefik
  - servicelb

# Node configuration
node-name: "k3s-master"
node-label:
  - "node-type=master"
node-taint:
  - "master=true:NoSchedule"
EOF
# Restart k3s to apply configuration
sudo systemctl restart k3s
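Agent nodes read the same file path. A sketch of a worker-side configuration (the addresses, token, and names below are placeholders, not defaults):

```yaml
# /etc/rancher/k3s/config.yaml on an agent node (placeholder values)
server: "https://192.168.1.10:6443"
token: "my-shared-secret"
node-name: "k3s-worker-1"
node-label:
  - "node-type=worker"
```

Restart `k3s-agent` after editing so the agent picks up the new settings.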
Resource Management
Configure resource limits and requests for better cluster management:
# Create a resource quota
cat <<EOF > resource-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: default
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
    pods: "10"
EOF
kubectl apply -f resource-quota.yaml
# Check quota usage
kubectl describe quota compute-quota
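A ResourceQuota rejects pods that omit requests and limits, so it is often paired with a LimitRange that supplies defaults. A minimal sketch (the name and values here are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: default
spec:
  limits:
    - type: Container
      default:            # applied when a container sets no limits
        cpu: "200m"
        memory: 256Mi
      defaultRequest:     # applied when a container sets no requests
        cpu: "50m"
        memory: 64Mi
```

Apply it with `kubectl apply -f` alongside the quota so unannotated pods still schedule.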
Networking in k3s
Built-in Networking Components
k3s ships with Flannel as the default CNI (Container Network Interface) plugin. Unlike upstream Kubernetes, Flannel is compiled into the k3s binary itself, so you will not see separate flannel pods in kube-system:
# List kube-system pods (no flannel pods appear, since Flannel runs inside the k3s process)
kubectl get pods -n kube-system
# View network configuration
kubectl get nodes -o wide
# Check cluster network settings
kubectl cluster-info dump | grep -i cidr
Ingress Configuration
k3s includes Traefik as the default ingress controller. Here’s how to use it:
# Create an ingress resource
cat <<EOF > nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
    - host: nginx.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
EOF
kubectl apply -f nginx-ingress.yaml
# Check ingress status
kubectl get ingress
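The `kubernetes.io/ingress.class` annotation is deprecated in favor of the `spec.ingressClassName` field on current Kubernetes versions. An equivalent manifest using the modern field (same hypothetical host and service names as above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: traefik
  rules:
    - host: nginx.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
```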
Storage Management
k3s includes a local path provisioner for persistent storage:
# Check available storage classes
kubectl get storageclass
# Create a persistent volume claim
cat <<EOF > pvc-example.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-storage-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
kubectl apply -f pvc-example.yaml
# Check PVC status
kubectl get pvc
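Because local-path provisions volumes lazily, the PVC may show Pending until a pod consumes it. A throwaway pod that mounts the claim (pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: local-storage-pvc
```

Once this pod is Running, `kubectl get pvc` should show the claim Bound.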
Monitoring and Maintenance
Cluster Health Checks
# Check cluster health (kubectl get componentstatuses is deprecated in recent Kubernetes; prefer the readyz endpoint)
kubectl get --raw='/readyz?verbose'
# View cluster events
kubectl get events --sort-by=.metadata.creationTimestamp
# Check node resources
kubectl top nodes
kubectl top pods
# Describe node details
kubectl describe node <node-name>
Log Management
# View k3s service logs
sudo journalctl -u k3s -f
# Check pod logs
kubectl logs <pod-name>
# Get logs from all containers in a pod
kubectl logs <pod-name> --all-containers=true
# Follow logs in real-time
kubectl logs -f deployment/nginx
Security Best Practices
RBAC Configuration
Implement Role-Based Access Control for better security:
# Create a service account
kubectl create serviceaccount developer
# Create a role with limited permissions
cat <<EOF > developer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "create", "update", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "delete"]
EOF
kubectl apply -f developer-role.yaml
# Bind the role to the service account
kubectl create rolebinding developer-binding \
--role=developer-role \
--serviceaccount=default:developer
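If you prefer to keep RBAC declarative rather than imperative, the same binding can be expressed as a manifest (this sketch assumes the default namespace used above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: developer
    namespace: default
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io
```

You can verify the result with `kubectl auth can-i get pods --as=system:serviceaccount:default:developer`.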
Network Policies
# Create a network policy to restrict traffic
cat <<EOF > network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress: []
EOF
kubectl apply -f network-policy.yaml
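A deny-all policy usually needs companion allow rules. A sketch that re-admits traffic to the nginx pods from pods labeled `role=frontend` (both labels are illustrative choices, not defaults):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 80
```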
Backup and Recovery
Regular backups are crucial for maintaining cluster reliability. Note that the built-in snapshot commands apply when k3s uses the embedded etcd datastore (e.g. a server started with cluster-init: true); a default single-server install uses SQLite, which you back up by copying /var/lib/rancher/k3s/server/db instead:
# Create etcd snapshot (backup)
sudo k3s etcd-snapshot save
# List available snapshots
sudo k3s etcd-snapshot ls
# Restore from snapshot
sudo systemctl stop k3s
sudo k3s server --cluster-reset --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/<snapshot-name>
sudo systemctl start k3s
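Snapshots can also be scheduled rather than taken manually. A sketch of the relevant server configuration keys, assuming an embedded-etcd cluster (the schedule and retention values below are examples, not defaults):

```yaml
# /etc/rancher/k3s/config.yaml (server node using embedded etcd)
etcd-snapshot-schedule-cron: "0 */6 * * *"   # snapshot every 6 hours
etcd-snapshot-retention: 10                  # keep the 10 most recent snapshots
etcd-snapshot-dir: "/var/lib/rancher/k3s/server/db/snapshots"
```

Restart the k3s service after editing, then confirm snapshots accumulate with `sudo k3s etcd-snapshot ls`.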
Upgrading k3s
Keep your k3s installation up to date:
# Check current version
k3s --version
# Upgrade to latest version
curl -sfL https://get.k3s.io | sh -
# Upgrade to specific version
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.28.3+k3s1" sh -
# Verify upgrade (the --short flag was removed in kubectl v1.28)
kubectl version
Troubleshooting Common Issues
Common Issues and Solutions:
1. Node Not Ready
# Check node status
kubectl describe node <node-name>
# Restart k3s service
sudo systemctl restart k3s
2. Pod Stuck in Pending State
# Check resource availability
kubectl describe pod <pod-name>
kubectl top nodes
3. Network Connectivity Issues
# Check networking-related pods (k3s embeds Flannel in its binary, so no flannel pods will appear)
kubectl get pods -n kube-system
# Restart the k3s service to reinitialize networking if needed
sudo systemctl restart k3s
Uninstalling k3s
When you need to completely remove k3s from your system:
# For server nodes
sudo /usr/local/bin/k3s-uninstall.sh
# For agent nodes
sudo /usr/local/bin/k3s-agent-uninstall.sh
# Clean up remaining files (optional)
sudo rm -rf /etc/rancher/k3s
sudo rm -rf /var/lib/rancher/k3s
Conclusion
k3s provides an excellent lightweight alternative to traditional Kubernetes distributions, especially for edge computing, IoT devices, and development environments. Its simplified installation process, reduced resource requirements, and full Kubernetes API compatibility make it an ideal choice for modern containerized applications.
By following this comprehensive guide, you should now have a solid understanding of how to install, configure, and manage k3s clusters. Remember to regularly backup your cluster data, keep your installation updated, and follow security best practices to maintain a robust and secure Kubernetes environment.
Whether you’re running a single-node development cluster or a multi-node production environment, k3s offers the flexibility and simplicity needed to get your containerized applications up and running quickly and efficiently.