k3s on EC2: First Kubernetes Deployment
Gerald went to a restaurant technology expo and came back saying “containers are the future.” He means shipping containers. But the point still stands. The expo had a booth about Kubernetes, and Gerald picked up a brochure. He now wants “orchestration” for the three-location website expansion. You asked him what orchestration means to him. He said, “Like an orchestra. But for websites.”
Docker Compose works well on a single server, but what happens when your container crashes at 3 AM? You have to notice, SSH in, and restart it manually. Kubernetes (often abbreviated k8s) solves this by acting as an orchestrator, a system that continuously monitors your containers, restarts them if they fail, and manages networking, scaling, and configuration. In this lab, you will install k3s, a lightweight Kubernetes distribution designed for resource-constrained environments, and deploy an nginx web server using Kubernetes primitives: Deployments, Services, and ConfigMaps.
Before You Start
You need:
- An AWS Academy Learner Lab environment
- An SSH client on your laptop
Why Kubernetes?
When you used Docker Compose, you were the orchestrator. You ran docker compose up, checked if containers were healthy, and restarted them when needed. Kubernetes replaces that manual work with a control loop: you tell it “I want 2 copies of nginx running at all times,” and Kubernetes continuously compares the desired state to the actual state, making corrections automatically. A container crashes? Kubernetes starts a new one. You push a new image? Kubernetes rolls out the update gracefully.
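The control-loop idea can be sketched in a few lines of shell. This is a toy illustration only (real Kubernetes controllers are Go programs talking to the API server; none of the names below come from k3s): it compares a desired replica count against an actual count and “starts” replacements until they match.

```shell
#!/bin/sh
# Toy reconciliation loop: desired state vs. actual state.
desired=2   # "I want 2 copies of nginx running at all times"
actual=1    # pretend one container just crashed

# Compare desired to actual and correct the drift, like a Deployment does.
while [ "$actual" -lt "$desired" ]; do
  echo "actual=$actual, desired=$desired -> starting a replacement"
  actual=$((actual + 1))
done
echo "reconciled: $actual/$desired running"
```

Real controllers run this comparison continuously, not once, which is why a deleted Pod reappears without any action on your part.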
k3s vs. full Kubernetes: Standard Kubernetes is complex and resource-intensive. k3s is a certified Kubernetes distribution that strips out cloud-provider-specific components and bundles everything into a single binary. It runs comfortably on a t3.small instance and is ideal for learning, edge computing, and single-node deployments.
Questions
Watch for the answers to these questions as you follow the tutorial.
- What k3s version is running on your node? Write down the node’s INTERNAL-IP address. (Use `kubectl get nodes -o wide`.) (4 points)
- Write down the names and IP addresses of your two nginx Pods. (Use `kubectl get pods -o wide`.) (4 points)
- After applying your ConfigMap with your custom `index.html`, what does `curl http://localhost:<nodeport>` return? Verify your name and today’s date appear. (4 points)
- After deleting one Pod, how many seconds did it take for Kubernetes to create a replacement? What does this demonstrate about Deployments? (5 points)
- How do the endpoint IPs shown by `kubectl describe service nginx-service` relate to the Pod IPs from question 2? (3 points)
- Get your TA’s initials showing your custom nginx page (with your name) accessible in a browser or via `curl`. (5 points)
Tutorial
Installing k3s
Launch an EC2 instance
In the AWS Console, launch an Ubuntu 24.04 instance. Use t3.small (2 vCPU, 2 GiB RAM); k3s runs on t3.micro but performs better with a bit more memory. Create a Security Group that allows:
- SSH (port 22) from Anywhere
- HTTP (port 80) from Anywhere
- Custom TCP (ports 30000-32767) from Anywhere (this is the NodePort range Kubernetes uses)
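The 30000-32767 range in the Security Group rule is not arbitrary: it matches Kubernetes’ default NodePort range, which the API server enforces when you create a Service. A small, purely illustrative shell check of that rule (the `port` value is just an example):

```shell
#!/bin/sh
# Check that a chosen nodePort falls inside Kubernetes' default
# NodePort range (30000-32767) -- the same range the Security
# Group rule above opens.
port=30080   # example value; this lab's Service uses 30080
if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
  result="valid NodePort: $port"
else
  result="out of range: $port"
fi
echo "$result"
```

If you pick a nodePort outside this range later, the API server will reject the Service manifest.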
Connect via SSH:
```
ssh -i ~/Downloads/cs312-key.pem ubuntu@<your-public-ip>
```
Install k3s
k3s installs with a single command:
```
curl -sfL https://get.k3s.io | sh -
```

This downloads the k3s binary, installs it as a systemd service, and starts it immediately. It also installs `kubectl`, the Kubernetes command-line tool.
Configure kubectl access
By default, k3s writes its configuration to `/etc/rancher/k3s/k3s.yaml`, which is only readable by root. To use `kubectl` without `sudo`:

```
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
```
Verify the cluster
```
kubectl get nodes -o wide
```

You should see one node with status “Ready.” Record the k3s version (in the VERSION column) and the INTERNAL-IP for your lab questions.
Understanding Kubernetes Primitives
Before writing manifests, let’s understand the building blocks:
- Pod: The smallest unit in Kubernetes. A Pod runs one or more containers. You rarely create Pods directly.
- Deployment: Manages a set of identical Pods. You tell it “run 2 replicas of nginx,” and it ensures exactly 2 Pods are always running. If one dies, the Deployment creates a replacement.
- Service: Provides a stable network endpoint for a set of Pods. Pods get random IP addresses that change when they restart, but a Service gives you a fixed way to reach them.
- ConfigMap: Stores configuration data (like files or environment variables) separately from the container image. This lets you change configuration without rebuilding the image.
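To make the first bullet concrete, here is what a bare Pod manifest would look like if you did create one directly. You will not apply this file in the lab (the Deployment’s template generates Pods for you), and the name is hypothetical; it only shows that a Pod is a named wrapper around one or more containers:

```yaml
# Illustrative only -- in this lab the Deployment creates Pods for you.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-standalone   # hypothetical name, not used elsewhere in the lab
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      ports:
        - containerPort: 80
```

A Pod created this way would not be replaced if it died, which is exactly why the lab uses a Deployment instead.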
Deploying nginx
Create a project directory
```
mkdir ~/k8s-lab && cd ~/k8s-lab
```
Write the Deployment manifest
```
vim nginx-deployment.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
```

Let’s break this down:
- `replicas: 2` tells Kubernetes to keep 2 copies of this Pod running at all times.
- `selector.matchLabels` tells the Deployment which Pods it manages (those with the `app: nginx` label).
- `template` describes the Pod itself: one container running the `nginx:1.27` image, listening on port 80.
- We pin the image tag to `1.27` instead of using `latest`, because `latest` is mutable and makes deployments unpredictable.
Write the Service manifest
A Service of type NodePort exposes the application on a static port on the node’s IP, making it accessible from outside the cluster.
```
vim nginx-service.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```

- The `selector` (`app: nginx`) routes traffic to all Pods with the `app: nginx` label.
- `nodePort: 30080` means you can access nginx at `http://<node-ip>:30080`. NodePorts must be in the range 30000-32767.
Apply both manifests
```
kubectl apply -f nginx-deployment.yaml -f nginx-service.yaml
```

The `apply` command sends the manifests to the Kubernetes API server, which stores the desired state and begins working to achieve it.
Verify the Deployment
```
kubectl get pods -o wide
```

You should see 2 Pods with status “Running.” Note the pod names, the node they are running on, and their IP addresses.
Test the Service
```
curl http://localhost:30080
```

You should see the default nginx welcome page (“Welcome to nginx!”). You can also access it from your laptop browser at `http://<your-ec2-public-ip>:30080` (if your Security Group allows port 30080).
Adding a ConfigMap
A ConfigMap lets you inject configuration into your containers without modifying the image. You will create a custom `index.html` and mount it into the nginx containers.
Create the ConfigMap
```
vim nginx-configmap.yaml
```

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-html
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <head><title>CS 312 Lab</title></head>
      <body>
        <h1>Hello from Kubernetes!</h1>
        <p>Name: YOUR_NAME_HERE</p>
        <p>Date: TODAY_DATE_HERE</p>
      </body>
    </html>
```

Replace `YOUR_NAME_HERE` and `TODAY_DATE_HERE` with your actual name and today’s date.
Update the Deployment to mount the ConfigMap
Edit `nginx-deployment.yaml` to add a volume and volume mount:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
          volumeMounts:
            - name: html-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: html-volume
          configMap:
            name: nginx-html
```

The `volumeMounts` section mounts the ConfigMap into the container at `/usr/share/nginx/html`, which is where nginx looks for files to serve. The `volumes` section declares that the volume comes from the `nginx-html` ConfigMap.
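One side effect of mounting a ConfigMap over `/usr/share/nginx/html` is that the mount replaces the directory’s original contents with the ConfigMap’s keys. That is fine here, because `index.html` is the only file you need. If you ever wanted to overlay a single file while keeping the rest of a directory, a `subPath` mount is the usual pattern; a sketch of just the container fragment, not needed for this lab:

```yaml
# Sketch: mount only one key from the ConfigMap as a single file,
# leaving the rest of /usr/share/nginx/html untouched.
          volumeMounts:
            - name: html-volume
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
```

For this lab, the whole-directory mount is simpler and works because nginx serves only the one page.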
Apply the ConfigMap and updated Deployment
```
kubectl apply -f nginx-configmap.yaml -f nginx-deployment.yaml
```

Kubernetes will perform a rolling update, replacing the old Pods with new ones that have the ConfigMap mounted.
Verify the custom page
```
curl http://localhost:30080
```

You should see your custom HTML with your name and today’s date. Record this output for your lab questions.
Testing Self-Healing
One of Kubernetes’ most important features is that it automatically replaces failed Pods to maintain the desired number of replicas.
List the running Pods
```
kubectl get pods
```

Note the names and AGE of both Pods.
Delete one Pod
```
kubectl delete pod <pod-name>
```

Replace `<pod-name>` with one of the actual Pod names from the list.
Watch the replacement
Immediately run:
```
kubectl get pods
```

You should see one Pod still running (the one you did not delete) and a new Pod being created (with a different name and a very recent AGE, like “3s”). The Deployment’s control loop detected that the actual state (1 Pod) did not match the desired state (2 Pods) and created a replacement.
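One of the lab questions asks how many seconds the replacement took. A simple way to measure any action from the shell is to bracket it with `date +%s` timestamps. This sketch times a placeholder `sleep` where the delete-and-recheck would go (illustrative only; nothing here talks to the cluster):

```shell
#!/bin/sh
# Measure elapsed wall-clock seconds around an action.
start=$(date +%s)
sleep 2    # stand-in for: kubectl delete pod <pod-name>, then rechecking kubectl get pods
end=$(date +%s)
elapsed=$((end - start))
echo "replacement took ${elapsed}s"
```

In practice you can also just read the new Pod’s AGE column, which reports how long ago it was created.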
Describe the Service endpoints
```
kubectl describe service nginx-service
```

Find the “Endpoints” line. This shows the IP addresses of all Pods that the Service routes traffic to. Compare these IPs to the Pod IPs from `kubectl get pods -o wide`; they should match. When a Pod is replaced, its endpoint is automatically updated.
Clean Up
If you plan to continue with Labs 8 and 9, leave k3s and the nginx deployment running. Otherwise:

```
kubectl delete -f nginx-deployment.yaml -f nginx-service.yaml -f nginx-configmap.yaml
```

You now understand the core Kubernetes primitives: Deployments manage replicated sets of Pods, Services provide stable networking, and ConfigMaps separate configuration from images. Most importantly, you experienced the self-healing control loop: Kubernetes automatically corrects drift between desired and actual state. In the next lab, you will add health checks, resource controls, and practice failure drills.