
How to Deploy Applications with Docker and Kubernetes

Package applications with Docker, run them on a k3s Kubernetes cluster, and turn a single-VM container deployment into a self-healing, scalable platform on InMotion Cloud. Covers the full path from Dockerfile to kubectl, plus when to graduate to multi-node Kubernetes.

This article is Step 3 of a 3-part series on running applications on InMotion Cloud. The walkthrough below assumes you have already completed Step 1 (How to Create Cloud Infrastructure with Terraform) and Step 2 (How to Configure Servers with Ansible). The Docker and Kubernetes steps below build on the OpenStack VM that Step 1 provisioned and Step 2 configured — if you skip either, the commands here will not work against your environment. For the high-level tour of the whole workflow, see How Modern Applications Run on Cloud Infrastructure.

Running a container with docker run on a single server is a fine way to demo an application. It is a bad way to run one. When that server reboots, your workload comes back only if someone remembered to set a restart policy. When traffic doubles, you have no way to split load across copies. When the container crashes, it stays crashed until a human notices.

Kubernetes exists to solve those problems. It turns a set of containers and servers into a cluster that restarts failed workloads, distributes traffic across replicas, and rolls out new versions without downtime. This guide walks through the last two stages of the modern cloud workflow: packaging an application with Docker and running it on Kubernetes with k3s, a production-grade Kubernetes distribution that installs with a single command.

By the end, the same WordPress deployment that the previous article ran via docker run will be running on k3s — healing itself when pods fail, scaling on demand, and deployable with a single kubectl apply.

What Docker and Kubernetes Do

Docker packages applications. It takes your code, runtime, libraries, and configuration and bundles them into an image — a single artifact that runs the same on a laptop, a staging server, or a production cluster. The image is the unit of software.

Kubernetes runs those images at scale. You describe the desired state — "I want three copies of this image, exposed on port 80, with 2 GB of memory each" — and Kubernetes makes it happen. It schedules containers onto servers, restarts them when they crash, replaces them during upgrades, and spreads traffic across copies.

Together, Docker and Kubernetes turn an application running on a server into an application running on a platform. The platform keeps the application alive without human intervention.

k3s is a conformant Kubernetes distribution from SUSE, stripped down for small clusters and edge deployments. It ships as a single binary, uses a fraction of the memory of upstream Kubernetes, and installs with one command. Everything you learn running k3s transfers directly to full multi-node Kubernetes — the APIs and manifests are identical.

Why docker run on One Server Breaks Down

The previous article deployed WordPress and MariaDB as Docker containers on a single OpenStack VM using Ansible. That deployment works — until something goes wrong.

  • No self-healing. If the WordPress container crashes, nothing restarts it. Users see an error until an engineer logs in and runs docker start.
  • No scaling. A traffic spike floods the single container. There is no way to distribute load across copies without rewriting the deployment.
  • No zero-downtime updates. Releasing a new version means docker stop, docker rm, docker run — a visible window where the site is offline.
  • No declarative state. The running configuration lives in whatever shell commands got executed last. Reproducing it on another VM means re-reading the Ansible playbook and hoping nothing drifted.

Kubernetes addresses all four. The trade-off is one more tool and a handful of YAML files. For any service you want running unattended, that trade is worth it.

Prerequisites

Before starting, you should have:

  • An OpenStack VM provisioned with Terraform (see How to Create Cloud Infrastructure with Terraform) and configured with Docker using Ansible (see How to Configure Servers with Ansible). A single VM with 4 GB of memory, 2 vCPUs, and 20 GB of disk is sufficient for this walkthrough.
  • SSH access to the VM with sudo privileges and an assigned floating IP.
  • A Docker Hub account (free). Any OCI-compatible registry works — GitHub Container Registry, Harbor, or a self-hosted registry — but this guide uses Docker Hub.
  • The docker CLI installed on your local machine for building images. Ansible already installed Docker on the VM itself.

The WordPress and MariaDB containers from the Ansible deployment can stay running. You will replace them with k3s-managed copies as part of this walkthrough.

Step 1: Package the Application with a Dockerfile

Most applications do not need a custom image. WordPress has an official image on Docker Hub that we will use for the main deployment. But packaging your own code is a core Docker skill, so this step walks through the pattern using a small companion service — a landing page that could sit in front of WordPress.

On your local machine, create a project directory:

mkdir wordpress-landing
cd wordpress-landing

Create a file named index.html:

<!DOCTYPE html>
<html>
  <head><title>Coming soon</title></head>
  <body>
    <h1>Site under construction</h1>
    <p>Powered by InMotion Cloud.</p>
  </body>
</html>

Create a file named Dockerfile:

FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80

Three instructions: start from the official Nginx image, copy the landing page into place, and document the port the container listens on. That is a complete Dockerfile.

Build the image:

docker build -t YOURDOCKERHUBUSER/wordpress-landing:1.0 .

Replace YOURDOCKERHUBUSER with your Docker Hub username. The tag 1.0 is a version label — real deployments should version every build.
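One common convention is to keep a pinned version tag and also point a moving latest tag at the same build. A sketch with docker tag — both names refer to the same image, and the pinned 1.0 tag remains the one to reference in manifests:

```shell
# Add a second name for the image just built; no new layers are created.
docker tag YOURDOCKERHUBUSER/wordpress-landing:1.0 YOURDOCKERHUBUSER/wordpress-landing:latest
```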

Verify the image exists locally:

docker images | grep wordpress-landing

Step 2: Push the Image to a Registry

Kubernetes pulls images from a registry. It does not read them from your local machine. Log into Docker Hub:

docker login

Push the image:

docker push YOURDOCKERHUBUSER/wordpress-landing:1.0

The push streams the image layers to Docker Hub and prints a digest when complete. The image is now fetchable by any cluster that can reach Docker Hub. For private images, create a private repository and reference it with an imagePullSecret in Kubernetes.
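For reference, a pull secret for a private repository can be created with kubectl — a sketch, assuming the arbitrary secret name regcred and an access token in place of your password:

```shell
# Store registry credentials in the cluster under the (arbitrary) name "regcred".
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=YOURDOCKERHUBUSER \
  --docker-password=YOUR_ACCESS_TOKEN
```

Pods then reference it by adding imagePullSecrets with that name to their spec.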

Step 3: Install k3s on Your OpenStack VM

SSH into the VM. k3s provides a one-line installer that downloads the binary, installs it as a systemd service, and starts a single-node cluster:

curl -sfL https://get.k3s.io | sh -

The installer writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. Make it readable by your user and export it:

sudo chmod 644 /etc/rancher/k3s/k3s.yaml
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Verify the cluster is running:

kubectl get nodes

Expected output:

NAME               STATUS   ROLES                  AGE   VERSION
wordpress-host-1   Ready    control-plane,master   30s   v1.x.x+k3s1

One node with the control-plane role is the k3s default — a single VM that is both scheduler and worker. That is the simplest possible Kubernetes cluster.
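If you would rather run kubectl from your laptop instead of over SSH, you can copy the kubeconfig down and point it at the floating IP — a sketch, assuming kubectl is installed locally and your security group allows port 6443 from your machine:

```shell
# Fetch the kubeconfig from the VM, then rewrite its server address.
ssh ubuntu@<vm-floating-ip> sudo cat /etc/rancher/k3s/k3s.yaml > k3s.yaml
sed -i 's/127.0.0.1/<vm-floating-ip>/' k3s.yaml   # on macOS: sed -i '' ...
export KUBECONFIG=$PWD/k3s.yaml
kubectl get nodes
```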

Firewall note: If the previous Ansible playbook configured UFW on the VM, allow the NodePort range before deploying the application:

sudo ufw allow 30000:32767/tcp

Step 4: Deploy WordPress to k3s

Stop and remove the old docker run containers first so they do not keep serving traffic alongside the k3s-managed copies:

sudo docker stop wordpress mariadb
sudo docker rm wordpress mariadb

On your local machine, create a file named wordpress.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
stringData:
  password: changeme-pick-a-real-password
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels: { app: mariadb }
  template:
    metadata:
      labels: { app: mariadb }
    spec:
      containers:
        - name: mariadb
          image: mariadb
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom: { secretKeyRef: { name: mysql-pass, key: password } }
            - name: MYSQL_DATABASE
              value: wordpress
          ports:
            - containerPort: 3306
          volumeMounts:
            - { name: data, mountPath: /var/lib/mysql }
      volumes:
        - name: data
          persistentVolumeClaim: { claimName: mysql-pv-claim }
---
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector: { app: mariadb }
  ports:
    - { port: 3306 }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels: { app: wordpress }
  template:
    metadata:
      labels: { app: wordpress }
    spec:
      containers:
        - name: wordpress
          image: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: mariadb
            - name: WORDPRESS_DB_USER
              value: root
            - name: WORDPRESS_DB_NAME
              value: wordpress
            - name: WORDPRESS_DB_PASSWORD
              valueFrom: { secretKeyRef: { name: mysql-pass, key: password } }
          ports:
            - containerPort: 80
          volumeMounts:
            - { name: content, mountPath: /var/www/html }
      volumes:
        - name: content
          persistentVolumeClaim: { claimName: wp-pv-claim }
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort
  selector: { app: wordpress }
  ports:
    - { port: 80, nodePort: 30080 }

The manifest declares seven resources:

  • Secret stores the database password so it is not hard-coded in the YAML.
  • Two PersistentVolumeClaims request disk space for database files and WordPress uploads. k3s ships with a default storage class that provisions these automatically using the VM's local disk.
  • MariaDB Deployment and Service run the database and expose it at a stable cluster-internal DNS name (mariadb).
  • WordPress Deployment and Service run WordPress and expose it on port 30080 of the host VM.

Copy the manifest to the VM and apply it:

scp wordpress.yaml ubuntu@<vm-floating-ip>:~
ssh ubuntu@<vm-floating-ip>
kubectl apply -f wordpress.yaml
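kubectl apply prints one line per resource it creates. To list everything the manifest made in one view (an optional sanity check):

```shell
kubectl get secret,pvc,deploy,svc   # one status line per resource kind
```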

Watch the pods come up:

kubectl get pods -w

Within a minute the output should settle at:

NAME                        READY   STATUS    RESTARTS   AGE
mariadb-xxxxxxxxx-xxxxx     1/1     Running   0          45s
wordpress-xxxxxxxxx-xxxxx   1/1     Running   0          45s

Press Ctrl+C to stop watching. Open http://<vm-floating-ip>:30080 in a browser and WordPress appears — exactly as it did in the Ansible article, now served through the k3s Service.
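If you prefer the terminal, the same check can be made with curl from your local machine; a fresh WordPress install typically answers with a redirect to its setup page:

```shell
curl -I http://<vm-floating-ip>:30080   # any HTTP status line confirms the Service is routing
```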

The page is rendered by a pod that Kubernetes placed on the single-node cluster. From the user's perspective nothing has changed. What has changed is what happens when that pod fails or traffic grows, which is the subject of the next step.

Step 5: Watch k3s Heal and Scale

This is where Kubernetes earns its reputation.

Self-healing

Delete the running WordPress pod on purpose and watch Kubernetes replace it:

kubectl get pods
kubectl delete pod -l app=wordpress
kubectl get pods

The first command lists one WordPress pod with a random name suffix. The second deletes it. The third, run seconds later, shows a brand-new pod already starting — a different name suffix, status ContainerCreating or Running. The Deployment noticed the number of running pods had fallen below its desired replica count of one and created a replacement automatically. With docker run, a human would have had to restart the container manually.

Scaling

Increase the WordPress replica count to three:

kubectl get pods
kubectl scale deployment wordpress --replicas=3
kubectl get pods

Kubernetes creates two additional WordPress pods. The Service load-balances across all three — every request to port 30080 now lands on one of the running pods, with kube-proxy selecting a backend for each connection.
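You can see the Service tracking the replicas by listing its endpoints — one pod IP per ready copy:

```shell
kubectl get endpoints wordpress   # lists one <pod-ip>:80 entry per running replica
```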

Scale back down when done:

kubectl scale deployment wordpress --replicas=1

Note on WordPress and shared storage. WordPress stores uploads on local disk, and the default ReadWriteOnce volume used above cannot be attached to more than one pod at a time. The three-replica demo shows the scaling mechanism working, but production WordPress deployments across multiple replicas need a ReadWriteMany volume (NFS, for example) for /var/www/html. Stateless services scale without this complication.
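For reference, a multi-replica setup would swap the wp-pv-claim above for a ReadWriteMany claim — a sketch only, assuming an NFS-backed storage class named nfs-client has been installed in the cluster (k3s does not ship one):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
spec:
  accessModes: [ReadWriteMany]    # attachable to multiple pods at once
  storageClassName: nfs-client    # hypothetical NFS provisioner, not included in k3s
  resources:
    requests:
      storage: 5Gi
```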

Zero-downtime updates

Update the WordPress image to a newer tag:

kubectl set image deployment/wordpress wordpress=wordpress:<new-tag>
kubectl rollout status deployment/wordpress

Kubernetes launches a new pod with the updated image, waits for it to become ready, then terminates the old one. Traffic keeps flowing the entire time. The same mechanism applies to any image change — a custom application image, a security-patched base, or a sidecar update.
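If a rollout goes wrong, the same Deployment machinery can reverse it:

```shell
kubectl rollout history deployment/wordpress   # list previous revisions
kubectl rollout undo deployment/wordpress      # roll back to the prior revision
```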

When to Graduate from k3s to Full Kubernetes

k3s is a real Kubernetes cluster. Every manifest you wrote above — Deployments, Services, Secrets, PVCs — works identically on a multi-node upstream Kubernetes cluster. The only reason to graduate is resilience.

A single-node k3s cluster has the same weakness as the docker run deployment it replaced: if the underlying VM fails, the cluster fails with it. Production workloads need multiple nodes so that the loss of any single VM does not take down the application.

The standard path to production Kubernetes on OpenStack is kubeadm running on three or more Terraform-provisioned VMs. kubeadm bootstraps a multi-node control plane and worker pool that tolerates node failures. The manifests you wrote for k3s apply to that cluster without modification — moving from k3s to kubeadm is a cluster change, not an application change.

For teams just adopting containers, a single-node k3s cluster is enough to learn the platform, refine manifests, and run low-stakes internal services. Graduate to multi-node kubeadm once an application needs to survive the loss of a VM.

How This Fits the Bigger Picture

Packaging and orchestration are the last two stages of the modern cloud workflow. Terraform builds the infrastructure, Ansible configures it, Docker packages the application, and Kubernetes keeps it running. See How Modern Applications Run on Cloud Infrastructure for the full picture of how these tools connect.

The pattern that this walkthrough demonstrates — image, manifest, apply — is the same one every team running containers at scale uses. At larger scale the number of nodes grows, the manifests get organized into Helm charts, and a CI/CD pipeline drives kubectl apply on every commit. The fundamentals do not change.

Every piece of this deployment is defined in code: the VM in Terraform, the host configuration in Ansible, the image in a Dockerfile, and the running state in Kubernetes YAML. Store all four in a Git repository and your entire stack — from bare infrastructure to running application — becomes version-controlled, reviewable, and reproducible.

Where to Go from Here

When your team is ready to adopt this workflow on infrastructure with transparent, predictable pricing, contact the InMotion Cloud team.