How to Deploy a Kubernetes Cluster Using GitHub Actions on InMotion Cloud
Step-by-step guide to provisioning a Magnum Kubernetes cluster on InMotion Cloud using a pre-built GitHub Actions repository, with automatic HTTPS and WordPress or Drupal deployment.

Introduction
Deploying Kubernetes on a cloud platform typically requires command-line expertise, infrastructure-as-code tooling, and deep knowledge of networking and container orchestration. This guide uses a pre-built GitHub repository that automates the entire workflow using GitHub Actions — from cluster creation to application deployment.
This guide walks you through forking the repository, configuring your credentials, provisioning a full Kubernetes cluster on OpenStack Magnum (the container orchestration engine built into InMotion Cloud) with automatic HTTPS via Traefik (a reverse proxy and ingress controller), and deploying your first WordPress or Drupal site. It also documents the working autoscaling proof of concept used internally by InMotion Cloud teams. All operations run through the GitHub Actions UI — no SSH or local tool installation required.
Before getting started: This workflow requires specific configuration on your InMotion Cloud project by our cloud administration team. If you are interested in deploying a GitHub Actions managed Kubernetes cluster, please submit a ticket in your account dashboard or contact us at support@inmotionhosting.com so our team can prepare your environment and provide the required credentials. Once your project is configured, follow the steps below.
Prerequisites
Before you begin, confirm you have the following:
- A GitHub account (free tier works)
- An InMotion Cloud account with an active project that has been configured for Kubernetes by InMotion Cloud support
- Credentials provided by your InMotion Cloud administrator (see the credential list below)
- A DNS zone you control where you can create wildcard records
- An email address for Let's Encrypt certificate registration
Your cloud administrator will provide the following credentials and values. If you have not received them yet, contact InMotion Cloud support and request the Kubernetes demo setup package.
Secrets (sensitive values your admin generates):
- `OS_APPLICATION_CREDENTIAL_ID` — OpenStack application credential ID
- `OS_APPLICATION_CREDENTIAL_SECRET` — OpenStack application credential secret
- `TF_STATE_S3_ACCESS_KEY_ID` — S3 API access key for Terraform state storage
- `TF_STATE_S3_SECRET_ACCESS_KEY` — S3 API secret key for Terraform state storage
Variables (non-sensitive configuration your admin provides):
- `OS_AUTH_URL` — OpenStack Keystone authentication URL
- `OS_REGION_NAME` — Region name
- `OS_INTERFACE` — Endpoint interface (typically `public`)
- `OS_IDENTITY_API_VERSION` — API version (typically `3`)
- `OS_AUTH_TYPE` — Authentication type (typically `v3applicationcredential`)
- `TF_STATE_S3_BUCKET` — Object storage container for Terraform state
- `TF_STATE_S3_ENDPOINT` — S3 API endpoint URL
- `CLUSTER_TEMPLATE_NAME` — Magnum Kubernetes cluster template name
Step 1: Fork the Repository
- Open the InMotion Cloud Kubernetes demo repository on GitHub: inmotioncloud/k8s-public-demo
- Click the Fork button in the top-right corner
- Select your GitHub account or organization as the destination
- Wait for the fork to complete
After forking, navigate to your copy of the repository and confirm you can see the Actions tab in the top navigation bar. If Actions is disabled, go to Settings > Actions > General and select Allow all actions and reusable workflows.
Step 2: Configure GitHub Secrets
GitHub Secrets store sensitive values that are masked in workflow logs. You need to add the credentials your administrator provided.
- In your forked repository, navigate to Settings > Secrets and variables > Actions
- Click the Secrets tab
- Click New repository secret for each of the following:
| Secret Name | Value |
|---|---|
| `OS_APPLICATION_CREDENTIAL_ID` | Application credential ID from your admin |
| `OS_APPLICATION_CREDENTIAL_SECRET` | Application credential secret from your admin |
| `TF_STATE_S3_ACCESS_KEY_ID` | S3 access key from your admin |
| `TF_STATE_S3_SECRET_ACCESS_KEY` | S3 secret key from your admin |
| `DEMO_DB_PASSWORD` | A strong password you choose (for site databases) |
| `DEMO_DB_ROOT_PASSWORD` | A strong password you choose (for database root access) |
When pasting values, copy each one exactly, with no leading or trailing spaces and no newlines. Hidden whitespace is one of the most common causes of authentication failures.
Important: DEMO_DB_PASSWORD and DEMO_DB_ROOT_PASSWORD are passwords you create yourself. Use strong, unique values with at least 16 characters. Do not leave these empty — blank values will cause database deployment failures.
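If you want a quick way to generate values that satisfy this requirement, a one-liner like the following works on most Linux and macOS systems (the variable names simply mirror the secret names; paste the printed values into GitHub):

```shell
# Generate two independent 24-character random passwords
# (base64 encoding of 18 random bytes each).
DEMO_DB_PASSWORD="$(openssl rand -base64 18)"
DEMO_DB_ROOT_PASSWORD="$(openssl rand -base64 18)"

# Print them so you can paste each into its GitHub Secret.
printf '%s\n%s\n' "$DEMO_DB_PASSWORD" "$DEMO_DB_ROOT_PASSWORD"
```

Because the output is base64, the values contain only characters that paste cleanly into the GitHub Secrets form.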
If your administrator indicated that Magnum trust authentication is required, also add:
| Secret Name | Value |
|---|---|
| `TERRAFORM_OPENSTACK_USERNAME` | Service account username from your admin |
| `TERRAFORM_OPENSTACK_PASSWORD` | Service account password from your admin |
Step 3: Configure GitHub Variables
GitHub Variables store non-sensitive configuration values that appear in workflow logs.
- In the same Settings > Secrets and variables > Actions page, click the Variables tab
- Click New repository variable for each of the following:
| Variable Name | Value |
|---|---|
| `OS_AUTH_URL` | Keystone URL from your admin |
| `OS_REGION_NAME` | Region name from your admin |
| `OS_INTERFACE` | Endpoint interface from your admin (usually `public`) |
| `OS_IDENTITY_API_VERSION` | API version from your admin (usually `3`) |
| `OS_AUTH_TYPE` | Auth type from your admin (usually `v3applicationcredential`) |
| `LETSENCRYPT_EMAIL` | Your email address (for certificate notifications) |
| `TF_STATE_S3_BUCKET` | Bucket name from your admin |
| `TF_STATE_S3_ENDPOINT` | S3 endpoint URL from your admin |
| `CLUSTER_TEMPLATE_NAME` | Template name from your admin |
| `DEMO_DOMAIN_BASE` | Your DNS domain (e.g., `k8sdemo.yourdomain.com`) |
Optional variables (set only if your admin instructs you to):
| Variable Name | When to Set |
|---|---|
| `CLUSTER_NAME` | If using a custom cluster name (default: `vpc-demo-cluster`) |
| `TERRAFORM_OPENSTACK_PROJECT_ID` | If using Magnum trust authentication |
| `EXISTING_ROUTER_ID` | If your admin pre-created a router |
| `EXISTING_NETWORK_ID` | If your admin pre-created a network |
| `TRAEFIK_SERVICE_TYPE` | Set to `NodePort` only if your admin instructs you to |
Do not set `OS_PROJECT_ID` as a variable when using application credentials; setting it is a common cause of authentication failures.
Step 4: Validate Your Configuration
Before provisioning infrastructure, run the validation workflow to catch configuration errors.
- Navigate to the Actions tab in your repository
- In the left sidebar, click 01 - Validate configuration
- Click Run workflow
- Set scenario to `provision` (validates all credentials needed for cluster creation)
- Leave Attempt a real OpenStack token checked
- Click Run workflow
The workflow checks that all required secrets and variables are present, validates formatting, and attempts to authenticate with OpenStack.
If validation fails: Open the workflow run and read the error messages. Common issues include:
- Missing secrets or variables — add the missing values and re-run
- Whitespace in UUID values — re-paste the value without trailing spaces
- Authentication failure — verify the credential values match what your admin provided
Re-run the validation after fixing any reported issues. Do not proceed to cluster provisioning until validation passes.
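Whitespace and format problems are easiest to catch before you paste. As an optional local sanity check, you can confirm that a copied credential ID looks like a bare 32-character hex string or a dash-separated UUID, with nothing extra around it (the value below is a made-up placeholder, not a real credential):

```shell
# Placeholder value for illustration; substitute the ID your admin sent.
CRED_ID='3f2a9c1e8b7d4e6fa0c5d2b19e847ffe'

# The ^ and $ anchors reject any leading/trailing whitespace or newlines,
# so a dirty paste fails the check.
if printf '%s' "$CRED_ID" | grep -Eq '^([0-9a-f]{32}|[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12})$'; then
  CRED_CHECK='format OK'
else
  CRED_CHECK='re-paste the value: unexpected characters or whitespace'
fi
echo "$CRED_CHECK"
```

The same pattern works for any of the UUID-style values your administrator provides.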
Step 5: Provision the Kubernetes Cluster
This step creates your cloud infrastructure: a virtual network, a Kubernetes cluster with worker nodes, and the Traefik ingress controller with automatic HTTPS certificates.
- Navigate to the Actions tab
- Click 02 - Provision Cluster
- Click Run workflow (no inputs needed)
- Click Run workflow to confirm
This workflow takes 1-3 hours to complete. It creates OpenStack networking resources, provisions a Magnum Kubernetes cluster, waits for the cluster to become ready, and installs the Traefik ingress controller with Let's Encrypt certificates.
You can monitor progress by clicking on the running workflow and expanding the provision job. Key steps to watch:
- Terraform apply — Creates the network, subnet, router, and cluster
- Wait for Magnum cluster — Polls until the cluster reaches `CREATE_COMPLETE`
- Wait for Kubernetes nodes Ready — Confirms worker nodes are operational
- Install Traefik — Deploys the ingress controller and waits for its load balancer
When the workflow completes successfully, open the Show Traefik service step in the log output. You need the EXTERNAL-IP value from this step for DNS configuration.
Step 6: Configure DNS
After provisioning, you need to point your domain at the cluster's load balancer.
- From the completed provision workflow, note the Traefik `EXTERNAL-IP` (found in the Show Traefik service log step)
- Log in to your DNS provider
- Create a wildcard A record:
| Record | Type | Value |
|---|---|---|
| `*.<your DEMO_DOMAIN_BASE>` | A | The Traefik `EXTERNAL-IP` |
For example, if DEMO_DOMAIN_BASE is k8sdemo.yourdomain.com and the Traefik IP is 203.0.113.50:
| Record | Type | Value |
|---|---|---|
| `*.k8sdemo.yourdomain.com` | A | `203.0.113.50` |
This single wildcard record covers all subdomains that the demo creates, including WordPress sites, Drupal sites, and optional dashboards.
DNS propagation typically takes a few minutes to a few hours depending on your provider and TTL settings. You can verify propagation by running:
```shell
nslookup test.k8sdemo.yourdomain.com
```
The response should resolve to your Traefik load balancer IP.
Step 7: Deploy a WordPress or Drupal Site
With the cluster running and DNS configured, deploy your first application.
- Navigate to Actions > 03 - Deploy Single Site
- Click Run workflow
- Fill in the inputs:
- subdomain: A name for this site (e.g., `demo`). This becomes the first part of the URL. Use lowercase letters, numbers, and hyphens only.
- app_type: Choose `wordpress` or `drupal`
- admin_username: The admin login name (default: `admin`)
- admin_password: Choose a password for the application admin account
- Click Run workflow
The workflow installs a MariaDB database, deploys the application, configures the Traefik ingress route with a Let's Encrypt certificate, and bootstraps the admin account.
When the workflow completes, your site is accessible at:
- WordPress: `https://<subdomain>.wordpress.<DEMO_DOMAIN_BASE>/`
- Drupal: `https://<subdomain>.drupal.<DEMO_DOMAIN_BASE>/`
For example, with subdomain demo and domain base k8sdemo.yourdomain.com:
- WordPress: `https://demo.wordpress.k8sdemo.yourdomain.com/`
- Drupal: `https://demo.drupal.k8sdemo.yourdomain.com/`
Log in to the admin panel using the credentials you specified in the workflow:
- WordPress admin: `https://demo.wordpress.k8sdemo.yourdomain.com/wp-admin/`
- Drupal admin: `https://demo.drupal.k8sdemo.yourdomain.com/user/login`
You can deploy additional sites by running the workflow again with different subdomain values. Each site is isolated in its own Kubernetes namespace.
Step 8: Run the Autoscaling Proof of Concept
This repository includes a tested autoscaling flow that demonstrates how InMotion Cloud scales Kubernetes worker capacity up and down based on demand. In this proof of concept, scaling is achieved at the cluster node level while traffic is generated by creating multiple application workloads.
Component interaction model
The autoscaling workflow depends on five components working together:
- GitHub Actions workflows orchestrate provisioning, burst-load generation, and teardown.
- Terraform + OpenStack Magnum define and manage cluster infrastructure and node groups.
- Kubernetes cluster-autoscaler watches unschedulable pods and requests additional worker nodes from Magnum.
- Application workloads (WordPress/Drupal namespaces) generate real CPU/memory scheduling pressure.
- Traefik ingress + DNS continue serving traffic as new nodes join and existing nodes drain.
When workload demand exceeds current worker capacity, pods become pending, the autoscaler requests additional nodes, and Magnum adds workers. When demand drops and nodes are underutilized for the configured cooldown window, workloads are rescheduled and surplus workers are removed.
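To make "scheduling pressure" concrete: a Deployment along these lines (the names, image, and request sizes are illustrative, not taken from the repository) reserves enough CPU and memory per replica that several replicas cannot fit on the existing workers, which is exactly what pushes pods into Pending and triggers the autoscaler:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: burst-load          # illustrative name
  namespace: demo-burst     # illustrative namespace
spec:
  replicas: 10              # enough replicas to exceed current worker capacity
  selector:
    matchLabels:
      app: burst-load
  template:
    metadata:
      labels:
        app: burst-load
    spec:
      containers:
        - name: web
          image: nginx:stable
          resources:
            requests:
              cpu: "500m"       # each pod reserves half a CPU core
              memory: "256Mi"   # and 256 MiB of RAM
```

Note that the autoscaler reacts to *requested* resources, not actual usage: ten pods requesting 500m each need five CPU cores of schedulable capacity regardless of how busy the containers really are.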
How auto-scaling works on InMotion Cloud
The auto-scaling mechanism operates through a chain of Kubernetes and OpenStack components. Understanding this chain is useful when tuning behavior, debugging scaling events, or explaining the architecture to stakeholders.
Scale-up sequence:
- You deploy workloads (WordPress, Drupal, or custom applications) that request CPU and memory resources
- The Kubernetes scheduler attempts to place pods on existing worker nodes
- When no node has sufficient available resources, pods enter the `Pending` state
- The cluster-autoscaler (deployed in the `kube-system` namespace) polls for unschedulable pods every 10 seconds
- The autoscaler calculates how many additional nodes are needed to satisfy pending resource requests
- It calls the OpenStack Magnum API to resize the cluster's node group
- Magnum provisions new worker instances through Nova (OpenStack compute) using the cluster template's flavor and image
- Each new instance boots, installs the kubelet, and registers with the Kubernetes control plane
- Once the new node reaches `Ready` status, the scheduler places pending pods on it
- Traefik automatically routes traffic to the newly scheduled application pods
Scale-down sequence:
- Workloads are removed (sites destroyed or replicas reduced), freeing CPU and memory on worker nodes
- The cluster-autoscaler identifies nodes where resource utilization falls below the configured threshold (default: 50%)
- The autoscaler waits for the scale-down delay (configurable, default: 10 minutes) to prevent flapping
- After the cooldown expires, the autoscaler cordons the target node (prevents new pod scheduling)
- It drains the node, gracefully evicting remaining pods so they reschedule onto other workers
- The autoscaler calls Magnum to remove the node from the cluster
- Magnum deletes the underlying Nova instance, releasing compute resources back to the project quota
- The cluster stabilizes at a lower node count that still satisfies current workload demand
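If you run your own workloads alongside the demo sites, the drain step is where a PodDisruptionBudget earns its keep: it caps how many replicas eviction may take down at once, so a site stays reachable while its node is being removed. A minimal sketch (the name and label are placeholders; match them to your own Deployment):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: wordpress-pdb        # illustrative name
spec:
  minAvailable: 1            # keep at least one replica serving during node drains
  selector:
    matchLabels:
      app: wordpress         # placeholder label; match your Deployment's pod labels
```

Without a budget, a drain may evict all replicas of a small Deployment simultaneously, causing a brief outage even though the pods reschedule within seconds.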
Key configurable parameters:
The Scaling - Tune cluster autoscaler workflow in the repository lets you adjust these values through the GitHub Actions UI — no SSH or kubectl access required:
| Parameter | Default | Purpose |
|---|---|---|
| `--scale-down-delay-after-add` | 10m | Wait time after a scale-up event before evaluating scale-down |
| `--scale-down-unneeded-time` | 10m | How long a node must be underutilized before removal |
| `--scale-down-utilization-threshold` | 0.5 | Node utilization below this ratio triggers scale-down evaluation |
| `--max-node-provision-time` | 15m | Maximum time to wait for a new node to become ready |
| `--skip-nodes-with-system-pods` | true | Protect nodes running kube-system pods from scale-down |
For customer demonstrations, shorter delays (2-3 minutes) make the scaling behavior visible in real time. For production workloads, longer delays (10-15 minutes) reduce unnecessary churn and control infrastructure costs.
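Inside the cluster, these parameters surface as command-line flags on the cluster-autoscaler container. The following is a trimmed sketch of the relevant part of the Deployment spec with demo-friendly timings (surrounding fields are elided, and the timing values are examples rather than the repository's defaults):

```yaml
# Fragment of the cluster-autoscaler Deployment in kube-system (illustrative values).
containers:
  - name: cluster-autoscaler
    command:
      - ./cluster-autoscaler
      - --cloud-provider=magnum                # talk to OpenStack Magnum for node groups
      - --scale-down-delay-after-add=3m        # demo timing; use 10m+ in production
      - --scale-down-unneeded-time=3m          # demo timing; use 10m+ in production
      - --scale-down-utilization-threshold=0.5
      - --max-node-provision-time=15m
      - --skip-nodes-with-system-pods=true
```

The tuning workflow edits these flags for you, so this fragment is only useful for understanding what the workflow changes under the hood.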
Execute scale-up
- Navigate to Actions > Scaling - Burst Up
- Run the workflow with default values first
- Monitor the workflow logs for workload creation and pending pods
- Confirm node growth in the Kubernetes cluster as autoscaler events trigger
Expected behavior:
- Initial worker count is stable during idle state
- Pending pods appear during burst deployment
- Worker node count increases until pods are schedulable
- Application endpoints remain reachable through Traefik
Execute scale-down
- Remove burst workloads by running Actions > Site - Destroy All (or remove selected demo sites)
- Wait for workload pressure to drop and pods to terminate
- Allow autoscaler cooldown timers to expire
- Confirm the worker node count returns toward the baseline configuration
Expected behavior:
- No unschedulable pods remain
- Underutilized worker nodes are selected for scale-down
- Nodes drain cleanly before deletion
- Cluster returns to a lower steady-state capacity
Verify Successful Deployment
After deploying a site, confirm everything is working:
- Access the site URL in your browser. You should see the default WordPress or Drupal page served over HTTPS with a valid Let's Encrypt certificate
- Check the certificate: Click the lock icon in your browser's address bar. The certificate should be issued by "Let's Encrypt" (not a staging or self-signed certificate)
- Log in to the admin panel using the credentials from the deploy workflow. Confirm you can access the dashboard and create content
- Deploy a second site with a different subdomain to verify multi-site support
If any step fails, check the workflow run logs for error messages and refer to the troubleshooting section below.
Verify Autoscaling from Horizon First, Then CLI
Use this sequence when validating autoscaling behavior for customer conversations or internal demonstrations.
Horizon validation (first)
- Navigate to Project > Container Infrastructure > Clusters
- Open your cluster and confirm node count changes after running Scaling - Burst Up
- Navigate to Project > Compute > Instances
- Verify additional worker instances appear during scale-up and are later removed after scale-down
CLI validation (second)
After Horizon confirms behavior, validate from CLI for precise operational checks:
```shell
kubectl get nodes -o wide
kubectl get pods -A --field-selector=status.phase=Pending
kubectl describe deployment cluster-autoscaler -n kube-system
```
Use these checks to confirm that:
- New worker nodes join with `Ready` status during burst traffic
- Pending workloads clear as capacity increases
- Node count contracts after workloads are removed and cooldown passes
Optional: Explore Additional Workflows
The repository includes several additional workflows for managing your cluster:
| Workflow | Purpose |
|---|---|
| **Scaling - Burst Up** | Deploy many Drupal sites simultaneously to demonstrate cluster autoscaling. The cluster automatically adds worker nodes when demand exceeds capacity. |
| **Scaling - Tune cluster autoscaler** | Adjust how quickly the cluster adds and removes nodes. Useful for demo timing. |
| **Dashboards - Toggle Traefik** | Enable or disable the Traefik web dashboard (shows routing and certificate status). Unauthenticated — disable after your demo. |
| **Dashboards - Deploy Headlamp** | Deploy the Headlamp Kubernetes dashboard for a visual view of cluster resources. |
| **Site - Destroy** | Remove a single deployed site by specifying its subdomain and application type. |
| **Site - Destroy All** | Remove all deployed sites at once (requires confirmation). |
For autoscaling demonstrations, start with Scaling - Burst Up, then use Site - Destroy All to trigger scale-down. This sequence matches the validated proof-of-concept runbook used internally.
Optional: Access kubectl Locally
For advanced users who want direct Kubernetes access from their workstation:
- Install the OpenStack CLI and kubectl
- Configure your OpenStack credentials (application credential environment variables)
- Fetch the cluster kubeconfig:
```shell
mkdir -p ~/.kube
openstack coe cluster config vpc-demo-cluster --dir ~/.kube
export KUBECONFIG="$HOME/.kube/config"
```
- Verify connectivity:
```shell
kubectl get nodes
```
Replace vpc-demo-cluster with your CLUSTER_NAME if you set a custom value.
Clean Up Resources
When you are finished with the demo, remove cloud resources to stop incurring charges.
Remove individual sites:
- Navigate to Actions > 04 - Site Destroy
- Enter the subdomain and application type of the site to remove
- Run the workflow
Remove all sites at once:
- Navigate to Actions > 10 - Site Destroy All
- Change the confirmation dropdown from `cancel` to `delete-all-customer-sites-confirm`
- Run the workflow
Destroy the entire cluster and networking infrastructure:
- Navigate to Actions > 11 - Destroy Full Cluster
- Change the confirmation dropdown from `cancel` to `destroy-infra-confirm`
- Run the workflow
This removes the Kubernetes cluster, load balancer, network, subnet, and router from your InMotion Cloud project. The Terraform state file remains in object storage but the infrastructure it references is deleted.
Important: Cluster destruction is irreversible. All data stored in the cluster (site databases, uploaded content) is permanently deleted.
Troubleshooting Common Issues
Provision workflow fails at "Wait for Magnum cluster"
The Magnum cluster did not reach CREATE_COMPLETE within the timeout. Common causes:
- Quota exceeded: Your project does not have enough compute, network, or storage quota for the cluster. Contact your InMotion Cloud administrator to increase quotas.
- Image not found: The Magnum template references a Glance image that is not visible to your project. Ask your administrator to verify the template and image configuration.
- Network issue: The cluster nodes cannot reach required external endpoints. Verify that the external network and router are functioning.
After resolving the issue, run Destroy Full Cluster (if Terraform created partial resources) and then re-run Provision Cluster.
Traefik EXTERNAL-IP shows "Pending"
The load balancer has not been assigned an IP address. This can happen when:
- The cloud's load balancer service (Octavia) is under heavy load or misconfigured
- There are no available floating IPs in the external network
Workaround: Set the variable TRAEFIK_SERVICE_TYPE to NodePort and re-run the provision workflow. This bypasses the load balancer and uses worker node IPs directly. Your administrator can provide the node IPs for DNS configuration.
Let's Encrypt certificate errors
If your browser shows an untrusted certificate:
- Check if the variable `LETSENCRYPT_USE_STAGING` is set to `true`. Staging certificates are intentionally untrusted. Remove this variable and re-provision.
- Verify DNS is resolving correctly to the Traefik IP. Let's Encrypt HTTP-01 validation requires that port 80 on the Traefik IP is reachable from the internet and responds correctly.
- Check the Traefik logs in the provision workflow output for ACME-related errors.
Site deploy fails with "password option is not specified"
The DEMO_DB_PASSWORD or DEMO_DB_ROOT_PASSWORD secret is empty or missing. These secrets must contain non-empty values. Add or update them in Settings > Secrets and variables > Actions > Secrets.
Workflow fails with "application_credential is not allowed for managing trusts"
Your cloud environment requires password-based authentication for Kubernetes cluster provisioning. Contact your InMotion Cloud administrator to request:
- `TERRAFORM_OPENSTACK_USERNAME` (add as GitHub Secret)
- `TERRAFORM_OPENSTACK_PASSWORD` (add as GitHub Secret)
- `TERRAFORM_OPENSTACK_PROJECT_ID` (add as GitHub Variable)
The provision workflow automatically switches to password authentication when these values are present.
Frequently Asked Questions
Do I need to install any software on my computer?
No. All cluster provisioning and site deployment runs inside GitHub Actions. You only need a web browser to configure your repository settings and manage DNS records. If you want optional direct access to the cluster via kubectl, you can install the OpenStack CLI and kubectl locally, but this is not required.
Can I use this cluster for production workloads?
The architecture deployed by this repository — Kubernetes on OpenStack Magnum with Traefik ingress and Let's Encrypt TLS — is production-capable. However, the default configuration is tuned for demonstration purposes. Before running production workloads, you should review and adjust security controls, high-availability settings, backup procedures, and monitoring to match your organization's requirements.
Can I deploy applications other than WordPress and Drupal?
Yes. The included workflows support WordPress and Drupal out of the box, but the cluster itself is a standard Kubernetes environment. You can deploy any containerized application using your own Helm charts or Kubernetes manifests.
How do I add more worker nodes?
The default cluster starts with two worker nodes. If autoscaling is enabled, the cluster automatically adds nodes when resource demand exceeds current capacity (up to a configurable maximum). You can also adjust the base node count by re-running the provision workflow after updating the Terraform configuration.
Can I autoscale a single site based on traffic instead of deploying multiple sites?
The built-in scaling demo works at the cluster level by deploying many sites to trigger node autoscaling. However, because this creates a standard Kubernetes cluster, you can configure Horizontal Pod Autoscaling (HPA) for an individual WordPress or Drupal site. HPA monitors CPU or memory usage and automatically adds or removes application replicas to handle traffic spikes. Traefik, which is already installed as the ingress controller, distributes traffic across all replicas automatically. If you are interested in this configuration, contact InMotion Cloud support and we can assist with the setup.
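For a sense of what that HPA object could look like, here is a minimal sketch targeting a WordPress Deployment (the names and namespace are placeholders, and a metrics source such as metrics-server must be running in the cluster for CPU-based scaling to work):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress-hpa          # illustrative name
  namespace: demo-wordpress    # placeholder; use the site's namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress            # placeholder; the site's Deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

HPA and the cluster-autoscaler compose naturally: if new replicas cannot be scheduled on existing workers, their Pending pods trigger node-level scale-up as described in Step 8.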
What happens to my data if I destroy the cluster?
Destroying the cluster permanently removes all Kubernetes resources, including databases and uploaded content. This action is irreversible. Back up any important data before running the destroy workflow. The Terraform state file in object storage is preserved, but the infrastructure it references will be deleted.
How do I point DNS at the cluster?
After the provision workflow completes, open the workflow run log and find the Show Traefik service step. It displays the EXTERNAL-IP of the Traefik load balancer. In your DNS provider, create a wildcard A record — for example, *.k8sdemo.yourdomain.com pointing to that IP address. This single record covers all sites and dashboards the demo creates. See Step 6: Configure DNS for detailed instructions.
What does this cost?
The GitHub repository and GitHub Actions usage are free. The InMotion Cloud resources — compute instances, storage, and load balancer — are billed according to your cloud plan. The default configuration creates a small cluster (one master node and two worker nodes). If you use the autoscaling demo, additional worker nodes are created temporarily and removed when no longer needed.
Engage InMotion Cloud Professional Services
If you want to move from demo-level autoscaling to a production-ready implementation, InMotion Cloud Professional Services can assist with architecture, implementation, and operational hardening.
Professional Services can help you:
- Design cluster autoscaling boundaries — Set minimum and maximum node counts, cooldown policies, and utilization thresholds that match your workload patterns
- Implement workload-level autoscaling — Configure Horizontal Pod Autoscaler (HPA) or KEDA for individual applications so they scale replicas based on CPU, memory, or custom metrics
- Tune node pools and infrastructure — Optimize storage classes, ingress routing, and observability for predictable scaling behavior under production load
- Build operational guardrails — Establish rollout, rollback, and cost-control policies so scaling events are safe and budget-aware
- Integrate monitoring and alerting — Connect scaling events to your existing monitoring stack so your team has visibility into when and why scaling occurs
Whether you need help adapting this proof of concept to your production requirements or want a fully managed Kubernetes autoscaling implementation, InMotion Cloud's team has the infrastructure expertise to deliver it.
To request implementation support, submit a support ticket in your account dashboard or email support@inmotionhosting.com and mention that you want assistance with Kubernetes autoscaling architecture on InMotion Cloud.
Internal Team Talking Points for Customer Discussions
Use these points when discussing autoscaling with customers:
- The documented workflow is based on a working proof of concept that scales up and down on InMotion Cloud infrastructure — this is not a theoretical design.
- Scale-up is triggered by real workload pressure (pending pods), not synthetic static claims. The Kubernetes cluster-autoscaler communicates directly with OpenStack Magnum to provision new worker nodes.
- Scale-down is achieved by removing workload pressure and allowing autoscaler cooldown/drain behavior to complete. Worker nodes are deleted automatically, releasing compute resources.
- All scaling parameters (cooldown timers, utilization thresholds, min/max node counts) are configurable through the GitHub Actions UI without requiring SSH or kubectl access.
- The approach supports both cluster-level scaling (add/remove worker nodes) and workload-level scaling (Horizontal Pod Autoscaler for individual applications). Both can run simultaneously.
- InMotion Cloud support and Professional Services can help adapt the pattern to customer production requirements, including custom scaling policies, monitoring integration, and cost guardrails.
- This article can be referenced as the baseline runbook for technical discovery calls and implementation planning.
Related Resources
For more information, see the official documentation: