
Booting Instances from Volumes in OpenStack

When you launch a standard OpenStack instance, the root disk is ephemeral: it lives on compute node local storage and disappears permanently when the instance is deleted. For most stateless workloads this is acceptable, but for production systems where root disk persistence matters, boot-from-volume changes the equation entirely.

A volume-backed instance uses a Cinder block storage volume as its root disk instead of ephemeral local storage. The root disk persists independently of the instance's lifecycle, giving you control over data retention that ephemeral storage simply cannot provide.

Ephemeral vs Volume-Backed Instances

Understanding the architectural difference between these two launch modes is essential before choosing one for a workload.

Ephemeral instances are simpler to launch and better suited to stateless compute tasks where the instance is treated as disposable. Volume-backed instances suit stateful workloads where the root filesystem must outlive the compute resource.

When to Use Boot-from-Volume

Choose a volume-backed instance when your workload requires any of the following:

Persistent root disk: If your instance runs a database engine, application server, or any service that writes important state to the root filesystem, you need the root disk to survive instance deletion. Volume-backed instances let you delete the compute resource while retaining the volume, then re-attach it or boot a new instance from it.

Larger root storage than your flavor allows: Flavor disk sizes in OpenStack are fixed. If you need a root disk larger than the flavor's default, boot from a volume sized to your requirement. A flavor with a 20GB disk limit is no longer a constraint when the root is a 200GB Cinder volume.

Faster instance rebuilds from a known state: A volume containing a pre-configured root filesystem can be booted directly without waiting for image deployment to ephemeral storage. For golden image workflows and fast recovery scenarios, this approach reduces rebuild time.

Root disk snapshots as a backup strategy: Cinder volumes support volume-level snapshots that can be triggered at any time, including while the instance is running (with caveats for filesystem consistency). This gives you a dedicated snapshot mechanism for root disk backups.

Portability: A volume-backed root disk can be detached from a failed instance and attached to a new one, enabling recovery workflows that are not possible with ephemeral storage.
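The snapshot-based backup strategy above can be sketched from the CLI. A minimal example, assuming VOLUME_ID is the root volume's ID (the --force flag is required because the volume is attached and in use):

```shell
# Snapshot an in-use root volume; --force is required while it is attached.
# For filesystem consistency, quiesce or freeze I/O inside the guest
# (e.g. with fsfreeze) before taking the snapshot.
openstack volume snapshot create --volume VOLUME_ID --force root-disk-snap-$(date +%Y%m%d)
```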

How to Create a Boot-from-Volume Instance in Horizon

Prerequisites

Before launching a volume-backed instance, confirm you have:

  • Active OpenStack Horizon dashboard access with Project Member permissions or higher
  • Sufficient volume quota in your project (check under Project > Compute > Overview)
  • An available image to use as the volume source
  • A target availability zone selected (volume and instance must share the same zone)
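The quota check can also be done from the CLI. A sketch, assuming your OpenStack credentials are already sourced into the shell:

```shell
# Show absolute project limits; the maxTotalVolumes, totalVolumesUsed and
# maxTotalVolumeGigabytes rows cover Cinder volume quota.
openstack limits show --absolute | grep -i -E 'volume|gigabytes'
```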

Launch Steps in Horizon Dashboard

  1. Log into the Horizon dashboard and navigate to Project > Compute > Instances
  2. Click Launch Instance to open the launch dialog
  3. On the Details tab, enter an instance name and select your target availability zone
  4. Click Next to proceed to the Source tab
  5. In the Select Boot Source dropdown, choose Image
  6. Set Create New Volume to Yes
  • A Volume Size (GB) field will appear; enter the desired root disk size (must be at least as large as the image's minimum disk requirement)
  • Set Delete Volume on Instance Delete to Yes or No depending on your persistence requirements (see the next section for guidance)
  7. From the image list, click the arrow next to your chosen image to move it to the Allocated column
  8. Click Next and complete the remaining tabs: Flavor, Networks, Security Groups, and Key Pair
  9. Review your selections and click Launch Instance

OpenStack creates the Cinder volume from the image, then boots the instance using that volume as the root disk. The volume appears in Project > Volumes > Volumes with the status "In-use" and the name of your instance associated with it.
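The same verification can be done from the CLI. A sketch, using the volume list in long format:

```shell
# List volumes with extended columns; confirm the new root volume shows
# status "in-use" and is attached to your instance.
openstack volume list --long
```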

How to Create a Boot-from-Volume Instance Using OpenStack CLI

  1. Identify the image ID you want to use as the volume source:
openstack image list

  2. Launch the instance with the --boot-from-volume flag, specifying the volume size in gigabytes:

openstack server create \
  --flavor m1.medium \
  --image IMAGE_ID \
  --boot-from-volume 50 \
  --network NETWORK_ID \
  --key-name MY_KEYPAIR \
  --security-group default \
  my-volume-backed-instance

Replace 50 with your desired root volume size in gigabytes, and substitute the actual values for IMAGE_ID, NETWORK_ID, and MY_KEYPAIR.

  3. To prevent the volume from being deleted when the instance is deleted, use the --block-device flag instead for full control over the deletion policy:

openstack server create \
  --flavor m1.medium \
  --block-device uuid=IMAGE_ID,source_type=image,destination_type=volume,volume_size=50,delete_on_termination=false,boot_index=0 \
  --network NETWORK_ID \
  --key-name MY_KEYPAIR \
  --security-group default \
  my-persistent-instance

  4. Verify the instance launched successfully and confirm the root volume is attached:

openstack server show my-volume-backed-instance

Look for the os-extended-volumes:volumes_attached field in the output, which lists the attached volume IDs.
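To extract just that field for use in later commands, a grep works across client versions (the exact column name varies slightly between releases):

```shell
# Print only the attached-volumes field; recent clients label the column
# "volumes_attached", older ones "os-extended-volumes:volumes_attached".
openstack server show my-volume-backed-instance | grep -i volumes_attached
```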

Understanding the Delete-on-Termination Flag

The Delete Volume on Instance Delete setting is the most consequential configuration choice when launching a volume-backed instance. It controls what happens to the root Cinder volume when the instance is deleted.

Delete on termination: Yes (default in most configurations)

When this flag is enabled, deleting the instance also deletes the root volume. The behavior mirrors ephemeral storage from a data retention perspective: the instance and its root disk are destroyed together. This is appropriate for disposable environments and development workloads where you do not need to retain the root filesystem.

Delete on termination: No

When this flag is disabled, deleting the instance leaves the root volume intact. The volume transitions from "In-use" to "Available" status and remains in your project. You can:

  • Attach it to another instance as a data volume
  • Boot a new instance directly from the volume
  • Take a snapshot of the volume before deleting it permanently
  • Inspect the filesystem before permanent deletion

This is the correct setting for production workloads, stateful applications, and any scenario where the root disk content has independent value from the compute resource running it.

Changing the flag after launch is not supported through Horizon. The delete-on-termination policy is set at instance creation and cannot be modified through the dashboard after the fact. If your cloud exposes compute API microversion 2.85 or later, you can change it with the OpenStack CLI:

openstack --os-compute-api-version 2.85 server volume update --preserve-on-termination INSTANCE_ID VOLUME_ID

Replace INSTANCE_ID with the instance ID and VOLUME_ID with the attached volume ID. The --preserve-on-termination flag disables delete-on-termination; --delete-on-termination re-enables it. Confirm the volume ID using openstack server show INSTANCE_ID.

Booting a New Instance from an Existing Volume

If you have an existing Cinder volume that contains a bootable root filesystem, you can boot a new instance directly from that volume without re-deploying from an image. This is useful for recovery scenarios and golden image workflows.

In Horizon Dashboard

  1. Navigate to Project > Volumes > Volumes
  2. Confirm the volume is in "Available" status and is marked as bootable
  3. From the Actions dropdown next to the target volume, select Launch as Instance
  4. If the Launch as Instance action is not available, the volume may not be marked as bootable; click Edit Volume and enable the Bootable checkbox
  5. Complete the instance launch dialog as described above, selecting the volume as the boot source

Using OpenStack CLI

  1. List your available volumes to find the target:
openstack volume list --status available

  2. Boot an instance directly from the existing volume:

openstack server create \
  --flavor m1.medium \
  --block-device uuid=VOLUME_ID,source_type=volume,destination_type=volume,delete_on_termination=false,boot_index=0 \
  --network NETWORK_ID \
  --key-name MY_KEYPAIR \
  my-recovered-instance

  3. Verify the instance is running:

openstack server show my-recovered-instance | grep status

Common Issues and Considerations

Volume quota exhaustion: Every boot-from-volume instance consumes Cinder volume quota in addition to compute quota. Monitor your project's volume count and storage capacity limits under Project > Compute > Overview before launching at scale.

Availability zone alignment: The Cinder volume and the compute instance must reside in the same availability zone. If you receive an error during launch, verify that the selected availability zone matches the zone where your volumes are provisioned.
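You can compare the zones Nova and Cinder expose from the CLI before launching:

```shell
# List availability zones as seen by the compute and volume services;
# the zone you launch into must appear in both lists.
openstack availability zone list --compute
openstack availability zone list --volume
```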

Image minimum disk size: The volume must be at least as large as the image's minimum disk requirement. If you specify a volume size smaller than the image requires, the launch will fail with an error indicating the volume is too small.
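The image's minimum disk requirement can be checked ahead of time so you can size the volume accordingly (IMAGE_ID is a placeholder):

```shell
# min_disk is the smallest root volume size (in GB) the image will accept;
# 0 means the image declares no minimum.
openstack image show IMAGE_ID -c min_disk -f value
```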

Volume not marked as bootable: If you attempt to launch from an existing volume and the option is unavailable, the volume is not flagged as bootable. Enable the bootable flag in the volume edit dialog or via CLI before attempting to boot from it:

openstack volume set --bootable VOLUME_ID

Next Steps

With volume-backed instances running, the next logical areas to explore include:

  • Volume snapshots: Create consistent point-in-time snapshots of your root volume for backup and cloning workflows
  • Volume backups: Export root volumes to object storage for off-site retention
  • Volume types: Select the right storage tier (SSD vs HDD performance classes) for root disk performance requirements
  • Instance snapshots vs volume snapshots: Understand which snapshot mechanism captures what data for your recovery planning

Boot-from-volume is a foundational capability for running stateful workloads in OpenStack with confidence. Getting the delete-on-termination policy right from the start prevents both accidental data loss and unwanted volume accumulation in your project.