
Soft Affinity vs Hard Affinity: Choosing the Right Policy in OpenStack

Understanding Server Group Placement Policies in OpenStack

When deploying instances in OpenStack, you can use server groups to control how Nova's scheduler places your virtual machines across physical compute nodes. Server groups support four placement policies: affinity, anti-affinity, soft-affinity, and soft-anti-affinity. Understanding the differences between hard and soft policies is essential for balancing application performance, availability, and resource utilization.

This guide compares hard and soft affinity policies, explains their trade-offs, and helps you choose the right policy for your workload requirements.

What Are Server Groups?

A server group is an OpenStack construct that defines scheduling rules for a collection of instances. When you create instances within a server group, Nova's filter scheduler applies the group's policy to determine which physical hosts should run those instances.

Server groups address two common deployment patterns:

Affinity places instances on the same physical host to minimize network latency and maximize throughput between tightly coupled application components.

Anti-affinity spreads instances across different physical hosts to improve fault tolerance and ensure that a single hardware failure does not impact all instances in the group.
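As a concrete example, you can create a server group and boot an instance into it with the OpenStack CLI (the group and instance names here are illustrative, and `<group-uuid>` is a placeholder for the UUID returned by the first command):

```shell
# Create a server group with the hard affinity policy
openstack server group create --policy affinity app-group

# Boot an instance into the group via a scheduler hint;
# Nova applies the group's policy when placing this instance
openstack server create \
  --image cirros \
  --flavor m1.small \
  --hint group=<group-uuid> \
  app-vm-1
```

Every instance booted with the same `--hint group=` value is scheduled under that group's policy.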

Hard Affinity Policies

Hard affinity policies enforce strict placement rules. The scheduler will fail to create an instance if it cannot satisfy the policy requirements.

Affinity Policy

A server group with the affinity policy ensures that all servers in that group are always placed on the same physical compute node. If the scheduler cannot place a new instance on the host where other group members are running, the scheduling operation fails.

Use cases for hard affinity:

You run database replication with extremely high throughput requirements between primary and replica nodes.

Your application components exchange large amounts of data and require minimal network latency.

You need guaranteed co-location for licensing or compliance reasons.

Anti-Affinity Policy

A server group with the anti-affinity policy ensures that servers in that group are never placed on the same physical compute node. If the scheduler cannot find enough distinct hosts to honor the policy, the scheduling operation fails.

Use cases for hard anti-affinity:

You deploy highly available services where losing multiple instances to a single hardware failure is unacceptable.

Your application requires strict separation for security or compliance reasons.

You need guaranteed fault isolation across availability zones or racks.
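Creating a hard anti-affinity group follows the same pattern (the group name is illustrative):

```shell
# Spread group members across distinct compute hosts
openstack server group create --policy anti-affinity ha-group

# Booting more instances into the group than there are available
# distinct hosts fails with a "No valid host was found" error
```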

Soft Affinity Policies

Soft affinity policies provide best-effort placement. The scheduler prefers to honor the policy but will allow violations if resource constraints make strict enforcement impossible.

Soft-Affinity Policy

A server group with the soft-affinity policy tries to place all servers on the same physical compute node. If co-location is not possible due to capacity constraints or other filters, the scheduler places the instance on the next best available host based on weights.

Nova's ServerGroupSoftAffinityWeigher assigns each candidate host a weight based on the number of instances from the same server group already running there, so hosts with more group members score higher. This weigher guides the scheduler to prefer hosts where the group is already co-located.

Use cases for soft-affinity:

You want to minimize latency between instances but need flexibility when cluster capacity is limited.

Your application benefits from co-location but can tolerate occasional instances being placed on different hosts.

You want instance creation to succeed even when the preferred host lacks capacity.
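The soft policies were introduced in compute API microversion 2.15, so the client must request at least that version when creating the group (the group name here is illustrative):

```shell
# Soft policies require compute API microversion 2.15 or later
openstack --os-compute-api-version 2.15 \
  server group create --policy soft-affinity cache-group
```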

Soft-Anti-Affinity Policy

A server group with the soft-anti-affinity policy tries to spread servers across different physical compute nodes. If sufficient distinct hosts are unavailable, the scheduler allows multiple instances from the group to share a host.

Nova's ServerGroupSoftAntiAffinityWeigher weights candidate hosts by the number of instances from the same server group already running there, so hosts with fewer group members score higher. This weigher guides the scheduler to spread the group's instances.

Use cases for soft-anti-affinity:

You want to improve availability by spreading instances but need operational flexibility.

Your application can tolerate occasional co-location of instances during capacity constraints.

You prioritize successful instance creation over strict fault isolation.

When to Use Hard Affinity

Choose affinity policy when:

Low latency is critical. Applications like in-memory databases or high-frequency trading systems require guaranteed co-location to minimize network round-trip time.

You have sufficient capacity. The physical host has enough resources to accommodate all instances in the server group without exhausting CPU, memory, or storage.

Failure of co-location is unacceptable. Your architecture or licensing requirements mandate that specific components run on the same hardware.

Trade-offs with hard affinity:

If the host reaches capacity, you cannot add more instances to the group until resources become available.

A single hardware failure affects all instances in the group, reducing overall fault tolerance.

You sacrifice flexibility for guaranteed co-location.

When to Use Hard Anti-Affinity

Choose anti-affinity policy when:

Fault tolerance is mandatory. Services like distributed databases, load balancer nodes, or control plane components require strict separation to survive single-host failures.

Capacity is sufficient. Your cluster has enough physical hosts to accommodate all instances in the group on distinct nodes.

Compliance requires isolation. Regulatory or security requirements mandate that certain workloads cannot share physical infrastructure.

Trade-offs with hard anti-affinity:

You may exhaust available hosts before exhausting compute capacity, leaving resources unused.

Scheduling new instances fails if no distinct hosts are available, even when total cluster capacity is sufficient.

You sacrifice resource efficiency for guaranteed fault isolation.

When to Use Soft-Affinity

Choose soft-affinity policy when:

Latency matters but availability matters more. You want to minimize network latency between instances but cannot risk scheduling failures during capacity constraints.

You scale dynamically. Auto-scaling groups or bursty workloads need the flexibility to create instances even when preferred hosts are full.

Capacity planning is uncertain. You operate in environments where resource availability fluctuates and strict co-location would cause operational issues.

Trade-offs with soft-affinity:

Instances may be placed on different hosts, reducing the performance benefit of co-location.

You get best-effort optimization rather than guaranteed placement.

Monitoring and alerting should track policy violations to understand actual placement.

When to Use Soft-Anti-Affinity

Choose soft-anti-affinity policy when:

Availability is important but not mission-critical. You want to spread instances for improved resilience but can tolerate occasional co-location.

Capacity is limited. Small clusters or resource-constrained environments benefit from flexible placement that maximizes utilization.

Operational simplicity matters. You want fault tolerance without the complexity of managing strict anti-affinity failures.

Trade-offs with soft-anti-affinity:

Multiple instances may land on the same host during capacity constraints, reducing fault isolation.

A single hardware failure could affect more instances than intended.

You trade strict availability guarantees for operational flexibility.

Scheduler Behavior with Soft Policies

When using soft policies, Nova's filter scheduler uses weighting functions to guide placement decisions.

Soft-affinity weighting: The ServerGroupSoftAffinityWeigher assigns higher weights to hosts that already run more instances from the server group. The scheduler prefers these hosts but will select other hosts if constraints prevent placement.

Soft-anti-affinity weighting: The ServerGroupSoftAntiAffinityWeigher assigns higher weights to hosts that run fewer instances from the server group. The scheduler spreads instances but allows co-location when necessary.

This best-effort approach ensures that instance creation succeeds even when ideal placement is impossible due to resource limits, filter constraints, or other scheduling factors.
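The influence of these weighers can be tuned on the scheduler nodes through multipliers in nova.conf (the values shown are the defaults):

```ini
[filter_scheduler]
# Higher values make the scheduler favor co-location more strongly
soft_affinity_weight_multiplier = 1.0
# Higher values make the scheduler favor spreading more strongly
soft_anti_affinity_weight_multiplier = 1.0
```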

Monitoring and Validation

Regardless of which policy you choose, monitor actual instance placement to ensure it aligns with your expectations.

Check instance placement across hosts:

openstack server list --all-projects -c ID -c Name -c Host

Verify server group membership:

openstack server group show <group-id>

Inspect instance host assignments:

openstack server show <instance-id> -c OS-EXT-SRV-ATTR:host

For soft policies, tracking placement over time helps you understand how often the scheduler violates the preferred policy and whether you need to adjust capacity or change to a hard policy.
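A quick way to see the distribution at a glance is to count instances per host (this requires admin credentials to see host fields, and assumes standard Unix text tools are available):

```shell
# Count instances per hypervisor host, most loaded first
openstack server list --all-projects --long -f value -c Host \
  | sort | uniq -c | sort -rn
```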

Combining Policies with Other Scheduler Features

Server group policies work alongside other Nova scheduler filters and weights. Consider how affinity policies interact with:

Availability zones: Anti-affinity policies can be combined with availability zone hints to spread instances across zones for additional fault tolerance.

Host aggregates: You can restrict server groups to specific aggregates based on hardware characteristics, licensing, or compliance requirements.

Complex anti-affinity: OpenStack supports max_server_per_host rules for anti-affinity policies, allowing more flexible placement where a limited number of instances per host is acceptable.

Flavor extra specs: Combine affinity policies with CPU pinning, NUMA topology, or other flavor-based scheduling constraints for optimized performance.
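The max_server_per_host rule requires compute API microversion 2.64 and a recent python-openstackclient that supports the `--rule` option; a sketch (the group name is illustrative):

```shell
# Allow up to two group members per host instead of strict one-per-host
openstack --os-compute-api-version 2.64 \
  server group create --policy anti-affinity \
  --rule max_server_per_host=2 web-group
```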

Common Mistakes to Avoid

Using hard anti-affinity in small clusters. If you have fewer physical hosts than instances in a server group with anti-affinity, scheduling will fail. Use soft-anti-affinity or increase cluster size.

Ignoring capacity planning with hard affinity. A single host may not have enough resources to accommodate all instances. Monitor host utilization and plan for capacity needs.

Assuming soft policies always honor preferences. Soft policies are best-effort. Always validate actual placement and monitor for violations.

Expecting to change a server group's policy. Once a server group is created with a specific policy, you cannot change it. Plan your policy choice carefully before deploying instances.

Migrating Between Policies

If you need to change from one policy to another, you must:

  1. Create a new server group with the desired policy
  2. Boot new instances into the new server group
  3. Migrate workloads from old instances to new instances
  4. Delete old instances and the old server group

There is no direct way to move instances between server groups or change a server group's policy after creation.
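The steps above can be sketched with the CLI as follows (all names and UUIDs are placeholders):

```shell
# 1. Create a new server group with the desired policy
openstack server group create --policy soft-anti-affinity new-group

# 2. Boot replacement instances into the new group
openstack server create --image <image> --flavor <flavor> \
  --hint group=<new-group-uuid> replacement-vm-1

# 3. Cut workloads over to the replacements (application-specific)

# 4. Remove the old instances, then the old group
openstack server delete old-vm-1
openstack server group delete <old-group-uuid>
```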

Choosing the Right Policy for Your Application

Use this decision framework:

Start with your availability requirements. If a single hardware failure taking out multiple instances is unacceptable, choose anti-affinity. If co-location improves performance, consider affinity; if it helps but is not mandatory, a soft policy gives you flexibility.

Evaluate capacity constraints. If you have limited physical hosts or uncertain capacity, soft policies provide operational flexibility.

Consider operational complexity. Hard policies require more careful capacity planning and monitoring. Soft policies are more forgiving but require tracking actual placement.

Test both approaches. Deploy a test workload with each policy and measure performance, availability, and scheduling success rates in your environment.

Summary

OpenStack server groups provide four placement policies to control how instances are distributed across physical hosts. Hard policies enforce strict placement rules and fail scheduling if requirements cannot be met, while soft policies provide best-effort placement with flexibility during resource constraints.

Choose affinity or soft-affinity to minimize latency by co-locating instances on the same host. Choose anti-affinity or soft-anti-affinity to improve fault tolerance by spreading instances across hosts.

Hard policies are appropriate when you have sufficient capacity and strict placement guarantees are mandatory. Soft policies are better when operational flexibility, resource utilization, and guaranteed instance creation are priorities.

Understanding the trade-offs between hard and soft policies helps you balance application performance, availability, and operational complexity based on your specific workload requirements.