
AWS Cost Optimization: $2 Million in Lessons from the Trenches

Real AWS cost savings strategies from engineers who saved millions. Learn DynamoDB, EBS, and RDS optimization tactics.

When a Reddit discussion about DynamoDB cost optimization hit 582 upvotes in r/aws, something unusual happened. The original post got removed by moderators, but the community kept sharing their own cost optimization victories. What emerged was a masterclass in AWS efficiency, including one engineer who reported saving $2 million per year through systematic resource tuning.

The collective wisdom from 125 comments reveals patterns that apply far beyond DynamoDB. Here is what experienced cloud practitioners actually do to keep their AWS bills under control.

The $2 Million Annual Savings Story

One commenter dropped a figure that deserves attention:

"Saved my organization $2 million dollars/year solely by tuning resources and cutting waste."

That number was not achieved through a single discovery. It came from methodical work across the entire AWS footprint. The approach mirrors what other commenters described: examining the highest cost services systematically and making incremental improvements that compound over time.

Another practitioner shared their framework: reviewing one major service per month "under the microscope." This approach delivered a 25% cost reduction across their organization. Not overnight, but consistently. They described AWS billing as "death by a thousand cuts," a problem whose fix demands the same incremental discipline.

DynamoDB: Beyond Capacity Modes


The original post discussed switching from provisioned to on-demand capacity. The comments expanded on DynamoDB optimization with strategies that many teams overlook.

Standard Infrequent Access Storage

DynamoDB offers two table classes. Standard-Infrequent Access (Standard-IA) cuts storage costs by roughly 60% compared with the Standard class, in exchange for about 25% higher read and write costs. For tables holding large datasets with lower read/write frequency, this trade often works out favorably.

The catch: you cannot use reserved capacity with Standard-IA. That creates a decision matrix where workload patterns determine the optimal configuration.
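The break-even point is easy to model. The sketch below uses the ratios from this section (storage roughly 60% cheaper, access roughly 25% more expensive on Standard-IA); the absolute dollar prices are illustrative assumptions, not quotes from AWS pricing pages, so plug in your region's actual rates.

```python
# Sketch: compare monthly DynamoDB cost under the Standard and
# Standard-IA table classes. Absolute prices are assumed, illustrative
# figures; only the ratios (-60% storage, +25% access) come from the text.

STANDARD_STORAGE_PER_GB = 0.25   # assumed $/GB-month, Standard class
IA_STORAGE_PER_GB = 0.10         # ~60% less
IA_ACCESS_MULTIPLIER = 1.25      # ~25% more per read/write

def monthly_cost(storage_gb, access_cost, ia=False):
    """Storage plus request cost for one table under either class."""
    if ia:
        return storage_gb * IA_STORAGE_PER_GB + access_cost * IA_ACCESS_MULTIPLIER
    return storage_gb * STANDARD_STORAGE_PER_GB + access_cost

# A 500 GB table with $20/month of request traffic:
standard = monthly_cost(500, 20.0)
ia = monthly_cost(500, 20.0, ia=True)
print(f"Standard: ${standard:.2f}, Standard-IA: ${ia:.2f}")
```

For storage-heavy, access-light tables like this one, Standard-IA wins by a wide margin; as request spend grows relative to storage, the 25% access premium erodes the advantage.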

Reserved Capacity for Provisioned Workloads

Teams that genuinely need provisioned capacity should evaluate reserved capacity purchases:

  • 1 year commitment with partial upfront: 54% savings
  • 3 year commitment with partial upfront: approximately 77% savings

These numbers make provisioned capacity competitive again for stable, predictable workloads. The key is having enough historical data to commit confidently to capacity levels.
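The arithmetic is straightforward but worth writing down, since the commitment decision hinges on it. This sketch applies the discount percentages quoted above to a hypothetical annual provisioned-capacity bill; the $120,000 figure is a placeholder.

```python
# Sketch: effective annual cost of DynamoDB provisioned capacity under
# the reserved-capacity discounts quoted in the thread. The discount
# percentages come from the article; the bill amount is a placeholder.

DISCOUNTS = {"none": 0.0, "1yr_partial": 0.54, "3yr_partial": 0.77}

def annual_cost(provisioned_annual, term="none"):
    """Annual spend after applying the reserved-capacity discount."""
    return provisioned_annual * (1 - DISCOUNTS[term])

for term in DISCOUNTS:
    print(f"{term}: ${annual_cost(120_000, term):,.0f}")
```

The catch, as always with reservations: you pay for the committed capacity whether or not you use it, which is why the historical-data requirement matters.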

When On-Demand Actually Wins

On-demand pricing works best for unpredictable workloads or tables with significant traffic variation. AWS reduced on-demand DynamoDB pricing in late 2024, improving the economics further. But the comments made clear that on-demand is not universally cheaper. Workloads with consistent, predictable traffic often benefit from provisioned capacity with reservations.

The GP2 to GP3 Migration Nobody Talks About

Multiple commenters flagged the same overlooked optimization: migrating Elastic Block Store (EBS) volumes from GP2 to GP3.

"GP3 is a pretty good cost savings over GP2."

GP3 volumes provide better baseline performance at a lower cost than GP2. AWS introduced GP3 as the successor generation, yet many organizations still run GP2 volumes created before the transition. The migration requires no downtime or data movement; it is a single in-place volume modification.

One commenter pointed out that legacy Terraform and CloudFormation templates often default to GP2 because they were written before GP3 existed. Updating infrastructure as code templates prevents the problem from recurring as new resources get deployed.

The performance improvement matters too. GP3 volumes deliver 3,000 IOPS and 125 MB/s throughput regardless of volume size, while GP2 performance scales with capacity. For smaller volumes, GP3 delivers better performance at lower cost.
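Finding the candidates is the easy part of this audit. The sketch below walks a list of volume records shaped like the EC2 `describe_volumes` response and estimates the monthly saving per volume; the per-GB-month prices are illustrative us-east-1-style assumptions, so treat the dollar figures as rough estimates.

```python
# Sketch: flag gp2 volumes and estimate monthly savings from a gp3
# migration. Volume dicts mimic the ec2 describe_volumes response shape;
# the per-GB-month prices are assumed, illustrative figures.

GP2_PER_GB = 0.10   # assumed $/GB-month
GP3_PER_GB = 0.08   # assumed $/GB-month (~20% less)

def gp3_candidates(volumes):
    """Return (volume_id, estimated monthly saving) for each gp2 volume."""
    out = []
    for v in volumes:
        if v["VolumeType"] == "gp2":
            saving = v["Size"] * (GP2_PER_GB - GP3_PER_GB)
            out.append((v["VolumeId"], round(saving, 2)))
    return out

sample = [
    {"VolumeId": "vol-1", "VolumeType": "gp2", "Size": 500},
    {"VolumeId": "vol-2", "VolumeType": "gp3", "Size": 200},
]
print(gp3_candidates(sample))  # only the gp2 volume qualifies
```

The migration itself is one live call per volume, `aws ec2 modify-volume --volume-type gp3`, which is exactly why commenters treat this as low-hanging fruit.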

RDS Storage Optimization: Stop Overpaying for IOPS

A recurring theme in the comments involved Relational Database Service (RDS) storage types. Multiple engineers reported switching from Provisioned IOPS (io1 or io2) to General Purpose SSD (gp3) with significant savings.

"Changed from provisioned IOPS to general SSDs... general SSDs suited our needs."

One commenter noted that AWS console defaults can steer teams toward io1/io2 storage when gp3 would suffice:

"Naughty of AWS to keep preselecting io1."

The decision should be based on actual IOPS requirements. Provisioned IOPS storage makes sense for databases that consistently hit high IOPS numbers. But many production databases run well below the gp3 baseline of 3,000 IOPS, making provisioned IOPS an expensive overspecification.

CloudWatch metrics for RDS show actual IOPS consumption. Comparing those metrics against gp3 baseline performance reveals whether the premium storage tier is justified.
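That comparison can be reduced to a single check. In the sketch below, `iops_samples` stands in for combined ReadIOPS and WriteIOPS datapoints pulled from CloudWatch, and the 3,000 figure is the gp3 baseline cited above; the 80% headroom margin is an assumption you should tune to your own risk tolerance.

```python
# Sketch: decide whether a database actually needs provisioned IOPS.
# iops_samples stands in for ReadIOPS + WriteIOPS datapoints from
# CloudWatch; the headroom margin is an assumed safety factor.

GP3_BASELINE_IOPS = 3000

def needs_provisioned_iops(iops_samples, headroom=0.8):
    """True only if observed peak IOPS exceeds `headroom` of the gp3 baseline."""
    return max(iops_samples) > GP3_BASELINE_IOPS * headroom

# A database peaking at 900 IOPS sits far below the gp3 baseline,
# so io1/io2 would be an expensive overspecification here:
print(needs_provisioned_iops([300, 650, 900, 720]))
```

Run the check against peak-period metrics, not averages; a database that is quiet on average but spikes during batch windows needs to be sized for the spikes.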

Instance Storage for Transient Data

One commenter shared an architecture pattern that reduces EBS costs entirely for certain workloads:

"Using EC2 instance-provided storage for transient video processing data, EBS for permanent storage."

Instance store volumes come included with certain EC2 instance types at no additional cost. They provide high-performance local storage that disappears when the instance stops. For processing pipelines where intermediate data does not need durability, instance storage eliminates EBS charges completely.

This approach requires architectural consideration. The application must tolerate data loss on instance termination. But for batch processing, video transcoding, or other ephemeral workloads, the economics are compelling.

Building a Sustainable Review Process


The comments converged on process recommendations that separate teams with controlled cloud costs from those constantly surprised by bills.

Monthly Service Reviews

The "one service per month under the microscope" approach provides sustainable depth without overwhelming the team. Each month, one engineer examines:

  • Actual utilization versus provisioned capacity
  • Pricing tier alignment with usage patterns
  • Storage class and instance type optimization
  • Unused or orphaned resources

Over a year, every major service gets reviewed. The findings accumulate into substantial savings.

Tagging and Cost Allocation

Multiple commenters emphasized resource tagging as foundational to cost management:

  • Tag by team or business unit for billing allocation
  • Tag by environment (production, staging, development) to identify optimization candidates
  • Use AWS Config to enforce tagging compliance

Without tags, cost attribution becomes guesswork. With consistent tagging, teams take ownership of their portion of the bill.
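The rollup itself is simple once tags exist. This sketch groups cost line items, shaped loosely like rows from a cost and usage export, by a hypothetical `team` tag; untagged spend lands in its own bucket, which is usually the first thing a tagging initiative has to shrink.

```python
# Sketch: roll up cost line items by a "team" tag. The line-item dicts
# are an assumed, simplified shape, not a real billing export schema.
from collections import defaultdict

def cost_by_team(line_items):
    """Sum cost per team tag; untagged spend gets its own bucket."""
    totals = defaultdict(float)
    for item in line_items:
        team = item.get("tags", {}).get("team", "UNTAGGED")
        totals[team] += item["cost"]
    return dict(totals)

items = [
    {"cost": 120.0, "tags": {"team": "payments"}},
    {"cost": 45.5, "tags": {"team": "search"}},
    {"cost": 300.0, "tags": {}},
]
print(cost_by_team(items))
```

If the UNTAGGED bucket dominates the output, tagging enforcement belongs ahead of any per-team optimization work.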

Budget Alerts with Escalation

AWS Budgets supports alerts that notify responsible parties when spending approaches thresholds. The comments suggested configuring alerts to escalate to team leads when budgets exceed acceptable ranges. This creates accountability without requiring constant manual monitoring.
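One way to express that escalation is as a pair of threshold notifications: the owning team at 80% of budget, the team lead at 100%. The sketch below builds a list shaped like the AWS Budgets `NotificationsWithSubscribers` parameter; the thresholds and email addresses are placeholder assumptions.

```python
# Sketch: build an escalating alert list for an AWS Budgets budget.
# The dict shape follows the budgets create_budget API's
# NotificationsWithSubscribers parameter; thresholds and addresses
# are placeholder assumptions.

def escalating_notifications(team_email, lead_email):
    """Team alerted at 80% of actual spend, team lead at 100%."""
    def notify(threshold, email):
        return {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": threshold,       # percent of budgeted amount
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
        }
    return [notify(80.0, team_email), notify(100.0, lead_email)]

alerts = escalating_notifications("team@example.com", "lead@example.com")
print([n["Notification"]["Threshold"] for n in alerts])
```

The same structure accepts a FORECASTED notification type, which fires on projected overspend rather than actual, if you want earlier warning.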

Cost Explorer as a Regular Habit

AWS Cost Explorer provides the visibility needed for optimization work. Several commenters mentioned making Cost Explorer reviews a weekly or biweekly habit rather than a monthly afterthought. Patterns become visible faster when reviewed frequently.
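The weekly pull is easy to standardize. This sketch builds a request shaped like the Cost Explorer `GetCostAndUsage` parameters, grouped by service so week-over-week movers stand out; the dates are placeholders.

```python
# Sketch: request shape for a recurring Cost Explorer pull, grouped by
# service. The dict mirrors the ce get_cost_and_usage parameters;
# the date range is a placeholder.

def weekly_cost_request(start, end):
    """Daily unblended cost, grouped by service, for one review window."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
    }

req = weekly_cost_request("2025-01-01", "2025-01-08")
print(req["Granularity"], req["GroupBy"][0]["Key"])
```

Grouping by SERVICE mirrors the "one service per month" review: the services that dominate this output are the ones worth putting under the microscope next.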

The Broader Pattern: Infrastructure Archaeology

These optimization opportunities share a common characteristic. They exist because infrastructure gets configured once and then left alone while business requirements evolve. The configuration that made sense two years ago may be wildly inefficient today.

Infrastructure archaeology, the practice of systematically revisiting old configurations, deserves a place in every cloud team's workflow. The engineers in this Reddit discussion found savings by asking simple questions:

  • When was this resource last evaluated?
  • Do current traffic patterns match the original assumptions?
  • Have pricing changes made alternative configurations more attractive?
  • What would we configure differently if starting fresh today?

The 70% savings that started this discussion came from exactly this kind of review. The $2 million annual savings came from doing it comprehensively.

Why Transparent Pricing Simplifies Everything

The complexity in this discussion (capacity modes, storage classes, reserved instances, provisioned IOPS) reflects the pricing structures that hyperscalers have built. Each configuration decision creates an optimization opportunity, but also a potential cost trap.

At InMotion Cloud, we architect Virtual Private Cloud (VPC) environments with straightforward pricing specifically because teams should spend their energy on building applications, not auditing billing configurations. Transparent, predictable pricing means your infrastructure costs match your expectations without requiring monthly archaeology expeditions.

The engineers in this Reddit thread are clearly skilled practitioners. They have the knowledge to navigate AWS pricing complexity. But the time they spend optimizing configurations is time not spent on product development, customer features, or innovation.

Cloud infrastructure should enable your business, not create a second job managing billing complexity.

Taking Action

If you recognize your organization in these stories, start with the high-impact items the community identified:

  1. Audit DynamoDB tables for capacity mode and storage class alignment with actual usage
  2. Identify GP2 volumes and evaluate GP3 migration
  3. Review RDS storage types against actual IOPS consumption
  4. Update infrastructure templates to use current generation defaults
  5. Establish a monthly review cadence for one major service at a time
  6. Implement tagging standards for cost allocation and accountability

The Reddit community proved that these optimizations deliver real savings. The $2 million example may be exceptional, but the 25% reduction from systematic review is achievable for most organizations with AWS footprints of meaningful size.

Your infrastructure has these savings hiding somewhere. The question is whether you find them through intentional review or continue paying until someone eventually notices.