Dasnuve

AWS Cost Optimization Checklist: 10 Ways to Cut Your Bill by 40%

AWS bills grow by default. Most cloud environments carry 15–40% in reducible waste. Here are the 10 most impactful places to look and what each saves.

AWS · cloud infrastructure · cost optimization · DevOps

Your AWS bill grows by default. Resources get provisioned for projects that never launch. Development environments stay running over weekends and holidays. Databases are sized for peak load and run at 5% utilization during normal operations. S3 buckets accumulate objects on the wrong storage class. Every growing engineering team has this problem.

The good news: most AWS environments carry 15–40% in reducible waste, and the highest-impact items are not complicated to address. Here are 10 areas to audit.

1. Right-size EC2 and RDS instances

The most common and most impactful source of waste. Use AWS Cost Explorer's Right Sizing Recommendations to identify instances running at consistently low CPU and memory utilization. A db.r6g.2xlarge running at 12% average CPU is likely a db.r6g.xlarge, a 50% cost reduction with no architectural change, and may even fit on a db.r6g.large. The savings on a single overprovisioned database instance often run $400–$1,200/month.

Pay attention to memory utilization too, not just CPU. RDS instances are frequently chosen based on CPU requirements while memory is overlooked, resulting in instances sized for a workload that never materializes.
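The arithmetic behind a right-sizing decision is simple enough to sketch. The hourly rates below are illustrative placeholders, not current AWS pricing; check the pricing page for your region and instance family:

```python
# Illustrative right-sizing math. Rates are example figures, not a pricing
# quote -- on RDS and EC2, dropping one instance size roughly halves the rate.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float) -> float:
    """On-demand monthly cost for a continuously running instance."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

current = monthly_cost(1.04)   # e.g. a 2xlarge-class rate (assumed)
resized = monthly_cost(0.52)   # one size down: half the vCPU/memory, half the rate
print(f"current ${current}/mo, resized ${resized}/mo, "
      f"saving ${round(current - resized, 2)}/mo")
```

Run this with your actual rates from the AWS pricing page before acting on a recommendation; the ratio matters more than the absolute numbers.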

2. Purchase Reserved Instances or Compute Savings Plans

On-demand pricing exists for unpredictable workloads. If you have been running the same instance family in the same region for six months or more, you are a strong candidate for a 1- or 3-year commitment that cuts those costs 40–60%. Compute Savings Plans offer similar savings with more flexibility: they apply to any EC2 usage regardless of instance family, size, or region.

Even partial coverage helps. If your baseline compute usage is $3,000/month and you purchase Savings Plans covering $2,000, you still capture most of the benefit while retaining flexibility for the remaining on-demand spend.
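To make the partial-coverage point concrete, here is a minimal model of a steady on-demand baseline with a Savings Plan covering part of it. The 40% discount rate is an assumption for illustration, not a quoted AWS figure:

```python
def blended_monthly_cost(on_demand_baseline: float,
                         sp_commitment: float,
                         sp_discount: float) -> float:
    """Monthly cost when a Savings Plan covers part of a steady baseline.

    sp_commitment is expressed in on-demand-equivalent dollars; the
    discount rate is an illustrative assumption.
    """
    covered = min(sp_commitment, on_demand_baseline)
    uncovered = on_demand_baseline - covered
    return round(covered * (1 - sp_discount) + uncovered, 2)

full_coverage = blended_monthly_cost(3000, 3000, 0.40)     # $1,800/mo
partial_coverage = blended_monthly_cost(3000, 2000, 0.40)  # $2,200/mo
```

Covering two thirds of the baseline captures two thirds of the maximum saving ($800 of $1,200 per month here) while leaving the variable tail on demand.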

3. Move eligible workloads to Spot Instances

Batch jobs, data processing pipelines, CI/CD runners, and other fault-tolerant workloads can run on Spot Instances at 70–90% off on-demand pricing. Spot Instances can be interrupted with a 2-minute notice, so the workload needs to handle interruption gracefully, either by checkpointing progress or by being stateless and retriable.

If your workloads qualify, Spot is often the single largest cost reduction available. A data processing job running 8 hours per day on an on-demand r5.2xlarge ($0.504/hour) costs roughly $120/month. The same job on Spot might cost $15–35/month.
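The comparison above works out as follows; the on-demand rate is from the example, and the 70–90% Spot discount range is typical but fluctuates with market demand:

```python
def job_monthly_cost(hourly_rate: float, hours_per_day: float,
                     days: int = 30) -> float:
    """Monthly cost of a batch job that runs a fixed number of hours per day."""
    return round(hourly_rate * hours_per_day * days, 2)

on_demand = job_monthly_cost(0.504, 8)           # ~$121/mo on demand
spot_best = job_monthly_cost(0.504 * 0.10, 8)    # at a 90% Spot discount
spot_worst = job_monthly_cost(0.504 * 0.30, 8)   # at a 70% Spot discount
```

Spot prices vary by instance type, AZ, and time; check the Spot pricing history for your instance family before committing a pipeline to it.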

4. Audit and delete orphaned EBS volumes

When EC2 instances are terminated, their attached EBS volumes are often left behind; AWS only deletes them automatically if the volume's DeleteOnTermination flag is set. At $0.08/GB-month for gp3 storage ($0.10 for gp2), an orphaned 500GB volume costs $40–50/month. This is small per volume but compounds quickly in active environments.

Use the AWS console Volumes filter (State: Available) or AWS Trusted Advisor to find volumes with no EC2 attachment. Snapshot them first if there is any uncertainty, then delete.
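The filtering logic behind such an audit is straightforward. In a real script the volume list would come from `ec2.describe_volumes()` via boto3; here it is inline sample data so the sketch is self-contained:

```python
# Sketch of an orphaned-EBS-volume audit. Volume dicts mirror the shape of
# describe_volumes() output; the sample data below is invented for illustration.
GP3_PER_GB_MONTH = 0.08  # list price; confirm for your region

def orphaned_volume_cost(volumes: list[dict]) -> tuple[list[dict], float]:
    """Return the unattached volumes and their combined monthly cost."""
    orphans = [v for v in volumes if v["State"] == "available"]
    monthly = sum(v["Size"] * GP3_PER_GB_MONTH for v in orphans)
    return orphans, round(monthly, 2)

volumes = [
    {"VolumeId": "vol-aaa", "State": "in-use", "Size": 100},
    {"VolumeId": "vol-bbb", "State": "available", "Size": 500},
    {"VolumeId": "vol-ccc", "State": "available", "Size": 200},
]
orphans, cost = orphaned_volume_cost(volumes)  # two orphans, $56/month
```

Snapshot anything uncertain before deleting; a snapshot of a 500GB volume costs far less per month than the live volume.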

5. Use S3 Intelligent-Tiering for irregular access patterns

Standard S3 storage is $0.023/GB/month. Glacier Instant Retrieval is $0.004/GB/month, 83% cheaper. S3 Intelligent-Tiering automatically moves objects between access tiers based on access patterns. Objects not accessed for 30 days move to Infrequent Access. Objects not accessed for 90 days move to Archive Instant Access.

For large buckets containing data that was actively used at some point but is accessed irregularly (backups, logs, processed data files), Intelligent-Tiering often cuts storage costs 40–60% with zero operational overhead.
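A rough model shows where that 40–60% comes from once objects have settled into tiers. The per-GB rates are us-east-1 list prices at time of writing, and the tier split is an illustrative assumption about access patterns:

```python
# Blended S3 cost once Intelligent-Tiering has sorted objects by access
# pattern. Rates and the 20/30/50 tier split are illustrative assumptions.
RATES_PER_GB = {
    "frequent": 0.023,          # S3 Standard-equivalent tier
    "infrequent": 0.0125,       # not accessed for 30 days
    "archive_instant": 0.004,   # not accessed for 90 days
}

def monthly_storage_cost(gb_by_tier: dict[str, float]) -> float:
    return round(sum(RATES_PER_GB[t] * gb for t, gb in gb_by_tier.items()), 2)

all_standard = monthly_storage_cost({"frequent": 10_000})    # $230/mo
tiered = monthly_storage_cost(
    {"frequent": 2_000, "infrequent": 3_000, "archive_instant": 5_000}
)  # $103.50/mo, a ~55% reduction
```

The actual split depends entirely on your access patterns; S3 Storage Lens or Storage Class Analysis can tell you what yours look like before you enable tiering.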

6. Review NAT Gateway data processing charges

NAT Gateway charges $0.045 per GB of data processed in addition to the hourly cost. Workloads that transfer large amounts of data from private subnets to the internet, particularly data pipelines pulling from external APIs or logging systems shipping to external destinations, can generate surprising NAT Gateway line items.

For S3 and DynamoDB specifically, use VPC Gateway Endpoints to bypass NAT Gateway entirely. The endpoints are free and route traffic over AWS's internal network. This is a 15-minute change that eliminates NAT Gateway charges for those services.
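A quick calculation makes clear why the data processing charge, not the hourly charge, is usually the problem. Rates below are us-east-1 list prices; the 5 TB/month transfer volume is a hypothetical workload:

```python
# NAT Gateway monthly cost model. Rates are us-east-1 list prices;
# confirm current pricing for your region.
NAT_HOURLY = 0.045   # per gateway-hour
NAT_PER_GB = 0.045   # per GB processed

def nat_monthly_cost(gb_processed: float, gateways: int = 1,
                     hours: int = 730) -> float:
    return round(gateways * hours * NAT_HOURLY + gb_processed * NAT_PER_GB, 2)

pipeline = nat_monthly_cost(5_000)  # hypothetical 5 TB/month through one gateway
```

At 5 TB/month the data charge ($225) is nearly seven times the hourly charge ($32.85), and a Gateway Endpoint removes the S3/DynamoDB share of it entirely.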

7. Consolidate underutilized RDS instances

Development, staging, and QA environments often have their own dedicated database instances that sit idle most of the day. Migrating several low-utilization databases to a single larger shared instance (with separate schemas or databases for isolation) can significantly reduce your RDS line item.

RDS Proxy is also worth evaluating for applications with connection pooling challenges. It reduces connection overhead and can allow you to use a smaller instance class than your connection count would otherwise require.

8. Set up AWS Budgets and automated alerts

This will not reduce your bill directly, but it prevents surprises from turning into disasters. Set a budget alert at 80% of your monthly target and a separate alert at 100%. Pair that with AWS Cost Anomaly Detection, which flags sudden deviations such as daily spend jumping more than 20% day-over-day. This catches runaway Lambda functions, misconfigured data transfer, and other sudden cost spikes before they compound.

Without alerts, overspending is often discovered when the monthly invoice arrives. With alerts, you have days to investigate and correct.
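The day-over-day check described above is simple to express; if you export daily spend from Cost Explorer (e.g. via the `get_cost_and_usage` API) you can run the same logic yourself. The spend series below is invented sample data:

```python
# Flag day-over-day spend jumps above a threshold. Input is a list of daily
# spend totals (oldest first); the sample series is invented for illustration.
def spend_spikes(daily_totals: list[float],
                 threshold: float = 0.20) -> list[tuple[int, float]]:
    """Return (day_index, fractional_increase) for each day that jumped
    more than `threshold` over the previous day."""
    spikes = []
    for i in range(1, len(daily_totals)):
        prev, cur = daily_totals[i - 1], daily_totals[i]
        if prev > 0 and (cur - prev) / prev > threshold:
            spikes.append((i, round((cur - prev) / prev, 2)))
    return spikes

spend = [100, 104, 99, 180, 176]   # day 3 jumps ~82% -- worth investigating
```

AWS Cost Anomaly Detection does this with an ML model rather than a fixed threshold, but the fixed-threshold version is a useful sanity check you fully control.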

9. Stop development environments outside business hours

A t3.medium EC2 instance running 24/7 costs approximately $30/month. The same instance running 9am–6pm Monday through Friday costs approximately $9/month, a 70% reduction. This applies equally to development databases, ECS tasks, and any compute that is only used during working hours.

The Instance Scheduler on AWS solution (or a simple EventBridge schedule triggering a Lambda function) can stop and start resources on a defined schedule. For ECS services, set the desired count to 0 outside business hours. For RDS, use the stop/start API on a scheduled basis (note: stopped RDS instances automatically restart after 7 days).
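The savings math generalizes to any schedule. The t3.medium rate below is the us-east-1 on-demand list price used in the example above:

```python
# Business-hours scheduling savings. The hourly rate is the us-east-1
# on-demand list price for t3.medium; substitute your own instance rate.
def scheduled_monthly_cost(hourly_rate: float, hours_per_day: float,
                           days_per_week: int,
                           weeks_per_month: float = 4.33) -> float:
    return round(hourly_rate * hours_per_day * days_per_week * weeks_per_month, 2)

always_on = round(0.0416 * 730, 2)                      # ~$30/mo, 24/7
business_hours = scheduled_monthly_cost(0.0416, 9, 5)   # ~$8/mo, 9h x 5 days
```

The same function answers "what does a 12-hour, 6-day schedule cost?" and similar variations when deciding how aggressive a schedule your team will tolerate.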

10. Audit cross-AZ data transfer costs

Cross-availability-zone data transfer costs $0.01/GB in each direction. This sounds trivial, but applications that make frequent internal API calls between services in different AZs, or that use public IP addresses for communication between services in the same VPC, accumulate meaningful transfer charges at scale.

Ensure your application targets same-AZ resources wherever possible using availability zone affinity in load balancer target groups. Use private IP addresses for inter-service communication, not public endpoints, to avoid unnecessary data transfer charges.
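Because the charge applies in each direction, chatty service-to-service traffic doubles up. A small model, with the 100 GB/day volume as a hypothetical example:

```python
# Cross-AZ transfer cost model. $0.01/GB each direction is the standard
# intra-region rate; traffic volume below is a hypothetical example.
CROSS_AZ_PER_GB_EACH_WAY = 0.01

def cross_az_monthly_cost(gb_per_day: float, days: int = 30) -> float:
    # Charged on both send and receive, so a round trip costs $0.02/GB.
    return round(gb_per_day * days * CROSS_AZ_PER_GB_EACH_WAY * 2, 2)

chatty_services = cross_az_monthly_cost(100)  # 100 GB/day -> $60/mo
```

$60/month for one service pair sounds minor until you multiply it across every cross-AZ hop in a microservice mesh.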

Where to start

Not all of these items carry equal weight. The highest-ROI starting points are typically Reserved Instance and Savings Plan purchases (immediate 40–60% savings on committed compute, no code changes required), followed by right-sizing (no commitment, immediate savings after a brief analysis), then orphaned resource cleanup.

A thorough AWS cost audit covering all 10 areas above, with actual recommendations and estimated savings per change, typically reveals $500–$5,000/month in reductions for a mid-sized cloud environment. We regularly find 15–40% in the first review, with the bulk of savings achievable within 30–60 days.

Work With Us

Want help putting this into practice?

We scope it first, price it second. Book a free discovery call and walk away with a clear picture of what to build and what it costs.

Book Your Free Discovery Call