Cristhian Villegas

10 Practical Ways to Reduce Your AWS Bill in 2026

Your AWS Bill Is Probably Higher Than It Should Be

If you've ever opened your AWS bill and felt a knot in your stomach, you're not alone. Studies show that up to 35% of cloud spending is wasted on idle, oversized, or forgotten resources. The good news? Most of that waste is fixable with a systematic approach.

In this guide, we'll walk through 10 practical strategies to reduce your AWS costs — from quick wins you can do today to long-term optimizations that compound savings over time.

[Image: Cloud infrastructure visualization representing AWS cloud costs. Source: NASA via Unsplash]

1. Audit What You're Actually Using

Before optimizing anything, you need visibility. The first step is understanding what you're paying for and whether it's actually being used.

AWS Cost Explorer

Cost Explorer is your starting point. It's free, built into the console, and gives you a breakdown of spending by service, region, and time period.

```bash
# Use the AWS CLI to get cost data for the last 30 days
aws ce get-cost-and-usage \
  --time-period Start=2026-03-01,End=2026-03-31 \
  --granularity MONTHLY \
  --metrics "BlendedCost" \
  --group-by Type=DIMENSION,Key=SERVICE

# Output example:
# EC2-Instances:    $1,245.00
# RDS:              $890.00
# S3:               $156.00
# NAT Gateway:      $312.00  <-- often a surprise!
```

AWS Trusted Advisor

Trusted Advisor scans your account for idle and underutilized resources. Even the free tier checks for:

  • Idle load balancers with no healthy instances
  • Unassociated Elastic IPs (you're charged for these!)
  • Low-utilization EC2 instances running below 10% CPU
  • Idle RDS instances with no connections
💡 Quick Win: Run Trusted Advisor right now. Most teams find at least $50-200/month in idle resources on the first scan.

2. Right-Size Your Instances

Right-sizing means matching your instance types to actual workload requirements. It's the single most impactful optimization for most teams.

How to Identify Oversized Instances

Use AWS Compute Optimizer (free) to get recommendations based on CloudWatch metrics over the last 14 days:

```bash
# Get right-sizing recommendations
aws compute-optimizer get-ec2-instance-recommendations \
  --filters name=Finding,values=OVER_PROVISIONED

# Common findings:
# t3.xlarge (4 vCPU, 16GB) at 8% CPU → recommend t3.medium (2 vCPU, 4GB)
# m5.2xlarge running a lightweight API → recommend t3.large
# r5.large with 3GB memory used out of 16GB → recommend t3.medium
```

The Right-Sizing Process

  1. Enable detailed CloudWatch monitoring (1-minute intervals) on your instances
  2. Collect at least 2 weeks of data to capture peak and off-peak patterns
  3. Use Compute Optimizer or third-party tools like Datadog/Spot.io to analyze
  4. Start with non-production environments — they're usually the most oversized
  5. Downsize one step at a time and monitor for performance impacts
⚠️ Don't over-optimize: Leave 20-30% CPU headroom for traffic spikes. Right-sizing doesn't mean under-sizing.
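To build intuition for what Compute Optimizer is doing, the core check can be sketched locally. This is an illustration, not an AWS tool: it assumes you have exported average CPU figures to plain text, one `<instance-id> <avg_cpu%>` per line, and flags instances below a threshold as downsize candidates.

```shell
THRESHOLD=10   # flag instances averaging below 10% CPU, matching the Trusted Advisor check

flag_candidates() {
  # Reads "<instance-id> <avg_cpu%>" lines on stdin, prints candidates
  awk -v t="$THRESHOLD" '$2 < t { printf "%s avg_cpu=%s%% -> downsize candidate\n", $1, $2 }'
}

# Example usage with sample data:
printf '%s\n' "i-0aaa 8.2" "i-0bbb 47.5" "i-0ccc 3.1" | flag_candidates
# i-0aaa avg_cpu=8.2% -> downsize candidate
# i-0ccc avg_cpu=3.1% -> downsize candidate
```

Treat the output only as a shortlist: confirm memory and network utilization too before downsizing, since CPU alone can hide a memory-bound workload.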

3. Reserved Instances vs. Savings Plans

If you have predictable, steady-state workloads, you're leaving money on the table by paying on-demand prices.

| Feature | Reserved Instances (RI) | Savings Plans (SP) |
| --- | --- | --- |
| Discount | Up to 72% | Up to 72% |
| Flexibility | Locked to instance type + region | Any instance type, any region |
| Applies to | EC2, RDS, ElastiCache, Redshift | EC2, Fargate, Lambda |
| Term | 1 or 3 years | 1 or 3 years |
| Best for | Stable, predictable workloads | Teams that change instance types often |

Which Should You Choose?

```bash
# Check your current RI coverage and utilization
aws ce get-reservation-utilization \
  --time-period Start=2026-03-01,End=2026-03-31 \
  --granularity MONTHLY

# Check Savings Plans recommendations
aws ce get-savings-plans-purchase-recommendation \
  --savings-plans-type COMPUTE_SP \
  --term-in-years ONE_YEAR \
  --payment-option NO_UPFRONT \
  --lookback-period-in-days SIXTY_DAYS
```
📊 Rule of Thumb: Use Savings Plans for EC2/Fargate/Lambda (more flexibility). Use Reserved Instances for RDS and ElastiCache (Savings Plans don't cover them).
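Before committing, it helps to run the break-even math yourself. The helper below just computes the monthly delta between an on-demand rate and an assumed Savings Plan effective rate; both rates here are illustrative, not quoted AWS prices.

```shell
monthly_savings() {
  # args: on_demand_rate sp_rate hours_per_month -> monthly savings in USD
  awk -v od="$1" -v sp="$2" -v h="$3" \
    'BEGIN { printf "%.2f\n", (od - sp) * h }'
}

# Example: $0.0832/hr on-demand vs an assumed $0.055/hr SP rate, 730 hrs/month
monthly_savings 0.0832 0.055 730   # -> 20.59
```

Multiply that per-instance figure across a fleet to decide whether a 1-year or 3-year commitment is worth the reduced flexibility.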

4. Kill Orphaned Resources

Orphaned resources are cloud resources that no longer serve a purpose but keep generating charges. They're the "forgotten subscriptions" of AWS.

Common Orphaned Resources

```bash
# Find unattached EBS volumes (you're paying for storage!)
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].{ID:VolumeId,Size:Size,Type:VolumeType,Created:CreateTime}' \
  --output table

# Find unassociated Elastic IPs ($3.65/month each when idle!)
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==null].{IP:PublicIp,AllocationId:AllocationId}' \
  --output table

# Find old EBS snapshots (sorted by age)
aws ec2 describe-snapshots --owner-ids self \
  --query 'sort_by(Snapshots, &StartTime)[0:20].{ID:SnapshotId,Size:VolumeSize,Date:StartTime,Desc:Description}' \
  --output table

# Find unused NAT Gateways (these are expensive — ~$32/month + data processing)
aws ec2 describe-nat-gateways \
  --filter Name=state,Values=available \
  --query 'NatGateways[*].{ID:NatGatewayId,SubnetId:SubnetId,State:State}' \
  --output table
```
🚨 Before Deleting: Always check if a resource is referenced by infrastructure-as-code (Terraform, CloudFormation). Deleting managed resources manually will cause state drift.
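To put a dollar figure on the findings, feed the volume sizes from the query above into a small calculator. The $0.08/GB-month rate is the approximate us-east-1 gp3 price; adjust it for your region and volume type.

```shell
GP3_RATE=0.08   # assumed USD per GB-month (us-east-1 gp3)

orphan_cost() {
  # Reads "<volume-id> <size_gib>" lines on stdin, prints total monthly cost
  awk -v r="$GP3_RATE" '{ total += $2 * r } END { printf "%.2f\n", total }'
}

# Example with two unattached volumes (100 GiB + 500 GiB):
printf '%s\n' "vol-0aaa 100" "vol-0bbb 500" | orphan_cost   # -> 48.00
```

Seeing "$48/month for storage nobody is using" tends to get cleanup tickets prioritized faster than a raw volume list.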

5. Optimize S3 Storage with Intelligent-Tiering

S3 costs sneak up on you, especially if you have large datasets that are rarely accessed. S3 Intelligent-Tiering automatically moves objects between access tiers based on usage patterns — at no retrieval cost.

How It Works

  • Frequent Access — default tier, standard S3 pricing
  • Infrequent Access — objects not accessed for 30 days, ~40% cheaper
  • Archive Instant Access — 90 days, ~68% cheaper
  • Archive Access — 90-180 days, ~71% cheaper (optional, minutes to restore)
  • Deep Archive — 180+ days, ~95% cheaper (optional, hours to restore)
```bash
# Apply Intelligent-Tiering to an existing bucket via lifecycle policy
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-data-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "IntelligentTieringRule",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Transitions": [
          {
            "Days": 0,
            "StorageClass": "INTELLIGENT_TIERING"
          }
        ]
      }
    ]
  }'

# Enable optional archive tiers
aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket my-data-bucket \
  --id "ArchiveConfig" \
  --intelligent-tiering-configuration '{
    "Id": "ArchiveConfig",
    "Status": "Enabled",
    "Tierings": [
      { "AccessTier": "ARCHIVE_ACCESS", "Days": 90 },
      { "AccessTier": "DEEP_ARCHIVE_ACCESS", "Days": 180 }
    ]
  }'
```

6. Shut Down Dev/Staging Environments After Hours

Your development and staging environments probably run 24/7, but your team only works 8-10 hours a day. That's 60-70% waste.

Automated Scheduling with AWS Instance Scheduler

AWS provides a free solution called Instance Scheduler on AWS that can start/stop EC2 and RDS instances on a schedule:

```yaml
# Example: CloudFormation tag-based schedule
# Tag your dev instances with: Schedule = office-hours

# Instance Scheduler configuration
Periods:
  - Name: office-hours
    BeginTime: "08:00"
    EndTime: "20:00"
    WeekDays: mon-fri

# Savings calculation:
# t3.xlarge on-demand: $0.1664/hr
# Running 24/7: $0.1664 × 730 = $121.47/month
# Running office hours only: $0.1664 × 260 = $43.26/month
# Savings: $78.21/month per instance (64%)
```

Simple Bash Script Alternative

```bash
#!/bin/bash
# stop-dev-instances.sh — run via cron or EventBridge at 8pm

INSTANCE_IDS=$(aws ec2 describe-instances \
  --filters "Name=tag:Environment,Values=dev,staging" \
             "Name=instance-state-name,Values=running" \
  --query 'Reservations[*].Instances[*].InstanceId' \
  --output text)

if [ -n "$INSTANCE_IDS" ]; then
  echo "Stopping dev instances: $INSTANCE_IDS"
  aws ec2 stop-instances --instance-ids $INSTANCE_IDS
fi

# For RDS
DEV_DBS=$(aws rds describe-db-instances \
  --query 'DBInstances[?TagList[?Key==`Environment` && Value==`dev`]].DBInstanceIdentifier' \
  --output text)

for db in $DEV_DBS; do
  echo "Stopping RDS: $db"
  aws rds stop-db-instance --db-instance-identifier $db
done
```
💡 Pro Tip: Use Amazon EventBridge Scheduler instead of cron. It's serverless, costs nothing for this use case, and supports IAM roles natively.
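The savings arithmetic from the schedule example above can be double-checked with a small helper; the rate and hour counts are the same illustrative t3.xlarge figures (730 hours/month 24/7 vs roughly 260 hours on a weekday 08:00-20:00 schedule).

```shell
schedule_savings() {
  # args: hourly_rate hours_always_on hours_scheduled -> monthly savings in USD
  awk -v r="$1" -v all="$2" -v sch="$3" \
    'BEGIN { printf "%.2f\n", r * (all - sch) }'
}

schedule_savings 0.1664 730 260   # -> 78.21
```

Run it with your own instance rates to estimate what a scheduler would save before you deploy one.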

7. Use Spot Instances for Fault-Tolerant Workloads

Spot Instances offer up to 90% discount compared to on-demand prices. The catch? AWS can reclaim them with a 2-minute warning. But for many workloads, that's perfectly fine.

Ideal Use Cases for Spot

  • CI/CD pipelines — build jobs are ephemeral by nature
  • Data processing — batch ETL, Spark/EMR jobs
  • Testing environments — load tests, integration test suites
  • Containerized workloads — ECS/EKS with multiple task replicas
  • Machine learning training — checkpoint and resume
```bash
# Check current Spot prices for your desired instance type
aws ec2 describe-spot-price-history \
  --instance-types t3.large m5.large c5.large \
  --product-descriptions "Linux/UNIX" \
  --start-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --query 'SpotPriceHistory[*].{Type:InstanceType,AZ:AvailabilityZone,Price:SpotPrice}' \
  --output table

# Example output:
# t3.large  | us-east-1a | 0.0250  (on-demand: $0.0832 → 70% savings)
# m5.large  | us-east-1b | 0.0310  (on-demand: $0.0960 → 68% savings)
# c5.large  | us-east-1a | 0.0280  (on-demand: $0.0850 → 67% savings)
```
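The discount percentages in that example output follow directly from the prices; a one-line helper makes the calculation explicit so you can compare any pair of rates:

```shell
spot_discount() {
  # args: on_demand_price spot_price -> percent saved, rounded to whole number
  awk -v od="$1" -v sp="$2" 'BEGIN { printf "%.0f\n", (od - sp) / od * 100 }'
}

spot_discount 0.0832 0.0250   # t3.large -> 70
spot_discount 0.0960 0.0310   # m5.large -> 68
spot_discount 0.0850 0.0280   # c5.large -> 67
```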

[Image: Analytics dashboard representing AWS cost monitoring and optimization. Source: Luke Chesser via Unsplash]

8. Implement Cost Allocation Tags

You can't optimize what you can't measure. Cost allocation tags let you break down your AWS bill by team, project, environment, or any other dimension that matters to your organization.

Essential Tags to Implement

```bash
# Recommended tagging strategy
aws ec2 create-tags --resources i-0abc123def456 --tags \
  Key=Environment,Value=production \
  Key=Team,Value=backend \
  Key=Project,Value=payments-api \
  Key=CostCenter,Value=engineering \
  Key=Owner,Value=[email protected] \
  Key=ManagedBy,Value=terraform

# Enforce tags with AWS Organizations SCP (Service Control Policy)
# This prevents creating resources without required tags
```

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireTags",
      "Effect": "Deny",
      "Action": [
        "ec2:RunInstances",
        "rds:CreateDBInstance",
        "s3:CreateBucket"
      ],
      "Resource": "*",
      "Condition": {
        "Null": {
          "aws:RequestTag/Environment": "true",
          "aws:RequestTag/Team": "true",
          "aws:RequestTag/Project": "true"
        }
      }
    }
  ]
}
```
📊 Important: After creating tags, you must activate them as cost allocation tags in the Billing console. Tags won't appear in Cost Explorer until activated.
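The same "no tag, no deploy" rule can also be enforced client-side as a pre-deploy gate in CI, catching missing tags before the SCP denies the API call. A minimal sketch: the required keys match the strategy above, and the function name is made up for illustration.

```shell
# Required tag keys, matching the SCP above (adjust to your policy)
REQUIRED_TAGS="Environment Team Project"

check_tags() {
  # args: tags as Key=Value pairs; fails if any required key is missing
  for req in $REQUIRED_TAGS; do
    found=0
    for tag in "$@"; do
      case "$tag" in "$req"=*) found=1 ;; esac
    done
    if [ "$found" -eq 0 ]; then
      echo "missing required tag: $req" >&2
      return 1
    fi
  done
  echo "all required tags present"
}

check_tags Environment=production Team=backend Project=payments-api
# all required tags present
```

Wiring this into the deploy pipeline gives developers an immediate, readable error instead of a generic AccessDenied from the SCP.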

9. Set Up AWS Budgets and Alerts

Prevention is cheaper than cure. AWS Budgets lets you set spending thresholds and get notified before costs spiral out of control.

```bash
# Create a monthly budget with alerts at 80% and 100%
aws budgets create-budget --account-id 123456789012 \
  --budget '{
    "BudgetName": "Monthly-Total",
    "BudgetLimit": { "Amount": "5000", "Unit": "USD" },
    "BudgetType": "COST",
    "TimeUnit": "MONTHLY"
  }' \
  --notifications-with-subscribers '[
    {
      "Notification": {
        "NotificationType": "ACTUAL",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80,
        "ThresholdType": "PERCENTAGE"
      },
      "Subscribers": [
        { "SubscriptionType": "EMAIL", "Address": "[email protected]" },
        { "SubscriptionType": "SNS", "Address": "arn:aws:sns:us-east-1:123456789012:billing-alerts" }
      ]
    },
    {
      "Notification": {
        "NotificationType": "FORECASTED",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 100,
        "ThresholdType": "PERCENTAGE"
      },
      "Subscribers": [
        { "SubscriptionType": "EMAIL", "Address": "[email protected]" }
      ]
    }
  ]'
```
⚠️ Note: AWS Budgets alerts are not real-time — there can be a delay of up to 24 hours. For real-time anomaly detection, consider AWS Cost Anomaly Detection, which uses ML to identify unexpected spending patterns.

10. Bonus Tips: The Low-Hanging Fruit

Here are additional quick wins that are often overlooked:

Data Transfer Costs

  • Use VPC Endpoints for S3 and DynamoDB — avoid NAT Gateway data processing charges ($0.045/GB)
  • Keep resources in the same Availability Zone when possible — cross-AZ transfer costs $0.01/GB
  • Use CloudFront — data transfer from CloudFront to the internet is cheaper than from EC2 directly
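Per-GB charges look tiny until multiplied by real traffic. Using the rates cited above ($0.045/GB NAT processing, $0.01/GB cross-AZ), a small calculator shows what 2 TB/month costs through each path:

```shell
transfer_cost() {
  # args: gb_per_month rate_per_gb -> monthly cost in USD
  awk -v gb="$1" -v r="$2" 'BEGIN { printf "%.2f\n", gb * r }'
}

transfer_cost 2000 0.045   # NAT Gateway processing for 2 TB -> 90.00
transfer_cost 2000 0.01    # cross-AZ transfer for the same traffic -> 20.00
```

A VPC Endpoint for S3 removes the $90 NAT processing charge entirely for that traffic, which is why it's usually the first data-transfer fix to apply.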

RDS Optimization

  • Use Aurora Serverless v2 for variable workloads — scales down to 0.5 ACU when idle
  • Delete old manual RDS snapshots — automated snapshots are free, manual ones aren't
  • Consider Aurora I/O-Optimized if I/O costs exceed 25% of your database bill
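The 25% rule of thumb is easy to check against your own bill; this sketch (with made-up figures) reports the I/O share of total database spend:

```shell
io_share() {
  # args: io_cost total_cost -> I/O as a whole-number percent of the bill
  awk -v io="$1" -v tot="$2" 'BEGIN { printf "%.0f\n", io / tot * 100 }'
}

# Example: $320 of I/O charges on a $1,000 monthly Aurora bill
share=$(io_share 320 1000)
if [ "$share" -gt 25 ]; then
  echo "I/O is ${share}% of the bill -> evaluate Aurora I/O-Optimized"
fi
# I/O is 32% of the bill -> evaluate Aurora I/O-Optimized
```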

Lambda & Serverless

  • Optimize Lambda memory allocation — more memory = more CPU = faster execution = less cost in many cases
  • Use Graviton (ARM) processors — 20% cheaper, often 20% faster for Lambda and EC2
  • Enable Lambda Provisioned Concurrency only for latency-sensitive functions, not all of them
```bash
# Use AWS Lambda Power Tuning to find optimal memory
# https://github.com/alexcasalboni/aws-lambda-power-tuning

# Check if Graviton instances are available for your workload
aws ec2 describe-instance-types \
  --filters "Name=processor-info.supported-architecture,Values=arm64" \
  --query 'InstanceTypes[?starts_with(InstanceType, `t4g`) || starts_with(InstanceType, `m7g`)].{Type:InstanceType,vCPUs:VCpuInfo.DefaultVCpus,Memory:MemoryInfo.SizeInMiB}' \
  --output table
```

Building a Cost Optimization Culture

The most effective cost optimization isn't a one-time cleanup — it's a culture shift. Here's how to make it stick:

  1. Weekly cost reviews — spend 15 minutes reviewing the Cost Explorer dashboard with your team
  2. Cost ownership — each team should own and be accountable for their cloud spend
  3. Right-size before scaling — make it a checklist item in your deployment process
  4. Automate cleanup — schedule scripts to find and alert on orphaned resources
  5. Tag everything — no tag, no deploy. Enforce it with SCPs
💡 Final Tip: Start with the biggest line items in your bill. Optimizing a $2,000/month EC2 fleet by 30% saves more than eliminating a $50/month S3 bucket entirely. Focus on impact, not completeness.
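A trivial way to apply "focus on impact": sort the line items from Cost Explorer by monthly cost and start the weekly review at the top. The sample figures below match the example output from section 1.

```shell
# Reads "<service> <monthly_cost>" lines, largest spend first
printf '%s\n' \
  "S3 156" \
  "EC2 1245" \
  "RDS 890" \
  "NATGateway 312" | sort -k2 -rn | head -3
# EC2 1245
# RDS 890
# NATGateway 312
```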

Conclusion

Reducing your AWS bill doesn't require a complete architecture overhaul. By systematically auditing usage, right-sizing instances, committing to Savings Plans, cleaning up orphaned resources, and automating schedules, most teams can reduce their cloud spend by 30-50%.

The key is to make cost optimization a habit, not a project. Set up your tags, configure your budgets, schedule your cleanups, and review your spending regularly. Your CFO will thank you.
