It started with a bill. Not a catastrophic one — just the quiet kind that shows up at the end of the month and makes you go wait, what am I actually paying for? I had a side project running. Nothing fancy. A React frontend, a backend API, Postgres. The kind of thing you spin up in an afternoon. And somehow, by following every “best practice” guide I could find, I’d managed to rack up $60 a month on an app with maybe twelve users — half of whom were me testing things.
So I sat down and actually thought through what I needed versus what I was paying for. Turns out those two things were very different lists.
The textbook architecture will hurt you
Here’s what the AWS docs will lead you to if you’re not careful:
| Service | Monthly Cost |
|---|---|
| RDS db.t4g.micro | ~$15 |
| EC2 t3.small | ~$17 |
| Application Load Balancer | ~$16-24 |
| S3 + CloudFront | ~$5-10 |
| Total | $53-66+ |
The ALB alone costs $16.43/month just to exist — that’s before a single request hits it. It’s basically a tax you pay for SSL termination and routing, both of which can be done other ways for free. RDS is genuinely great software, but $13-15/month for automated backups and failover on a project that might not survive the month feels like jumping ahead a few chapters.
None of this is AWS being greedy. These are the right tools at scale. The problem is you’re not at scale yet.
What I actually run
The architecture I landed on splits into two clean parts. The frontend lives on AWS’s edge network and costs almost nothing. The backend, proxy, and database all run together on one instance.
| Layer | Tech | Cost |
|---|---|---|
| Frontend | S3 + CloudFront + ACM | ~$0.50-2/mo |
| Compute | t4g.small | $0 (free trial through Dec 2026) |
| Orchestration | ECS on EC2 | $0 |
| Reverse proxy + SSL | Traefik + Let’s Encrypt | $0 |
| Database | Postgres on EBS | ~$1.20/mo |
| Static IP | Elastic IP | ~$3.60/mo ⚠️ |
Under $7 a month. Let me walk through why each piece is the way it is.
Frontend: S3 + CloudFront + ACM
If your frontend is a static build — React, Vue, Next.js export, whatever — you don’t need a server. Full stop.
S3 holds the files. CloudFront puts them on AWS’s global edge network, so someone loading your app from Tokyo isn’t hitting a bucket in Mumbai. ACM gives you a free, auto-renewing SSL certificate that attaches directly to the CloudFront distribution. The whole layer costs between $0.50 and $2/month, and CloudFront’s free tier (1 TB transfer, 10M requests/month) means you won’t see a meaningful bill until you’re well past the side project stage.
One thing that tripped me up: for CloudFront, your ACM certificate must be in us-east-1 — regardless of where everything else lives. Don’t ask me why. Just do it in N. Virginia and move on.
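In Terraform, the us-east-1 requirement means a second provider alias that exists solely for the certificate. A minimal sketch — the region and domain here are illustrative placeholders, not mine:

```hcl
# Default provider: wherever the rest of your stack lives (illustrative).
provider "aws" {
  region = "ap-south-1"
}

# Second provider pinned to us-east-1, used only for the CloudFront cert.
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

# CloudFront only accepts ACM certificates issued in us-east-1.
resource "aws_acm_certificate" "frontend" {
  provider          = aws.use1
  domain_name       = "app.example.com" # placeholder domain
  validation_method = "DNS"
}
```

Everything else in the stack keeps the default provider; only the certificate resource carries the `provider = aws.use1` override.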
Backend: t4g.small + ECS as the orchestration layer
The instance I use is a t4g.small — AWS’s Graviton2-based (Arm) instance, 2 vCPUs, 2 GiB RAM. On-demand it’s $0.0168/hr, about $12/month. But AWS has been running a free trial on t4g since 2020, renewing it quietly every December, and it currently runs through December 31, 2026 — 750 free hours/month, available to all AWS customers. They’ve done it four years running. Check the T4g page each December and plan accordingly.
On top of that instance, I use Amazon ECS in EC2 launch mode — and this part is worth understanding properly, because most people only know ECS through Fargate.
ECS itself is a free service. It’s an orchestration layer — it manages your containers, handles restarts if something crashes, pulls new images when you deploy, and gives you a structured way to define what runs where. When you use Fargate, you’re paying for Fargate’s compute. When you use EC2 launch mode, you’re using your own instance and ECS is just the free manager sitting on top of it. Zero additional cost, real operational discipline.
So on that one t4g.small, ECS is managing three containers: my backend API, a Postgres database, and Traefik as the reverse proxy. ECS keeps them running, restarts them if they die, and handles rolling deploys when I push a new image. It’s the kind of thing you’d otherwise build with systemd unit files and shell scripts — except it’s already built, it’s free, and it works.
```json
{
  "family": "app-task",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "traefik",
      "image": "traefik:v3",
      "memory": 128,
      "portMappings": [{ "containerPort": 443, "hostPort": 443 }]
    },
    { "name": "backend", "image": "your-ecr-repo/api:latest", "memory": 512 },
    {
      "name": "postgres",
      "image": "postgres:16-alpine",
      "memory": 512,
      "mountPoints": [{ "sourceVolume": "pgdata", "containerPath": "/var/lib/postgresql/data" }]
    }
  ],
  "volumes": [{ "name": "pgdata", "host": { "sourcePath": "/mnt/ebs/pgdata" } }]
}
```
Replacing the $16 load balancer with Traefik
An ALB is $16.43/month just in base fees before traffic. For a side project, you’re paying that mostly to get SSL termination and routing — both of which Traefik does for free.
Traefik runs as a container, talks to Let’s Encrypt, gets your certificate, renews it automatically, and routes incoming traffic to your backend based on your config. Zero cost. And honestly, less operational overhead than messing with ACM through an ALB once you’ve got the Traefik config down.
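Here’s roughly what that static config looks like — a sketch, with the email and storage path as placeholders you’d swap for your own:

```yaml
# traefik.yml — static config sketch; email and storage path are placeholders
entryPoints:
  web:
    address: ":80"
    http:
      redirections:           # force all plain HTTP over to HTTPS
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com            # placeholder
      storage: /letsencrypt/acme.json   # must persist across container restarts
      httpChallenge:
        entryPoint: web
```

The one gotcha: mount `/letsencrypt` onto the host (or the EBS volume), or every container restart re-requests certificates and you’ll hit Let’s Encrypt rate limits.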
The honest tradeoff: this isn’t multi-AZ. If a container crashes, ECS restarts it — usually under a minute. If the instance itself goes down, the backend is down until EC2 brings it back. For a side project that’s fine. For anything with paying customers and a real SLA, you’ll want the ALB eventually. But by then you’ll have the revenue to justify it.
Database: Postgres on EBS, not RDS
RDS is great software. It’s also $13-15/month before storage, for automated failover and read replicas you probably don’t need yet.
Running Postgres in a container works fine. There’s exactly one thing you must get right: mount the data directory to an EBS volume. The container is disposable. The volume isn’t. If you let Postgres write to the container filesystem and ECS restarts it, you’ve lost your data. Point it at /mnt/ebs/pgdata on an EBS gp3 volume and you’re safe. EBS gp3 is $0.08/GB/month — 15 GB runs $1.20.
Add a nightly pg_dump to S3 for backups. A few cents. Now you have something reasonably durable at a fraction of the RDS price.
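That nightly dump is a few lines of shell on the instance. A sketch — the container name, database credentials, and bucket are all placeholders:

```sh
#!/bin/sh
# nightly-backup.sh — sketch; names and bucket below are placeholders
set -eu

STAMP=$(date +%F)
BUCKET=s3://your-backup-bucket   # placeholder

# Dump from inside the running postgres container, compress, ship to S3.
docker exec postgres pg_dump -U app appdb | gzip \
  | aws s3 cp - "$BUCKET/pgdump-$STAMP.sql.gz"
```

Wire it to cron with something like `0 3 * * * /opt/app/nightly-backup.sh`, and use an S3 lifecycle rule to expire old dumps instead of deleting them in the script — that keeps the script dumb and the storage bill near zero.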
The Elastic IP thing nobody warns you about
As of February 2024, AWS charges $0.005/hr (~$3.60/month) for every public IPv4 address — including ones attached to running instances. It used to be free. It isn’t anymore. You need a stable IP for DNS, so this isn’t avoidable — it’s just the new reality. On a t4g.small during the free trial, the Elastic IP ends up being your biggest line item. Still worth it.
CI/CD: just use GitHub Actions
GitHub Actions is free for public repos and gives you 2,000 minutes/month on private ones. That’s more than enough. Push to main, Actions builds your Docker image, pushes it to ECR, and calls aws ecs update-service --force-new-deployment. ECS handles the rest — pulls the new image, drains the old container, starts the new one. The frontend deploys in parallel via s3 sync and a CloudFront invalidation.
```yaml
# Assumes AWS credentials are configured earlier in the workflow
# (e.g. via aws-actions/configure-aws-credentials).
jobs:
  deploy-frontend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - run: aws s3 sync dist/ s3://$BUCKET --delete
      - run: aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths "/*"
  deploy-backend:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t $ECR_REPO:latest . && docker push $ECR_REPO:latest
      - run: aws ecs update-service --cluster $CLUSTER --service $SERVICE --force-new-deployment
```
No Jenkins. No CircleCI bill. No pipeline infrastructure to maintain.
Let AI write the Terraform, but tell it your constraints
Write your infrastructure as Terraform — not because it’s trendy, but because clicking through the AWS console is how you end up with a setup you can’t reproduce or explain six months later.
Use an AI to draft it. But be specific, because if you ask for “Terraform for a 3-tier app,” you’ll get RDS, an ALB, a NAT gateway, and a bill that looks exactly like the one that started this whole exercise. Give it your actual constraints:
“Write Terraform for a single t4g.small EC2 instance running ECS in EC2 launch mode. VPC with one public subnet, EBS volume for Postgres at /mnt/ebs/pgdata, Elastic IP, security group for 80/443, S3 + CloudFront for the frontend. No ALB. No RDS. No NAT gateway.”
That paragraph of context is worth $40/month in avoided services. Also ask for lifecycle { prevent_destroy = true } on the EBS volume — a terraform destroy that wipes your database is a very specific kind of bad day.
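That guard on the volume looks like this — size and availability zone are illustrative, and the AZ must match wherever your instance runs:

```hcl
resource "aws_ebs_volume" "pgdata" {
  availability_zone = "ap-south-1a" # illustrative; must match the instance's AZ
  size              = 15            # GiB
  type              = "gp3"

  # Refuse to let `terraform destroy` (or a resource replacement)
  # delete the volume that holds the database.
  lifecycle {
    prevent_destroy = true
  }
}
```

With this in place, Terraform errors out instead of destroying the volume; you have to deliberately remove the lifecycle block before it will touch your data.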
When to upgrade
This setup has real limits. Single point of failure on the backend. No automated database failover. And a single Traefik container on one box is no substitute for a managed, multi-AZ load balancer. When the traffic gets real and the SLA expectations get serious, these will start to matter.
The exit is straightforward: move to RDS when you want managed backups and Multi-AZ. Add the ALB when you need multiple backend instances. Switch to Fargate when you’d rather not think about EC2 at all. The value of running the lean version first is that you’ll understand exactly what you’re buying when you upgrade — and you won’t be buying it before you need it.
Until then — ship the thing. Your wallet will thank you.
Prices verified against AWS pricing pages · April 2026 · t4g.small free trial valid through December 31, 2026