Most AWS frontend journeys begin with the same honest instinct: keep the first version simple. A build pipeline pushes static assets, S3 holds the files, CloudFront handles TLS and caching, and the team ships. For a genuinely static site, that decision is still excellent. It is cheap to operate, easy to explain, and fast all over the world.
The problem appears later. The frontend starts asking for behavior that is no longer static: deeper routing, request-time rendering, image or runtime optimizations, environment-specific configuration, or production controls that need to look like the rest of the platform. At that point, the hosting model is no longer a packaging detail. It becomes architecture.
The mistake is usually not starting with CloudFront + S3. The mistake is keeping a static-hosting model after the frontend begins to need request-time behavior, runtime configuration, or deployment controls that look like the rest of the production platform.
The real question: can this frontend be fully generated at build time and served as files, or does it now need server behavior? That single question usually points you to the right AWS model faster than any brand comparison does.
My practical framing is simple. Use CloudFront + S3 when the app is truly static. Use Amplify Hosting when you want a managed path to modern frontend behavior. Use ECS on Fargate when the frontend has crossed the line into being an application service.
Option 1 — CloudFront + S3
Why teams start here
CloudFront + S3 is still the cleanest answer for static frontends. That includes documentation portals, marketing sites, lightweight SPAs, and Next.js projects that can be exported as static output via the output: 'export' setting in next.config.js. Operationally, this model stays small: no app server fleet, no container runtime, and no load balancer in the middle.
It also scales in the most comfortable way. Most requests are served from the edge, the origin is just a bucket, and your deployment story is mostly about building assets and invalidating cache when needed.

Figure 1 — Build artifacts flow into S3; users hit CloudFront; the edge cache absorbs most traffic.

Figure 2 — S3 is only contacted when the edge cache misses.
Where the model starts to hurt
This is where many teams get tripped up. They do not make a bad initial choice; they simply keep a static model after the app stops behaving like a static site. Once request-time behavior enters the picture, CloudFront + S3 begins to accumulate workarounds.
Two details matter more than most teams expect. First, S3 website endpoints support only HTTP, not HTTPS. Second, if you use a website endpoint as the CloudFront origin to get website-style behavior such as automatic index-document handling, CloudFront treats it as a custom origin, and Origin Access Control (OAC) is not available for custom origins.
Deep-link routing for SPAs also needs care. Since S3 only serves objects that actually exist, a direct browser request to a route such as /dashboard or /products/123 may not map to a real object in the bucket and may return a 403 or 404. For SPA-style routing, this is usually handled with CloudFront custom error responses that return /index.html for 403 and 404 responses, or with a CloudFront Function that rewrites clean URLs to the appropriate index.html path.
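The CloudFront Function variant of that rewrite is short. A sketch of a viewer-request function for clean URLs — the exact rules depend on how your build lays out the bucket, and the heuristic here (no file extension means it is a route) is an assumption you should check against your own asset paths:

```javascript
// Viewer-request CloudFront Function: rewrite clean URLs to the
// index.html object that actually exists in the bucket.
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  if (uri.endsWith('/')) {
    // /products/ -> /products/index.html
    request.uri = uri + 'index.html';
  } else if (!uri.includes('.')) {
    // /products/123 -> /products/123/index.html
    // Assumption: URIs without a file extension are app routes.
    request.uri = uri + '/index.html';
  }
  // Requests for real assets (/app.js, /logo.svg) pass through untouched.
  return request;
}
```

Attach it to the distribution's default cache behavior on the viewer-request event; unlike the custom-error-response approach, this preserves real 404s for genuinely missing assets.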
That works for static sites and SPAs, but it is still a routing workaround, not a runtime application model. If routes require request-time rendering, server-side data fetching, middleware, personalization, or framework runtime behavior, CloudFront + S3 starts to become a poor fit.
Option 2 — Amplify Hosting
The middle ground many teams should evaluate earlier
Amplify is the answer when CloudFront + S3 feels too static, but running containers for a frontend still feels unnecessary. It gives you Git-based deployments, a managed CDN, and managed SSR support without asking the team to own the underlying runtime in the same way ECS does.
That makes Amplify particularly attractive for modern frontend teams: product keeps a fast deploy loop, QA gets predictable branch-based delivery, and engineering can support SSR behavior without building a bespoke hosting stack first.

Figure 3 — Git push triggers a managed build; the CDN routes requests to either static assets or managed SSR compute.

Figure 4 — Static routes are served from cache; SSR routes are rendered on managed compute.
Why this is not just "CloudFront + S3 with nicer buttons"
Amplify is opinionated in a useful way. AWS documents managed SSR support for modern Next.js versions, automatic framework detection, and a deployment model that is closer to a frontend platform than a raw infrastructure stack. For many teams, that is exactly the point.
The tradeoff is that you are accepting the boundaries of a managed service. Server-side environment variables, for example, are not exposed automatically the way client-side ones are; they need to be set up explicitly via the Amplify console or build spec. And new framework features sometimes ship to self-hosted Next.js before they reach Amplify. If your goal is to minimize hosting decisions and keep the frontend delivery experience as product-friendly as possible, Amplify is often the cleanest middle path.
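One concrete example of working inside those boundaries: to make a console-defined variable visible to SSR code at request time, the commonly documented approach is to write it into `.env.production` during the build. A sketch of an amplify.yml doing that — `MY_API_SECRET` is a hypothetical variable name, and the exact build commands depend on your project:

```yaml
# amplify.yml — sketch of exposing a server-side environment variable
# to the Next.js SSR bundle. MY_API_SECRET is hypothetical; define it
# in the Amplify console first.
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        # Copy the variable into .env.production so server code can read
        # it via process.env at request time, not just at build time.
        - env | grep -e MY_API_SECRET >> .env.production
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```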
Option 3 — ECS on Fargate
When the frontend becomes part of the application platform
There is a moment when the right question stops being "How do we host this site?" and becomes "How do we run this service?" That is the handoff point to ECS on Fargate.
This model makes sense when the frontend needs runtime secrets, deliberate health checks, explicit rollout control via mechanisms like the ECS deployment circuit breaker, target-tracking autoscaling, private networking, and the same operational standards as the rest of your production estate. It is not the lowest-effort option. It is the highest-control option.
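As one example of that rollout control, the deployment circuit breaker is a per-service setting. A sketch of enabling it from the CLI, with automatic rollback so a bad image never fully replaces the healthy task set (cluster and service names here are hypothetical):

```shell
# Enable the ECS deployment circuit breaker with automatic rollback.
aws ecs update-service \
  --cluster prod-cluster \
  --service frontend \
  --deployment-configuration \
    "deploymentCircuitBreaker={enable=true,rollback=true},maximumPercent=200,minimumHealthyPercent=100"
```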

Figure 5 — The pipeline pushes images to ECR and updates the ECS service; the ALB routes user traffic to healthy tasks.

Figure 6 — Deployment phase rolls out new tasks behind health checks; steady-state requests reach only healthy targets.
The thing Next.js teams underestimate in containers
When teams move Next.js into containers, they usually think first about deployment. They should also think about cache behavior. Self-hosted Next.js is straightforward on a single instance, but multi-instance behavior is a design topic, not an implementation detail. Once multiple tasks can serve the same content, the Next.js incremental cache (used by ISR and the data cache) needs an explicit shared cache handler — a Redis or S3 backend, for example — instead of the per-task default. Otherwise you get inconsistent pages between replicas.
That is why ECS is best reserved for frontends that have clearly joined the application platform. Once you truly need runtime control, the extra operational responsibility is justified. Before that point, it is often more platform than the problem requires.
A simple decision matrix
| Model | Best when | Biggest strength | Main tradeoff | Default advice |
|---|---|---|---|---|
| CloudFront + S3 | The app is truly static or can be fully generated at build time. | Very low ops, very fast, globally cached. | Starts to fight you once request-time behavior enters the picture. | Start here for static sites. |
| Amplify Hosting | You want modern frontend behavior without running containers. | Managed Git-based delivery with built-in SSR support. | Opinionated platform; some Next.js features land later. | Best middle ground. |
| ECS on Fargate | The frontend now behaves like an application service. | Full runtime, rollout, and scaling control. | Highest operational responsibility. | Use when the frontend joins the platform. |
What I would actually choose
If the frontend is still static, I would not move away from CloudFront + S3. It remains a strong design. If the team needs modern frontend behavior but wants to stay out of container operations, I would take Amplify seriously and early. If the frontend now needs to live under the same release, security, and observability standards as backend services, I would move it to ECS and treat it accordingly.
The best AWS frontend architecture is usually not the most powerful one. It is the one whose operating model still matches the app's actual behavior.
This article was originally published by DEV Community and written by Muhammad Ahmad Khan.