Simple and Cheap Scaling with AWS ELB and EC2 Auto Scaling | DOP-C01 Exam Answer


Question

Your web application's traffic consists of 10% writes and 90% reads.

You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group.

Your system becomes very expensive during large read-traffic spikes around certain news events, when many more people access your web application at the same time.

What is the simplest and cheapest way to reduce costs and scale with spikes like this?


Answer - C.

Use a CloudFront distribution to serve the heavy read traffic for your application.

You can create a zone apex alias record in Route53 that points to the CloudFront distribution.
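For illustration, here is a minimal boto3 sketch of such a zone apex alias record; the hosted zone ID, apex domain, and distribution domain below are placeholders, while the alias target uses CloudFront's fixed hosted zone ID.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values: replace with your hosted zone ID and distribution domain.
HOSTED_ZONE_ID = "Z1EXAMPLE"                      # your Route53 hosted zone
DISTRIBUTION_DOMAIN = "d1234abcd.cloudfront.net"  # your CloudFront distribution

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Point the zone apex at CloudFront",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",  # zone apex
                "Type": "A",
                "AliasTarget": {
                    # Fixed hosted zone ID used for all CloudFront distributions.
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": DISTRIBUTION_DOMAIN,
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```

Because this is an alias rather than a CNAME, the record is valid at the zone apex, where CNAMEs are not allowed.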

You can control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin.

Reducing the duration allows you to serve dynamic content.

Increasing the duration means your users get better performance because your objects are more likely to be served directly from the edge cache.

A longer duration also reduces the load on your origin.

For more information on CloudFront object expiration, please visit the below URL:

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
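As a hedged illustration of controlling that duration from the origin side, the sketch below assumes a simple Flask application running on the EC2 instances behind the ELB; the route and max-age value are hypothetical. CloudFront honors the Cache-Control header the origin returns, within the minimum and maximum TTL bounds of the cache behavior.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/articles/<article_id>")
def get_article(article_id):
    # Hypothetical handler: read-heavy content that tolerates brief staleness.
    response = jsonify({"id": article_id, "body": "..."})
    # CloudFront uses this header (within the cache behavior's Min/Max TTL
    # bounds) to decide how long to serve the object from the edge cache.
    response.headers["Cache-Control"] = "public, max-age=300"
    return response

if __name__ == "__main__":
    app.run()
```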

The best option to reduce costs and scale with spikes in read traffic is to use Amazon CloudFront, which is a Content Delivery Network (CDN) service provided by AWS. CloudFront can cache content at edge locations that are geographically closer to the users, reducing the load on the origin server and improving performance.

Option C is the correct answer: create a CloudFront distribution and direct Route53 to it. The ELB serves as the origin for the distribution, and cache behaviors can be specified to proxy-cache requests. Here is a detailed explanation of how this solution works:

  1. Create a CloudFront Distribution: First, create a CloudFront distribution in front of the ELB to cache the most frequently accessed content, such as images, videos, and static web pages. CloudFront has a global network of edge locations and caches content at the locations closest to your users, reducing latency and load on the origin server.

  2. Direct Route53 to CloudFront: Update the Route53 alias record to point to the CloudFront distribution instead of the ELB. Requests from users are then routed to the nearest CloudFront edge location, improving performance and reducing the load on the ELB and EC2 instances.

  3. Use ELB as an Origin for CloudFront: Use the ELB as the origin for the CloudFront distribution. Cache misses and non-cacheable requests are forwarded to the ELB, which distributes them across the EC2 instances behind it, so the content served to users stays current.

  4. Specify Cache Behaviors: Specify cache behaviors for the CloudFront distribution. This allows you to control how content is cached at the edge locations; for example, you can set different TTLs (Time To Live) for different types of content, or specify that certain requests should not be cached at all. (A boto3 sketch covering steps 1, 3, and 4 follows this list; step 2 was sketched earlier.)
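The following boto3 sketch ties steps 1, 3, and 4 together: it creates a distribution with the ELB as a custom origin and a default cache behavior whose TTLs proxy-cache the read traffic. The ELB DNS name and TTL values are illustrative assumptions, and a production distribution would typically need additional settings (TLS certificates, logging, extra cache behaviors).

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical origin: the DNS name of your existing ELB.
ELB_DNS_NAME = "my-web-elb-1234567890.us-east-1.elb.amazonaws.com"

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Cache heavy read traffic in front of the ELB",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "elb-origin",
                "DomainName": ELB_DNS_NAME,
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "match-viewer",
                },
            }],
        },
        # Step 4: the cache behavior controls how edge locations cache objects.
        "DefaultCacheBehavior": {
            "TargetOriginId": "elb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Legacy-style forwarding settings; newer setups can use
            # managed cache policies instead.
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "MinTTL": 0,        # allow "no caching" when the origin says so
            "DefaultTTL": 300,  # 5 minutes when the origin sends no header
            "MaxTTL": 3600,     # never cache longer than an hour
        },
    },
)
```

Once the distribution deploys, its domain name (for example, d1234abcd.cloudfront.net) is what the Route53 alias record from step 2 should point to.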

By implementing this solution, you can reduce the load on the EC2 instances and ELB, improve performance, and reduce costs. CloudFront is a cost-effective solution as it charges based on the amount of data transferred and the number of requests, and it has no minimum usage commitment.

Option A is not a suitable solution for this scenario because it involves asynchronously replicating common request responses into S3 objects, which would not be effective for dynamic content. It may also introduce additional latency for the requests that need to be redirected to S3.

Option B is not a good solution as it involves adding another layer to the system, which would increase the complexity and cost. It would also require additional effort to configure the routing and scaling policies.

Option D is not the best solution for this scenario as it only improves the performance of requests that can be served from the in-memory cache. It does not address the issue of scaling with spikes in read traffic, and it may even increase the load on the EC2 instances and ELB.