Migrating an On-Premises System to AWS: Tools & Techniques


Question

A large trading company is using an on-premises system to analyze trade data.

After the trading day closes, the data, including the day's transaction costs, execution reporting, and market performance, is sent to a Red Hat server that runs big data analytics tools for next-day trading predictions.

A bash script is used to configure resources and schedule when to run the data analytics workloads.

How should the on-premises system be migrated to AWS with the appropriate tools? (Select THREE)

Answers

A. Create an S3 bucket to store the trade data that is used for post-processing.
B. Send the trade data from various sources to a dedicated SQS queue.
C. Use AWS Batch to execute the bash script using a proper job definition.
D. Use AWS ECS to handle the big data analytics workloads.
E. Use CloudWatch Events to schedule data analytics jobs.

Explanations

Correct Answer: A, C, and E.

The on-premises system has three parts that need to be considered in the migration.

The first is a place to store the data that arrives from several sources.

The second is the bash script that configures resources and schedules the analytics jobs.

The third is the big data analytics workload itself.


Option A is CORRECT because S3 is an ideal place to store the trade data: it is highly available, durable, and cost-efficient.

Option B is incorrect because an SQS queue is not an appropriate place to store source data.

The trade data is large and needs a durable store such as S3; SQS messages are limited to 256 KB and are retained for at most 14 days.

Option C is CORRECT because AWS Batch is well suited to running the bash script as a job.

The AWS Batch scheduler evaluates when, where, and how to run the jobs submitted to a job queue.

Option D is incorrect because the compute resources should be provisioned and managed through AWS Batch rather than by operating ECS directly.

AWS Batch takes care of the container orchestration for its batch jobs on managed compute environments.

Option E is CORRECT because CloudWatch Events can be used to schedule and trigger AWS Batch jobs that perform the data analytics.


The on-premises system that analyzes trade data can be migrated to AWS using several appropriate tools. Here are the explanations for the three selected answers:

A. Create an S3 bucket to store the trade data that is used for post-processing. Storing trade data in an S3 bucket is a good option because S3 provides scalable, durable, and highly available object storage. This enables the big data analytics tools to access the data and perform post-processing. The data can also be secured with access control policies, and versioning protects against accidental overwrites or deletions.
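A minimal boto3 sketch of this part of the migration; the bucket name, region, local file path, and object key below are placeholders, not values from the question:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Hypothetical bucket name; S3 bucket names must be globally unique.
bucket = "example-trade-data-bucket"

# Create the bucket and enable versioning to protect against accidental overwrites.
s3.create_bucket(Bucket=bucket)
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload one day's trade data for post-processing (path and key are illustrative).
s3.upload_file(
    Filename="/data/trades/2024-01-15.csv",
    Bucket=bucket,
    Key="raw/2024-01-15/trades.csv",
)
```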

C. Use AWS Batch to execute the bash script using a proper job definition. AWS Batch is a fully managed service for running batch computing workloads at any scale. With AWS Batch, you define workloads using job definitions, submit them to a job queue, and the service runs them across a pool of compute resources such as EC2 instances. This option can be used to execute the bash script that configures resources and runs the data analytics workloads.
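A sketch of what that could look like with boto3, assuming a hypothetical container image that packages the analytics tools and bash script, a placeholder job queue name, and illustrative resource sizes:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Register a job definition that runs the analytics bash script inside a container.
# The image URI, script path, and resource sizes are placeholders.
job_def = batch.register_job_definition(
    jobDefinitionName="trade-analytics",
    type="container",
    containerProperties={
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/trade-analytics:latest",
        "command": ["bash", "/opt/scripts/run_analytics.sh"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "4"},
            {"type": "MEMORY", "value": "16384"},
        ],
    },
)

# Submit a job against an existing job queue (the queue name is a placeholder).
batch.submit_job(
    jobName="nightly-trade-analytics",
    jobQueue="trade-analytics-queue",
    jobDefinition=job_def["jobDefinitionArn"],
)
```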

E. Use CloudWatch Events to schedule data analytics jobs. CloudWatch Events (now part of Amazon EventBridge) lets you create rules that run on a schedule and trigger actions. You can use a scheduled rule to start the data analytics jobs at specific times or intervals, so the big data analytics tools can produce next-day trading predictions from the trade data collected during the previous day.
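A sketch of a scheduled rule that triggers the AWS Batch job after the trading day closes; the cron schedule, rule name, account ID, job queue ARN, and IAM role ARN are all placeholders, and the role must allow CloudWatch Events to call batch:SubmitJob:

```python
import boto3

events = boto3.client("events", region_name="us-east-1")

# Run every weekday at 23:00 UTC, shortly after the trading day closes
# (the schedule is illustrative).
events.put_rule(
    Name="nightly-trade-analytics",
    ScheduleExpression="cron(0 23 ? * MON-FRI *)",
    State="ENABLED",
)

# Point the rule at the AWS Batch job queue and job definition.
events.put_targets(
    Rule="nightly-trade-analytics",
    Targets=[
        {
            "Id": "trade-analytics-batch-job",
            "Arn": "arn:aws:batch:us-east-1:123456789012:job-queue/trade-analytics-queue",
            "RoleArn": "arn:aws:iam::123456789012:role/events-invoke-batch",
            "BatchParameters": {
                "JobDefinition": "trade-analytics",
                "JobName": "nightly-trade-analytics",
            },
        }
    ],
)
```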

D. Use AWS ECS to handle the big data analytics workloads is incorrect. ECS is a container orchestration service used to deploy, manage, and scale containerized applications; on its own it does not provide the job scheduling and managed compute provisioning that this scheduled, batch-oriented analytics workload needs, which is what AWS Batch supplies.

B. Send the trade data from various sources to a dedicated SQS queue is incorrect. Amazon Simple Queue Service (SQS) is a fully managed message queuing service for decoupling and scaling microservices, distributed systems, and serverless applications. It is not suitable for this use case: SQS is designed for transient messages (each up to 256 KB, retained for at most 14 days), not for durably storing large trade datasets, and it performs no analysis on the data.