Optimizing Costs in a Machine Learning Environment | Best Practices | DP-100 Exam

Best Practices for Optimizing Costs in a Machine Learning Environment

Question

You are tasked with setting up an ML environment for running experiments.

While configuring the compute environment for your experiments, your priority is to provision the necessary capacity while keeping costs as low as possible.

Which of the following is NOT a way to optimize costs?

Answers

Explanations


A. Develop code in a local, low-cost environment.

B. Use Azure Kubernetes Service.

C. Use managed computes that start automatically on demand and stop when not needed.

D. Set automatic scaling for computes, based on the workload.

Answer: B.

Option A is incorrect because developing code in your own local environment generates no extra cost, even for long execution times.

Option B is CORRECT because AKS is recommended for high-scale production deployments, not for cost-sensitive experimentation, so it is not the best way to keep costs low.

Option C is incorrect because in a cloud environment with a pay-as-you-go model, automatically starting and stopping computes is one of the best ways to prevent costs from "exploding".

Option D is incorrect because in a cloud environment you can set up compute clusters that provide the necessary compute power while scaling up and down automatically according to actual demand.

Reference:

In the context of setting up an ML environment for running experiments on Azure, optimizing costs involves minimizing the spend on compute resources while still ensuring that sufficient capacity is available for the experiments.

Let's consider each option and evaluate whether it helps in optimizing costs or not:

A. Develop code in a local, low-cost environment: Developing code in a local, low-cost environment can help minimize the costs associated with running experiments, as it eliminates the need to provision expensive compute resources on Azure. However, this approach has limitations, such as limited compute power, scalability, and access to Azure services, which might make it infeasible for some types of experiments. Therefore, this option is not ideal for all scenarios.

B. Use Azure Kubernetes Service: Azure Kubernetes Service (AKS) is a container orchestration service that helps to manage the deployment and scaling of containerized applications. While AKS can help optimize costs by allowing you to deploy containers on-demand, it may not be the best choice for experiments that require more significant compute resources since Kubernetes clusters can be expensive to run. Therefore, this option may not be the most cost-effective approach for running experiments with heavy computational requirements.

C. Use managed computes that start automatically on-demand and stop when not needed: Azure provides several options for managed computes that automatically start and stop based on demand. For instance, Azure Machine Learning compute clusters are managed computes that can scale based on workload and are ideal for running experiments. These clusters allow you to specify the maximum number of nodes to use, ensuring that you only pay for what you need. Therefore, this option can help optimize costs by ensuring that you only pay for the compute resources that you use.
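To make the pay-only-for-what-you-use point concrete, here is a minimal back-of-the-envelope sketch comparing an always-on node with a managed compute that starts on demand and stops when idle. The hourly rate and usage figures are illustrative assumptions, not actual Azure pricing.

```python
# Hypothetical cost comparison: an always-on node vs. a managed compute
# that starts on demand and deallocates when idle. All figures below are
# assumed for illustration, not real Azure prices.

HOURLY_RATE = 1.20          # assumed cost of one node per hour (USD)
HOURS_PER_MONTH = 730       # average hours in a month
BUSY_HOURS_PER_MONTH = 40   # hours the experiments actually run


def always_on_cost(rate: float, hours: float = HOURS_PER_MONTH) -> float:
    """Monthly cost of a node that is never deallocated."""
    return rate * hours


def on_demand_cost(rate: float, busy_hours: float) -> float:
    """Monthly cost when the compute starts on demand and stops when
    idle, so only the busy hours are billed."""
    return rate * busy_hours


always_on = always_on_cost(HOURLY_RATE)
on_demand = on_demand_cost(HOURLY_RATE, BUSY_HOURS_PER_MONTH)
print(f"always-on: ${always_on:.2f}/month, on-demand: ${on_demand:.2f}/month")
```

Under these assumed numbers the on-demand compute costs a small fraction of the always-on node, which is exactly why Azure Machine Learning compute clusters that scale to zero when idle are the recommended choice for experimentation.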

D. Set automatic scaling for computes, based on the workload: Setting automatic scaling for computes can help optimize costs by ensuring that you only use the resources you need. For example, if you set the minimum number of nodes to 1 and the maximum to 10, the cluster will scale up and down based on the workload. This approach ensures that you are not over-provisioning resources and paying for idle resources. Therefore, this option can help optimize costs by ensuring that you only pay for the compute resources that you need.
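The min-1/max-10 scaling described above can be sketched as a toy simulation. This is an illustration of the clamping idea, not the actual Azure ML scaling algorithm, and the hourly workload trace is an assumption.

```python
# Toy autoscaling sketch (illustrative only, not Azure ML's real logic):
# each hour the cluster sizes itself to the number of queued jobs,
# clamped between min_nodes and max_nodes, and we total the node-hours
# billed versus always running the maximum.

def scale(queued_jobs: int, min_nodes: int, max_nodes: int) -> int:
    """Nodes to run this hour: one per queued job, within cluster limits."""
    return max(min_nodes, min(queued_jobs, max_nodes))


def billed_node_hours(workload, min_nodes: int = 1, max_nodes: int = 10) -> int:
    """Total node-hours for an hourly workload trace."""
    return sum(scale(jobs, min_nodes, max_nodes) for jobs in workload)


workload = [0, 2, 8, 15, 3, 0]               # queued jobs per hour (assumed)
auto = billed_node_hours(workload)           # 1+2+8+10+3+1 = 25 node-hours
fixed = len(workload) * 10                   # always at max: 60 node-hours
print(f"autoscaled: {auto} node-hours vs fixed: {fixed} node-hours")
```

Note how the hour with 15 queued jobs is capped at 10 nodes and idle hours fall back to the minimum of 1, so you never pay for more than the configured maximum nor for a fully provisioned cluster sitting idle.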

In conclusion, option A is a valid approach but may not be suitable for all scenarios, while options C and D are both good ways to optimize costs when configuring the compute environment for experiments. Option B may not be the most cost-effective approach for experiments with heavy computational requirements.