AutoML Controls for Cost and Time Optimization in ML Studio - DP-100 Exam Answer

Question

You need to run several AutoML experiments while keeping your costs under control and minimizing running times.

ML Studio provides controls to achieve these goals.

Which two AutoML controls should you use?

Answers

Explanations


A. Set exit criterion Training job time
B. Set Max concurrent iterations to 4
C. Set exit criterion Metric score threshold
D. Set Primary metric to "AUC_weighted"

Answers: A and C.

Option A is CORRECT because by using the Training job time exit criterion, you can limit the duration of the training runs.

Option B is incorrect because setting Max concurrent iterations higher than 1 results in running multiple jobs in parallel.

On its own, without a time limit or another exit criterion, this does not comply with the requirement to limit running time and cost.

Option C is CORRECT because setting the Metric score threshold exit criterion stops the run as soon as the primary metric reaches the specified minimum value.

Option D is incorrect because the primary metric used for scoring the model has, by itself, nothing to do with running time or cost. It only affects cost when combined with a Metric score threshold exit criterion.

The following is a more detailed explanation of each option and its relevance to controlling costs and minimizing running times in an AutoML experiment in Azure ML Studio.

A. Set exit criterion Training job time: This option sets a maximum amount of time that the AutoML experiment is allowed to run. Once the limit is reached, any running trials are stopped and their results are recorded. This controls cost by guaranteeing the experiment cannot run indefinitely and accumulate unnecessary compute charges, and it minimizes running time by putting a hard cap on the job's total duration.
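The time-budget behavior can be sketched in plain Python. This is a conceptual simulation only, not the Azure AutoML API; `run_automl_trials` and `trial_fn` are hypothetical names, and the real service enforces the limit server-side:

```python
import time

def run_automl_trials(trial_fn, max_job_minutes, max_trials=100):
    # Simulate the "Training job time" exit criterion: stop launching
    # new trials once the overall time budget is spent. Results of
    # trials that already finished are kept.
    deadline = time.monotonic() + max_job_minutes * 60
    results = []
    for trial_id in range(max_trials):
        if time.monotonic() >= deadline:
            break  # budget exhausted: record what we have and stop
        results.append(trial_fn(trial_id))
    return results

# Toy trial taking ~10 ms; with a ~0.06 s budget only a handful run,
# even though up to 100 trials were scheduled.
scores = run_automl_trials(
    lambda i: (time.sleep(0.01), i)[1], max_job_minutes=0.001)
```

The key point the sketch illustrates: the time limit bounds total spend regardless of how many trials were planned.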

B. Set Max concurrent iterations to 4: This option caps the number of trials that can run simultaneously. Limiting concurrency caps the compute consumed at any moment and prevents resource contention on the cluster. However, it is not an exit criterion: the run still continues until every scheduled trial has finished, so on its own it does not bound the experiment's total duration or cost.
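The effect of a concurrency cap can be illustrated with a thread pool. This is an analogy, not the AutoML implementation; `train_and_score` is a hypothetical stand-in for one trial:

```python
from concurrent.futures import ThreadPoolExecutor

def train_and_score(trial_id):
    # Stand-in for one AutoML iteration (train + evaluate one pipeline).
    # The fake score improves with trial_id just to make a clear winner.
    return {"trial": trial_id, "score": 1 - 1 / (trial_id + 2)}

# max_workers=4 mirrors "Max concurrent iterations = 4": at most four
# trials execute at once, capping compute drawn at any moment -- but
# all ten scheduled trials still run eventually.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train_and_score, range(10)))

best = max(results, key=lambda r: r["score"])
```

Note that every scheduled trial completes; the cap only throttles parallelism, which is why this control alone does not satisfy the question's requirements.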

C. Set exit criterion Metric score threshold: This option sets a minimum threshold for the primary metric used to evaluate each trial. As soon as a trial reaches this threshold, the run is stopped and its results are recorded. This controls cost by not paying for further trials after the desired performance has been met, and it minimizes running time by ending the run as soon as the results are good enough.
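The early-stop logic can be sketched as a short loop. Again a conceptual simulation, not the service's actual mechanism; `run_until_threshold` is a hypothetical name:

```python
def run_until_threshold(candidate_scores, metric_threshold):
    # Simulate the "Metric score threshold" exit criterion: stop as soon
    # as any trial's primary metric reaches the configured minimum.
    completed = []
    for score in candidate_scores:
        completed.append(score)
        if score >= metric_threshold:
            break  # good enough: stop early and save further compute
    return max(completed), completed

best, completed = run_until_threshold([0.71, 0.78, 0.86, 0.91], 0.85)
# Only the first three trials run; the fourth is never started.
```

With a threshold of 0.85, the run ends at the 0.86 trial: later (possibly better) candidates are skipped, which is exactly the cost/quality trade-off this control expresses.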

D. Set Primary metric to "AUC_weighted": This option specifies the evaluation metric used to score each trial; "AUC_weighted" is commonly used for classification tasks, particularly with imbalanced classes. While this option is not directly related to controlling costs or minimizing running times, choosing an appropriate primary metric ensures that performance is evaluated in a meaningful way and guides the selection of the best model.

In summary, the two options most directly related to controlling costs and minimizing running times are A. Set exit criterion Training job time and C. Set exit criterion Metric score threshold, since both can stop the run early. Carefully selecting the primary metric (option D) and a sensible concurrency limit (option B) still matter for running an effective experiment, but they do not by themselves bound its cost or duration.