Azure DP-100: Designing and Implementing a Data Science Solution

Cannot Be Used for Adding Log Metrics to the Run Object

Question

Runs of machine learning experiments produce many metrics and outputs that you want to track across several runs for evaluation purposes.

Logging the relevant metrics helps you diagnose errors and track model performance.

If you want to add named metrics to the runs, you can do so via the logging methods of the Run object.

Which one of the following cannot be used for adding log metrics to the Run object?

Answers

A. log()
B. log_row()
C. get_metrics()
D. log_image()

Explanations

Answer: C.

Option A is incorrect because the log() function can be used to record a numerical or string value to the run with the given name.

Option B is incorrect because log_row() is a valid function that creates a metric with multiple columns.

It can be called once to record only one row or multiple times in a loop to generate a table.

Option C is CORRECT because run.get_metrics() does not write any metrics to the run.

It is used to retrieve the metrics that have already been logged for the run.

Option D is incorrect because log_image() is a valid function that logs an image file or a matplotlib plot to the run.


In Azure Machine Learning, when you run machine learning experiments, you can use the Run object to log relevant metrics and outputs during the runs. These metrics help in evaluating and tracking the performance of the model. To add named metrics to the runs, you can use the logging methods provided by the Run object.
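For illustration, here is a minimal sketch of this pattern using the Python SDK v1 (azureml-core); the metric name and value are placeholders, and it assumes the script is submitted as an Azure ML experiment run:

```python
# Minimal sketch (azureml-core, SDK v1): obtain the Run object inside a
# submitted training script and log a named metric. The metric name and
# value below are illustrative placeholders.
from azureml.core import Run

run = Run.get_context()        # the run this script is executing under

accuracy = 0.91                # placeholder produced by your training code
run.log("accuracy", accuracy)  # records a named numeric value on the run
```

When the script runs as a submitted experiment, values logged this way appear under the run's Metrics in Azure Machine Learning studio.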

Let's take a closer look at each of the logging methods mentioned in the question; a combined code sketch after the list illustrates them:

A. log() - This method logs a single named value, given the name and value as arguments. It can be used to log a numerical or string value, such as accuracy, loss, or any other metric.

B. log_row() - This method logs a row of named values, passed as keyword arguments (one per column). It can be used to log a set of related metrics, such as precision, recall, and F1 score, all in one row.

C. get_metrics() - This method returns a dictionary of all the logged metrics for the current run. This method can be used to retrieve all the metrics logged during the run, which can be useful for evaluating the performance of the model.

D. log_image() - This method logs an image to the run, given a file path or a matplotlib plot. It can be used to log visualizations, such as plots or heatmaps, that help in analyzing the performance of the model.
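As a rough sketch (again assuming azureml-core SDK v1, with illustrative metric names and values), the three logging methods and the read-only get_metrics() call look like this:

```python
# Sketch (azureml-core, SDK v1): the three logging methods from the list,
# plus get_metrics() to read everything back. Names and values are illustrative.
import matplotlib
matplotlib.use("Agg")            # render off-screen inside a remote run
import matplotlib.pyplot as plt
from azureml.core import Run

run = Run.get_context()

# A. log(): a single named value
run.log("loss", 0.18)

# B. log_row(): one row of named columns; call it in a loop to build a table
run.log_row("pr_table", precision=0.88, recall=0.79)

# D. log_image(): an image file or a matplotlib plot
plt.plot([0.0, 0.5, 1.0], [0.0, 0.7, 1.0])
run.log_image("roc_sketch", plot=plt)

# C. get_metrics(): reads back what has been logged; it does not add metrics
print(run.get_metrics())
```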

Based on the above explanations, we can see that all the methods listed in the question can be used for adding log metrics to the Run object, except for option C. get_metrics() does not add any new metrics to the Run object; instead, it returns a dictionary of the metrics already logged for the current run. Therefore, option C is the correct answer to the question.