Deploying ML Model on AKS with Azure ML | DP-100 Exam Solution


Question

After successfully training your ML model and selecting the best run, you are about to deploy it as a web service to the production environment.

Because you anticipate a massive volume of requests to be handled by the service, you choose AKS (Azure Kubernetes Service) as the compute target.

You want to use the following script to deploy your model:

# deploy model
from azureml.core.model import Model

service = Model.deploy(workspace=ws,
                       name='my-inference-service',
                       models=[classification_model],
                       inference_config=my_inference_config,
                       deployment_config=my_deploy_config,
                       deployment_target=my_production_cluster)

service.wait_for_deployment(show_output=True)
After running the deployment script, you receive an error.

After a short investigation, you find that an important setting is missing from the inference_config definition:

# inference config
from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(runtime="python",
                                   source_directory='my_files',
                                   <insert code here>,
                                   conda_file="environment.yml")
You decide to add entry_script="my_scoring.py". Does this solve the problem?

Answers



A. Yes

B. No

Answer: A.

Explanation

Option A is CORRECT because the InferenceConfig object combines two important things: the entry script and the environment definition.

The entry_script parameter defines the path to the file that contains the scoring code to execute, so it must be set.
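For context, an Azure ML entry script must define an init() function (run once when the service starts) and a run() function (invoked for each scoring request). Below is a minimal sketch of what my_scoring.py could look like; the model file name model.pkl and the use of joblib are assumptions for illustration:

# my_scoring.py - minimal sketch of an Azure ML entry script
import json
import os
import joblib  # assumes the model was serialized with joblib

def init():
    # Runs once when the service starts: load the registered model.
    # AZUREML_MODEL_DIR points to the folder containing the model files;
    # 'model.pkl' is an assumed file name for this illustration.
    global model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.pkl')
    model = joblib.load(model_path)

def run(raw_data):
    # Runs for every request: parse the input JSON, predict, return JSON.
    data = json.loads(raw_data)['data']
    predictions = model.predict(data)
    return json.dumps({'predictions': predictions.tolist()})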

Option B is incorrect because the entry_script parameter is exactly the setting that is missing from the InferenceConfig definition; adding it does solve the problem.


The exact error message is not provided in the question, but the missing setting in the inference_config definition is the entry script, which specifies the code used to score the model. Without it, the deployment fails because the service has no scoring code to run.

By adding the entry_script parameter to the inference_config definition, the scoring code is specified and the model can be deployed successfully. The my_scoring.py file should contain the scoring logic for the model: how the input data is processed and how the model makes predictions.
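Putting it together, the corrected inference_config uses the same values as in the question, with the missing parameter filled in:

# corrected inference config
from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(runtime="python",
                                   source_directory='my_files',
                                   entry_script="my_scoring.py",
                                   conda_file="environment.yml")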

Therefore, the answer is A: yes, adding the entry_script parameter solves the problem.
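As a side note, the my_deploy_config and my_production_cluster objects referenced in the deployment script are not defined in the question. A plausible sketch is shown below, where the cluster name 'aks-prod' and the resource values are illustrative assumptions:

# attach to an existing AKS cluster ('aks-prod' is an assumed name)
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice

my_production_cluster = AksCompute(ws, 'aks-prod')

# deployment configuration sized for a high-volume service (values are illustrative)
my_deploy_config = AksWebservice.deploy_configuration(cpu_cores=2,
                                                      memory_gb=4,
                                                      autoscale_enabled=True)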