
Deploying Your ML Model on AKS (Azure DP-100 Exam)

Question

After successfully training your ML model and selecting the best run, you are about to deploy it as a web service to the production environment.

Because you anticipate a massive number of requests to be handled by the service, you choose AKS as the compute target.

You want to use the following script to deploy your model:

# deploy model
from azureml.core.model import Model

service = Model.deploy(workspace=ws,
                       name='my-inference-service',
                       models=[classification_model],
                       inference_config=my_inference_config,
                       deployment_config=my_deploy_config,
                       deployment_target=my_production_cluster)
service.wait_for_deployment(show_output=True)
After running the deployment script, you receive an error.

After a short investigation you find that an important setting is missing from the inference_config definition:

# inference config
from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(runtime="python",
                                   source_directory='my_files',
                                   <insert code here>,
                                   conda_file="environment.yml")
You decide to add cluster_name = 'aks-cluster'. Does this solve the problem?

Answers

A. Yes
B. No

Answer: B.

Explanations

Option A is incorrect because the InferenceConfig object combines two important things: the entry script and the environment definition.

The entry_script parameter, which defines the path to the file containing the scoring code to execute, is the missing setting and must be provided, e.g. entry_script="my_scoring.py".
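Putting that fix into the snippet from the question, the completed inference configuration would look like this (a minimal sketch reusing the file and directory names given above):

# inference config with the missing entry_script added
from azureml.core.model import InferenceConfig

inference_config = InferenceConfig(runtime="python",
                                   source_directory='my_files',
                                   entry_script="my_scoring.py",
                                   conda_file="environment.yml")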

Option B is CORRECT because the cluster_name parameter, although important for the deployment, is part of the compute target configuration (in your case, my_production_cluster), which is set elsewhere in your code, not of the InferenceConfig.
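For context, the cluster name is typically supplied when the AKS compute target is retrieved from the workspace; a minimal sketch, assuming the cluster is already attached to the workspace under the name 'aks-cluster':

# the AKS cluster is identified by name when retrieving the compute target,
# which is then passed to Model.deploy as deployment_target
from azureml.core.compute import AksCompute

my_production_cluster = AksCompute(ws, 'aks-cluster')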


Adding the cluster_name = 'aks-cluster' parameter to the InferenceConfig definition does not solve the problem, because cluster_name is not a valid parameter of the InferenceConfig class.

The InferenceConfig class defines how the model should be served: it typically specifies the entry script, any dependencies or packages needed, and the runtime environment. The name of the AKS cluster, on the other hand, belongs to the compute target (my_production_cluster) that is passed to Model.deploy as deployment_target; it is not an InferenceConfig setting.

Therefore, the way to point the deployment at a specific cluster is to retrieve that cluster as a compute target (as shown above) and pass it as deployment_target, while the deployment configuration only defines the resources of the web service, for example:

# deployment config
from azureml.core.webservice import AksWebservice

my_deploy_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

Together with the entry_script fix, this should allow the deployment script to deploy the model as a web service to the AKS cluster referenced by my_production_cluster, assuming that all other necessary parameters and settings are correctly specified.
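As a quick sanity check after rerunning the deployment, the service state and container logs can be inspected (a minimal sketch using the standard Webservice members):

# verify the deployment outcome and pull logs if something went wrong
print(service.state)        # e.g. 'Healthy' once the deployment succeeds
print(service.get_logs())   # container logs, useful when the scoring script fails to load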