Classifying Iris

NOTE: This content is no longer maintained. Visit the Azure Machine Learning Notebook project for sample Jupyter notebooks for ML and deep learning with Azure Machine Learning.

This is a companion sample project for the Azure Machine Learning QuickStart and Tutorials. Using the timeless Iris flower dataset, it walks you through the basics of preparing a dataset, creating a model, and deploying it as a web service.


Select local as the execution environment, select the script, and click the Run button. You can also set the Regularization Rate by entering 0.01 in the Arguments control. Changing the Regularization Rate has an impact on the accuracy of the model, giving interesting results to explore.
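The training script itself isn't reproduced here, but as a rough sketch of what the Regularization Rate controls, here is a minimal, hypothetical example that trains a scikit-learn logistic regression model on the Iris dataset. The `train` function and the train/test split are assumptions, not the sample's actual code; note that scikit-learn's `C` parameter is the inverse of the regularization strength, so a rate of 0.01 would correspond to `C = 100`.

```python
# Minimal sketch (not the sample's actual script): train a logistic
# regression classifier on Iris with a configurable regularization rate.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train(regularization_rate=0.01):
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.35, random_state=0)
    # scikit-learn's C is the inverse of the regularization strength
    clf = LogisticRegression(C=1.0 / regularization_rate, max_iter=1000)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

accuracy = train(0.01)
print("accuracy:", accuracy)
```

Re-running `train` with different rates is one way to see how regularization changes accuracy, which is the effect the Arguments control exposes.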

Exploring results

After a run completes, you can check out the results in Run History. Exploring the Run History lets you see the correlation between the parameters you entered and the accuracy of the models. You can get individual run details by clicking a run in the Run History report, or by clicking the name of the run in the Jobs panel on the right. In this sample you will get richer results if you have matplotlib installed.
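The "richer results" typically means extra charts logged alongside the run. As a hedged illustration only (the sample's own plotting code may differ), here is how a small confusion matrix for the three Iris classes could be rendered with matplotlib; the toy label lists are made up for the example.

```python
# Sketch: render a small confusion matrix with matplotlib
# (illustrative only; not the sample's actual plotting code).
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

# toy predictions for the three Iris classes (hypothetical values)
actual    = [0, 0, 1, 1, 2, 2, 2, 1]
predicted = [0, 0, 1, 2, 2, 2, 1, 1]

# build the 3x3 confusion matrix by counting (actual, predicted) pairs
cm = [[0] * 3 for _ in range(3)]
for a, p in zip(actual, predicted):
    cm[a][p] += 1

fig, ax = plt.subplots()
ax.imshow(cm, cmap="Blues")
ax.set_xlabel("predicted class")
ax.set_ylabel("actual class")
fig.savefig("confusion_matrix.png")
```

An image saved this way is the kind of artifact that shows up attached to a run in the Run History when matplotlib is present.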

Quick CLI references

If you want to try exercising the Iris sample from the command line, here are some things to try:

First, launch the Command Prompt or PowerShell from the File menu. Then enter the following commands:

```
# first, let's install matplotlib locally
$ pip install matplotlib

# log in to Azure if you haven't done so
$ az login

# kick off many local runs sequentially
$ python
```
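The driver script for "many local runs" isn't shown above. As a rough, hypothetical sketch of what kicking off sequential local runs could look like, the following loops over a few regularization rates and builds an `az ml experiment submit` command for each; the exact command shape and the trailing rate argument are assumptions, and the actual submission call is left commented out.

```python
# Hypothetical driver: submit one local run per regularization rate.
# The real sample script may differ; the command shape is an assumption.
import subprocess

# sweep the regularization rate over three orders of magnitude
rates = [0.1, 0.01, 0.001]

commands = []
for rate in rates:
    # -c local: use the local execution environment, as in the steps above
    cmd = ["az", "ml", "experiment", "submit", "-c", "local", str(rate)]
    commands.append(cmd)
    # subprocess.run(cmd, check=True)  # uncomment to actually submit

print(commands)
```

Running the sweep sequentially like this is what makes the accuracy-vs-parameter correlation visible in Run History.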

Run the Python script in a local Python environment:

```
$ az ml experiment submit -c local
```

Run the Python script in a local Docker container:

```
$ az ml experiment submit -c docker-python
```

Run the PySpark script in a local Docker container:

```
$ az ml experiment submit -c docker-spark
```

Create a myvm run configuration to point to a Docker container on a remote VM:

```
$ az ml computetarget attach remotedocker --name myvm --address <address> --username <username> --password <password>

# prepare the environment
$ az ml experiment prepare -c myvm
```

Run the PySpark script in a Docker container (with Spark) on a remote VM:

```
$ az ml experiment submit -c myvm
```

Create a myhdi run configuration to point to an HDI cluster:

```
$ az ml computetarget attach cluster --name myhdi --address <address> --username <username> --password <password>

# prepare the environment
$ az ml experiment prepare -c myhdi
```

Run in a remote HDInsight cluster:

```
$ az ml experiment submit -c myhdi
```