Azure Machine Learning - General Availability for Build
Published date: 23 May 2023
Use AzureML Registries to share ML pipelines and models across teams and workspaces: You can now train a model in the “dev” workspace and deploy it to the “test” and “prod” workspaces, all while tracking lineage across the entire lifecycle.
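A registry is created from a small YAML definition and its assets are then referenced by URI from any workspace. A minimal sketch using the CLI v2 (registry name and regions are illustrative):

```yaml
# registry.yml -- create with: az ml registry create --file registry.yml
name: my-shared-registry
location: eastus
replication_locations:
  - location: eastus
  - location: westus2
```

Assets shared this way can be addressed from any workspace with a registry URI of the form `azureml://registries/my-shared-registry/models/<model-name>/versions/<version>`.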
Optimize training with the Azure Container for PyTorch (ACPT): You can now reduce setup costs and time with pre-installed, validated, and up-to-date software.
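One way to pick this up is to reference a curated ACPT environment from a command job. A hedged sketch (the environment name, version, and compute target are illustrative; check the curated environments list in your workspace for the current names):

```yaml
# command-job.yml -- submit with: az ml job create --file command-job.yml
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python train.py
code: ./src
# Curated ACPT environment from the shared azureml registry (illustrative):
environment: azureml://registries/azureml/environments/AzureML-ACPT-pytorch-1.13-py38-cuda11.7-gpu/labels/latest
compute: azureml:gpu-cluster
```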
Configure AzureML Spark to perform data wrangling: You can now perform data wrangling at scale in the AzureML ecosystem before training your machine learning model.
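Data wrangling can run as a standalone Spark job on serverless compute. A sketch of a job definition, assuming CLI v2 (script name, Spark configuration, and instance type are illustrative):

```yaml
# spark-job.yml -- submit with: az ml job create --file spark-job.yml
$schema: https://azuremlschemas.azureedge.net/latest/sparkJob.schema.json
type: spark
code: ./src
entry:
  file: wrangle.py        # your PySpark wrangling script
conf:
  spark.driver.cores: 1
  spark.driver.memory: 2g
  spark.executor.cores: 2
  spark.executor.memory: 2g
  spark.executor.instances: 2
resources:
  instance_type: standard_e4s_v3
  runtime_version: "3.2"
```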
Mirror traffic for Managed Online Endpoints: You can now test new deployments with live (production) traffic without impacting your SLA.
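Mirroring is configured per deployment on the endpoint. A sketch assuming a live "blue" deployment and a candidate "green" deployment; mirrored requests are copied to "green" while callers continue to receive responses only from "blue":

```yaml
# endpoint.yml -- "blue" serves 100% of live traffic; a 10% copy is also
# sent to "green" for testing (endpoint and deployment names illustrative)
name: my-endpoint
auth_mode: key
traffic:
  blue: 100
mirror_traffic:
  green: 10
```

The mirror percentage can also be adjusted in place, e.g. `az ml online-endpoint update --name my-endpoint --mirror-traffic "green=10"`.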
Create and manage your schedules in AzureML studio: You can now create schedules to automatically run jobs on a regular basis.
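Schedules are defined against an existing job specification. A minimal sketch using the CLI v2 schedule YAML (names, cron expression, and job file are illustrative):

```yaml
# schedule.yml -- create with: az ml schedule create --file schedule.yml
$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
name: nightly-training
trigger:
  type: cron
  expression: "0 2 * * *"   # every day at 02:00
  time_zone: "UTC"
create_job: ./pipeline-job.yml   # the job to run on each trigger
```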
Recover deleted workspace data by enabling soft delete: You can now work with an added layer of protection against accidental workspace deletion.
Audit and observe compute instance OS version: You can now track security patch compliance of a compute instance; the OS version is surfaced and you are notified when updates are available.
Read AzureML datastore URIs in pandas/dask (via fsspec integration): You can now read and perform exploratory data analysis on AzureML-registered datastores and registered data assets without needing to learn AzureML-specific data concepts.
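Because the integration follows fsspec conventions, a datastore URI drops into the same pandas calls used for any other source. A sketch, with the URI placeholders illustrative and a local stand-in shown so the call shape is concrete:

```python
import io
import pandas as pd

# An AzureML datastore URI follows this fsspec-style pattern (placeholders
# are illustrative -- substitute your own identifiers):
uri = (
    "azureml://subscriptions/<sub-id>/resourcegroups/<rg-name>"
    "/workspaces/<ws-name>/datastores/<datastore-name>/paths/titanic.csv"
)

# With the azureml-fsspec package installed, pandas resolves that URI
# directly, e.g. df = pd.read_csv(uri) -- no AzureML-specific data APIs.

# The call shape is identical to reading any other source; a local
# stand-in for demonstration:
df = pd.read_csv(io.StringIO("survived,age\n1,29\n0,40\n"))
print(df.shape)  # prints (2, 2)
```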
Enable Delta Support for Tabular Datasets: You can now read Delta Lake via AzureML MLTable without Spark clusters.
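A Delta Lake folder is described in an MLTable file with a `read_delta_lake` transformation; no Spark cluster is involved at read time. A sketch (storage path and version pin are illustrative):

```yaml
# MLTable file placed alongside the data asset definition
$schema: https://azuremlschemas.azureedge.net/latest/MLTable.schema.json
paths:
  - folder: abfss://container@account.dfs.core.windows.net/delta-table
transformations:
  - read_delta_lake:
      version_as_of: 1   # or pin with timestamp_as_of
```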
Debug and monitor your training jobs: You can now use the SDK v2, the CLI v2, or AzureML studio to reserve the compute resources you need with a custom environment, connect to the job container through familiar training applications to iterate on your code, monitor your training job, and debug it remotely, much as you would on a local machine.
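Interactive access is requested by attaching services to the job definition. A sketch of the `services` section of a command job YAML (service names, environment, and compute are illustrative):

```yaml
# command-job.yml fragment -- interactive services attached to the job
command: python train.py
code: ./src
environment: azureml:my-env:1
compute: azureml:my-cluster
services:
  my_jupyterlab:
    type: jupyter_lab     # browser-based iteration inside the job container
  my_vscode:
    type: vs_code         # remote debugging from VS Code
  my_tensorboard:
    type: tensor_board
    log_dir: "outputs/tblogs"   # folder your script writes TensorBoard logs to
```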