New features for Azure Machine Learning are now available
Published date: May 06, 2019
Features include:
- Model Interpretability - Machine learning interpretability lets data scientists explain machine learning models globally on all data, or locally on a specific data point, using state-of-the-art technologies in an easy-to-use and scalable fashion. It incorporates technologies developed by Microsoft and proven third-party libraries (for example, SHAP and LIME). The SDK creates a common API across the integrated libraries and integrates with Azure Machine Learning services. A minimal sketch using one of these libraries follows below.
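As an illustration, here is a hedged sketch that calls SHAP directly, one of the third-party libraries the SDK wraps; the model and dataset are placeholders, and the SDK's own wrapper API may differ from what is shown.

```python
# Minimal sketch: global and local explanations with SHAP, one of the
# libraries the interpretability SDK builds on. Model and dataset here
# are illustrative only; the SDK's wrapper API may differ.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer is specialized for tree models; KernelExplainer is
# the model-agnostic alternative.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global explanation: mean absolute contribution of each feature over all data.
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, global_importance),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.4f}")

# Local explanation: per-feature contributions for a single data point.
print(dict(zip(data.feature_names, shap_values[0])))
```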
- Forecasting via Automated ML, Automated ML advancements, and Automated ML support on Databricks, Cosmos DB, and HDInsight:
- Automated ML automates parts of the ML workflow, reducing the time it takes to build ML models, freeing data scientists to focus on higher-value work, and simplifying ML for a wider audience. We have announced:
- Forecasting is now GA, with new features (a configuration sketch follows this list)
- Databricks, SQL, Cosmos DB, and HDInsight integrations
- Explainability is now GA, with improved performance
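As a rough illustration of configuring a forecasting run, here is a minimal sketch using the AutoMLConfig class from the Azure ML Python SDK; the dataset, column names, and settings are hypothetical, and parameter names may differ across SDK versions.

```python
# Minimal sketch: an automated ML forecasting configuration with the
# Azure ML Python SDK. Data, column names, and settings are illustrative.
import pandas as pd
from azureml.train.automl import AutoMLConfig

# Hypothetical time series: one row per date with the value to forecast.
train = pd.read_csv("sales_history.csv")  # columns: date, store, sales
X_train = train.drop(columns=["sales"])
y_train = train["sales"].values

automl_config = AutoMLConfig(
    task="forecasting",                   # forecasting is now GA
    primary_metric="normalized_root_mean_squared_error",
    X=X_train,
    y=y_train,
    time_column_name="date",              # identifies the time axis
    max_horizon=14,                       # forecast 14 periods ahead
    iterations=20,
    n_cross_validations=3,
)

# The config is then submitted as an experiment, e.g.:
# from azureml.core import Workspace, Experiment
# ws = Workspace.from_config()
# run = Experiment(ws, "sales-forecast").submit(automl_config)
```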
- .NET integration: The ML.NET 1.0 release is the first major milestone of a journey in the open that started in May 2018, when we released ML.NET 0.1 as open source. Since then we have shipped monthly releases: 12 previews plus this final 1.0 release. ML.NET is an open-source, cross-platform machine learning framework for .NET developers. Using ML.NET, developers can leverage their existing tools and skill sets to develop and infuse custom AI into their applications by creating custom machine learning models for common scenarios like sentiment analysis, recommendation, image classification, and more. You can use NimbusML, the ML.NET Python bindings, to use ML.NET with Azure Machine Learning. NimbusML enables data scientists to use ML.NET to train models in Azure Machine Learning or anywhere else they use Python; a minimal sketch follows below. The trained machine learning model can then easily be consumed in a .NET application with the ML.NET PredictionEngine.
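As a hedged sketch of the NimbusML workflow mentioned above, here is a minimal Python training example; the data and the choice of estimator are placeholders, and exact estimator names may vary by NimbusML version.

```python
# Minimal sketch: training an ML.NET model from Python via NimbusML's
# scikit-learn-style API. Data and estimator choice are illustrative.
import pandas as pd
from nimbusml.linear_model import LogisticRegressionBinaryClassifier

# Hypothetical tabular data with a binary label.
train = pd.DataFrame({
    "feature1": [0.1, 0.9, 0.4, 0.8, 0.3, 0.7],
    "feature2": [1.0, 0.2, 0.7, 0.1, 0.9, 0.3],
    "label":    [0,   1,   0,   1,   0,   1],
})
X, y = train[["feature1", "feature2"]], train["label"]

# fit/predict mirror scikit-learn, but training runs on ML.NET underneath.
model = LogisticRegressionBinaryClassifier().fit(X, y)
print(model.predict(X))
```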
- First-class Azure DevOps support for experiments, pipelines, model registration, validation, and deployment: Azure Machine Learning has a mission to simplify the end-to-end machine learning lifecycle, including data prep, model training, model packaging, validation, and model deployment. To enable this, we are launching the following services:
- Environment, code, and data versioning services, integrated into the Azure ML Audit Trail
- The Azure DevOps extension for Machine Learning and the Azure ML CLI
- A simplified experience for validating and deploying ML models (see the sketch after this list). Microsoft enables you to adopt ML quickly by accelerating your time to a production-ready, cloud-native ML solution. Production readiness is defined as:
- Reproducible model training pipelines
- The ability to provably validate, profile, and track models before release
- Enterprise-class rollout and integrated observability, respecting all appropriate security guidelines
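To make the registration-and-deployment flow concrete, here is a minimal sketch using the Model class from the azureml-core Python SDK; the workspace configuration, file paths, and service name are hypothetical placeholders, and the equivalent steps are also available through the Azure ML CLI and the Azure DevOps extension.

```python
# Minimal sketch: registering and deploying a model with the azureml-core SDK.
# Workspace config, paths, and names below are illustrative placeholders.
from azureml.core import Workspace
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # reads a local config.json for the workspace

# Register a trained model file into the workspace's model registry.
model = Model.register(workspace=ws,
                       model_path="outputs/model.pkl",  # local file
                       model_name="sklearn-classifier",
                       tags={"stage": "validated"})

# Describe how to serve the model: a scoring script plus its environment.
inference_config = InferenceConfig(entry_script="score.py",
                                   runtime="python",
                                   conda_file="env.yml")

# Deploy to Azure Container Instances for a simple validation endpoint.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
                                                       memory_gb=1)
service = Model.deploy(ws, "classifier-svc", [model],
                       inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
```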
- ONNX Runtime with TensorRT: We are excited to announce the general availability of the NVIDIA TensorRT execution provider in ONNX Runtime, enabling developers to easily leverage industry-leading GPU acceleration regardless of their choice of framework. Developers can accelerate inferencing of ONNX models, which can be exported or converted from PyTorch, TensorFlow, and many other popular frameworks. ONNX Runtime together with its TensorRT execution provider accelerates the inferencing of deep learning models on NVIDIA hardware, letting developers run ONNX models across different flavors of hardware and build applications with the flexibility to target different hardware configurations. The architecture abstracts out the details of the hardware-specific libraries that are essential to optimizing the execution of deep neural networks; a short usage sketch follows below.
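As a rough sketch of how an application selects the TensorRT path, here is an onnxruntime Python example; it assumes a TensorRT-enabled onnxruntime build, and the providers argument shown here comes from later onnxruntime releases (older builds select providers differently). The model file and input shape are placeholders.

```python
# Minimal sketch: running an ONNX model with the TensorRT execution provider.
# Assumes a TensorRT-enabled onnxruntime build; model and input shape are
# illustrative placeholders.
import numpy as np
import onnxruntime as ort

# Providers are tried in order; onnxruntime falls back to CUDA or CPU for
# any operators the TensorRT provider cannot handle.
session = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider",
               "CUDAExecutionProvider",
               "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image batch

outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```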
- FPGA-based Hardware Accelerated Models: FPGAs are a machine learning inferencing option based on Project Brainwave, a hardware architecture from Microsoft. Data scientists and developers can use FPGAs to accelerate real-time AI calculations. These Hardware Accelerated Models are now generally available in the cloud, along with a preview of models deployed to Data Box Edge. FPGAs offer performance, flexibility, and scale, and are available only through Azure Machine Learning. They make it possible to achieve low latency for real-time inferencing requests, mitigating the need for asynchronous (batched) requests.