AI is fueling the next wave of transformative innovations that will change the world. With Azure AI, we empower organizations to easily:
- Use machine learning to build predictive models that optimize business processes
- Utilize advanced vision, speech, and language capabilities to build applications that deliver personalized and engaging experiences
- Apply knowledge mining to uncover latent insights from vast repositories of files
Building on our announcements at Microsoft Ignite in September, I’m excited to share several new capabilities we are announcing at Microsoft Connect(); to make it easier for organizations to apply AI to transform their businesses.
Azure Machine Learning service general availability
Today, we are happy to announce the general availability of Azure Machine Learning service. With Azure Machine Learning service, you can quickly and easily build, train, and deploy machine learning models anywhere, from the intelligent cloud to the intelligent edge. With features like automated machine learning, organizations can accelerate model development by identifying suitable algorithms and machine learning pipelines faster, reducing development time from days to hours. With hyperparameter tuning, organizations can automatically search for the parameter settings that yield the most accurate model.
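As a rough illustration of that workflow, here is a minimal sketch of submitting an automated machine learning run with the Azure Machine Learning Python SDK; the dataset name, label column, and metric are placeholders, and the exact SDK surface may vary by version. Hyperparameter tuning follows a similar pattern through the SDK’s HyperDrive configuration.

```python
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

# Connect to an existing Azure Machine Learning workspace (config.json downloaded from the portal).
ws = Workspace.from_config()

# Load a registered tabular dataset; "customer-churn" and "churned" are placeholder names.
train_data = Dataset.get_by_name(ws, name="customer-churn")

# Let automated ML search over algorithms and featurization pipelines on our behalf.
automl_config = AutoMLConfig(
    task="classification",
    training_data=train_data,
    label_column_name="churned",
    primary_metric="AUC_weighted",
)

# Submit the experiment and retrieve the best run and fitted model it found.
experiment = Experiment(ws, "automl-churn")
run = experiment.submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()
```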
Once the model is developed, organizations can easily deploy and manage their models in the cloud and on the edge, including IoT devices, with integrated continuous integration and delivery (CI/CD) tooling. Developers can use their favorite Python development environments, such as Visual Studio Code, Visual Studio, PyCharm, Azure Databricks notebooks, or Jupyter notebooks. We are excited to see customers like TAL already transforming their business with Azure Machine Learning service.
“The life insurance industry is facing massive change, and at TAL, we want to be the leaders of that. That means leveraging all the new opportunities presented by data and machine learning. And Azure Machine Learning has enabled us to put these tools into production very quickly and get to market much quicker,” says Dan Taylor, General Manager of Innovation, TAL.
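Picking up the deployment step described above, here is a sketch of turning a registered model into a web service with the Azure Machine Learning SDK; the model name, entry script, and environment file are hypothetical, and Azure Container Instances stands in for any supported target such as AKS or IoT Edge. The SDK surface shown may differ slightly between package versions.

```python
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# A model previously registered in the workspace; "churn-model" is a placeholder name.
model = Model(ws, name="churn-model")

# score.py must define init() and run(data); the environment pins the scoring dependencies.
env = Environment.from_conda_specification(name="scoring-env", file_path="environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy to Azure Container Instances; the same pattern targets AKS or IoT Edge modules.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, "churn-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```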
To learn more, please check out our announcement blog, “Announcing the general availability of Azure Machine Learning service.”
Open Neural Network Exchange (ONNX) Runtime now open source
Microsoft is committed to making AI more accessible to all organizations. We are excited to announce that the Open Neural Network Exchange (ONNX) Runtime is now open source. ONNX is an open format for representing machine learning models that enables data scientists and developers to use the frameworks and tools that work best for them, including PyTorch, TensorFlow, scikit-learn, and more. ONNX Runtime is the first inference engine that fully supports the ONNX specification, and it delivers an average 2x performance gain. Leading hardware companies such as Qualcomm, Intel, and NVIDIA are actively working to integrate their custom accelerators into ONNX Runtime. To learn more, please check out our announcement blog, “ONNX Runtime is now open source.”
“The introduction of ONNX Runtime is a positive next step in further driving framework interoperability, standardization, and performance optimization across multiple device categories and we expect developers to welcome support for ONNX Runtime on Snapdragon mobile platforms,” says Gary Brotman, Senior Director, Qualcomm Technologies.
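To make the interoperability concrete, here is a minimal sketch that exports a toy PyTorch model to the ONNX format and scores it with ONNX Runtime; the model architecture and input shape are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A toy PyTorch model standing in for any trained network.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export to the ONNX format; the dummy input fixes the expected input shape.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Score the exported model with the ONNX Runtime inference engine.
session = ort.InferenceSession("model.onnx")
outputs = session.run(None, {"input": np.random.rand(1, 4).astype(np.float32)})
print(outputs[0])
```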
Language Understanding now available in Azure Cognitive Services containers preview
Recently we announced the preview of Azure Cognitive Services containers, making it possible to build intelligent applications that span the cloud and the edge, including IoT devices. Today, we are excited to announce that Language Understanding is now available as part of the preview. With Cognitive Services containers, you can quickly and easily add cognitive capabilities such as object detection, vision recognition, and now language understanding to your apps, without needing deep data science skills. Cognitive Services containers enable customers to build one application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality.
“Azure Cognitive Services containers give you more options on how you grow and deploy AI solutions, either on or off premises, with consistent performance. You can scale up as workload intensity increases or scale out to the edge,” says Andy Vargas, VP of Software and Services, Intel.
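Because a Cognitive Services container exposes the same REST surface as the hosted service, calling it looks just like calling the cloud endpoint, only against a local host. Below is a hedged sketch of querying a Language Understanding container assumed to be running locally with an exported app already provisioned; the app ID, port, and URL path are assumptions and may differ from your deployment.

```python
import requests

# Endpoint of a Language Understanding container running locally (port and path are assumptions).
LUIS_ENDPOINT = "http://localhost:5000/luis/v2.0/apps/{app_id}"
APP_ID = "00000000-0000-0000-0000-000000000000"  # placeholder app ID

def get_intent(utterance: str) -> dict:
    """Send an utterance to the local container and return the prediction JSON."""
    url = LUIS_ENDPOINT.format(app_id=APP_ID)
    response = requests.get(url, params={"q": utterance, "verbose": "true"})
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = get_intent("Turn off the living room lights")
    print(result.get("topScoringIntent"))
```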
Custom translation capability general availability
Today, we are also pleased to announce the general availability of the custom translation capability in Cognitive Services. Custom translation enables you to build customized neural machine translation (NMT) systems that integrate seamlessly into existing applications, workflows, and websites. With general availability, custom translation builds on the strengths of the Translator Text Cognitive Service, which powers billions of translations every day and supports more than three dozen languages.
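For a sense of how a custom system is consumed, here is a rough sketch of calling the Translator Text API v3 and selecting a custom-trained model through the category parameter; the subscription key and category ID are placeholders, and some resources may also require a region header.

```python
import requests

TRANSLATE_URL = "https://api.cognitive.microsofttranslator.com/translate"

def translate(text: str, to_lang: str, category_id: str, key: str) -> str:
    """Translate text using a custom-trained model selected by its category ID."""
    params = {"api-version": "3.0", "to": to_lang, "category": category_id}
    headers = {
        "Ocp-Apim-Subscription-Key": key,  # Translator Text subscription key
        "Content-Type": "application/json",
        # Some resources also require "Ocp-Apim-Subscription-Region".
    }
    body = [{"Text": text}]
    response = requests.post(TRANSLATE_URL, params=params, headers=headers, json=body)
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]

# Usage (placeholder values):
# print(translate("Contoso ships worldwide.", "de", "my-category-id", "MY_KEY"))
```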
I look forward to speaking about these announcements and more during my keynote on AI at DevIntersection on December 6, 2018. As we continue to make Azure the best place for AI, we are excited to see how organizations will transform their businesses. The opportunities are limitless. Get started today.