The potential of generative AI is much bigger than any of us can imagine today. From healthcare to manufacturing to retail to education, AI is transforming entire industries and fundamentally changing the way we live and work. At the heart of all that innovation are developers, pushing the boundaries of possibility and creating new business and societal value even faster than many thought possible. Trusted by organizations around the world with mission-critical application workloads, Azure is the place where developers can build with generative AI securely, responsibly, and with confidence.
Welcome to Microsoft Build 2023—the event where we celebrate the developer community. This year, we’ll dive deep into the latest technologies across application development and AI that are enabling the next wave of innovation. First, it’s about bringing you state-of-the-art, comprehensive AI capabilities and empowering you with the tools and resources to build with AI securely and responsibly. Second, it’s about giving you the best cloud-native app platform to harness the power of AI in your own business-critical apps. Third, it’s about the AI-assisted developer tooling to help you securely ship the code only you can build.
We’ve made announcements in all key areas to empower you and help your organizations lead in this new era of AI.
Bring your data to life with generative AI
Generative AI has quickly become the generation-defining technology shaping how we search and consume information every day, and it’s been wonderful to see customers across industries embrace Microsoft Azure OpenAI Service. In March, we announced the preview of OpenAI’s GPT-4 in Azure OpenAI Service, making it possible for developers to integrate custom AI-powered experiences directly into their own applications. Today, OpenAI’s GPT-4 is generally available in Azure OpenAI Service, and we’re building on that announcement with several new capabilities you can use to apply generative AI to your data and to orchestrate AI with your own systems.
We’re excited to share our new Azure AI Studio. With just a few clicks, developers can now ground powerful conversational AI models, such as OpenAI’s ChatGPT and GPT-4, on their own data. With Azure OpenAI Service on your data, coming to public preview, and Azure Cognitive Search, employees, customers, and partners can discover information buried in volumes of data, text, and images through natural language app interfaces. Create richer experiences and help users find organization-specific insights, such as inventory levels or healthcare benefits.
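Grounding a chat model on your own data typically follows a retrieval-augmented pattern: retrieve the most relevant passages from a search index, then pass them to the model as context alongside the user’s question. Here is a minimal, self-contained sketch of that pattern; the keyword retriever and prompt format are illustrative stand-ins, not the actual Azure Cognitive Search or Azure OpenAI APIs.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The toy in-memory
# retriever and prompt template below are illustrative only.

DOCUMENTS = {
    "benefits.md": "Employees receive full healthcare benefits after 30 days.",
    "inventory.md": "Warehouse A currently holds 1,200 units of SKU-42.",
}

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Toy keyword retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = []
    for name, text in DOCUMENTS.items():
        overlap = len(q_words & set(text.lower().split()))
        scored.append((overlap, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

def build_grounded_prompt(query: str) -> str:
    """Assemble the prompt sent to the chat model: context first, then the question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_grounded_prompt("What are the healthcare benefits?")
```

In production, the retrieval step would query a Cognitive Search index and the assembled prompt would go to a deployed chat model, but the shape of the flow is the same.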
To further extend the capabilities of large language models, we are excited to announce that Azure Cognitive Search will power vector search in Azure (in private preview), with the ability to store, index, and query vector embeddings of organizational data, including text, images, audio, video, and graphs. Furthermore, support for plugins with Azure OpenAI Service, in private preview, will simplify integrating external data sources and streamline the process of building and consuming APIs. Available plugins include Azure Cognitive Search, Azure SQL, Azure Cosmos DB, Microsoft Translator, and Bing Search. We are also enabling a Provisioned Throughput Model, soon generally available in limited access, to offer dedicated capacity.
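The core idea behind vector search is simple: represent each piece of content as an embedding vector, then rank stored vectors by similarity to a query vector. The sketch below shows that idea with tiny three-dimensional vectors standing in for real embeddings (which typically have hundreds or thousands of dimensions); it is a conceptual illustration, not the Azure Cognitive Search vector API.

```python
# Conceptual sketch of vector search: store embeddings, then rank them by
# cosine similarity to a query embedding. The 3-dimensional vectors and
# document ids here are made up for illustration.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# A toy "index": document id -> embedding vector.
index = {
    "doc-inventory": [0.9, 0.1, 0.0],
    "doc-benefits": [0.1, 0.8, 0.3],
    "doc-translator": [0.0, 0.2, 0.9],
}

def search(query_embedding: list[float], top_k: int = 2) -> list[str]:
    """Return the ids of the top_k documents most similar to the query."""
    ranked = sorted(
        index,
        key=lambda doc_id: cosine_similarity(query_embedding, index[doc_id]),
        reverse=True,
    )
    return ranked[:top_k]

results = search([0.85, 0.15, 0.05])
```

A managed service adds the pieces that matter at scale, approximate nearest-neighbor indexing and filtering among them, but the ranking principle is the one shown here.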
Customers are already benefiting from Azure OpenAI Service today, including DocuSign, Volvo, Ikea, Crayon, and 4,500 others. Learn more about what’s new with Azure OpenAI Service.
We continue to innovate across our AI portfolio, including new capabilities in Azure Machine Learning, so developers and data scientists can use the power of generative AI with their data. Foundation models in Azure Machine Learning, now in preview, empower data scientists to fine-tune, evaluate, and deploy open-source models curated by Azure Machine Learning, models from the Hugging Face Hub, and models from Azure OpenAI Service, all in a unified model catalog. This gives data scientists a comprehensive repository of popular models directly within the Azure Machine Learning registry.
We are also excited to announce the upcoming preview of Azure Machine Learning prompt flow, which will provide a streamlined experience for prompting, evaluating, tuning, and operationalizing large language models. With prompt flow, you can quickly create prompt workflows that connect to various language models and data sources. This lets you build intelligent applications and assess the quality of your workflows to choose the best prompt for your use case. See all the announcements for Azure Machine Learning.
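The evaluate-and-tune loop that a tool like prompt flow automates can be sketched in a few lines: run several prompt variants against a set of test cases, score the outputs, and keep the best-scoring variant. The "model" below is a stub standing in for a real language model call, and the variants and test case are invented for illustration.

```python
# Toy sketch of prompt evaluation: score each prompt variant against test
# cases and select the best one. fake_model() is a stub, not a real LLM.

def fake_model(prompt: str) -> str:
    # Stub: a real implementation would call a deployed language model.
    return "42" if "Answer with a number only" in prompt else "The answer is 42."

VARIANTS = [
    "Q: {question}",
    "Q: {question}\nAnswer with a number only.",
]

TEST_CASES = [{"question": "What is 6 x 7?", "expected": "42"}]

def score(variant: str) -> float:
    """Fraction of test cases where the output exactly matches the expectation."""
    hits = 0
    for case in TEST_CASES:
        output = fake_model(variant.format(question=case["question"]))
        hits += output == case["expected"]
    return hits / len(TEST_CASES)

best = max(VARIANTS, key=score)
```

Prompt flow layers tracing, batch evaluation, and deployment on top of this loop, but choosing among prompts by measured quality is the heart of it.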
It’s great to see momentum for machine learning with customers like Swift, a member-owned cooperative that provides a secure global financial messaging network, which is using Azure Machine Learning to develop an anomaly detection model with federated learning techniques, enhancing global financial security without compromising data privacy. We cannot wait to see what our customers build next.
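Federated learning’s privacy property comes from what travels over the wire: each participant trains on its own data locally and shares only model parameters, never raw records. The following minimal federated-averaging sketch illustrates that idea with a deliberately trivial "model" (a weighted mean); it is an illustration of the technique, not Swift’s actual anomaly-detection system.

```python
# Minimal federated-averaging sketch: clients share only fitted parameters,
# the server combines them, and raw data never leaves each client.
# The two "banks" and their transaction amounts are invented examples.

def local_update(transactions: list[float]) -> dict:
    """Each client fits a trivial 'model': the mean and count of its own data."""
    return {"mean": sum(transactions) / len(transactions), "n": len(transactions)}

def federated_average(updates: list[dict]) -> float:
    """Server combines parameters, weighted by each client's data size."""
    total = sum(u["n"] for u in updates)
    return sum(u["mean"] * u["n"] for u in updates) / total

# Raw data stays on each client; only the updates travel.
bank_a = [100.0, 110.0, 95.0]
bank_b = [200.0, 210.0]
global_mean = federated_average([local_update(bank_a), local_update(bank_b)])

def is_anomalous(amount: float, mean: float, threshold: float = 3.0) -> bool:
    """Flag transactions far from the globally learned mean."""
    return amount > threshold * mean
```

A real system would exchange gradients or model weights over many rounds, but the division of labor, local training plus server-side aggregation, is the same.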
Run and scale AI-powered, intelligent apps on Azure
Azure’s cloud-native platform is the best place to run and scale applications while seamlessly embedding Azure’s native AI services. Azure gives you the choice between control and flexibility, with a complete focus on productivity whichever option you choose.
Azure Kubernetes Service (AKS) offers you complete control and the quickest way to start developing and deploying intelligent, cloud-native apps in Azure, datacenters, or at the edge with built-in code-to-cloud pipelines and guardrails. We’re excited to share some of the most highly anticipated innovations for AKS that support the scale and criticality of applications running on it.
To give enterprises more control over their environment, we are announcing long-term support for Kubernetes that will enable customers to stay on the same release for two years—twice as long as what’s possible today. We are also excited to share that starting today, Azure Linux is available as a container host operating system platform optimized for AKS. Additionally, we are now enabling Azure customers to access a vibrant ecosystem of first-party and third-party solutions with easy click-through deployments from Azure Marketplace. Lastly, confidential containers are coming soon to AKS, as a first-party supported offering. Aligned with Kata Confidential Containers, this feature enables teams to run their applications in a way that supports zero-trust operator deployments on AKS.
Azure lets you choose from a range of serverless execution environments to build, deploy, and scale dynamically on Azure without the need to manage infrastructure. Azure Container Apps is a fully managed service that enables microservices and containerized applications to run on a serverless platform. We announced, in preview, several new capabilities that simplify serverless application development. Developers can now run Azure Container Apps jobs on demand, on a schedule, or in response to events, executing ad hoc tasks asynchronously to completion. This new capability lets smaller executables within complex jobs run in parallel, making it easier to run unattended batch jobs right alongside your core business logic. With these advancements to our container and serverless products, we are making it seamless and natural to build intelligent cloud-native apps on Azure.
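As a rough sketch of how such jobs are defined, the Azure CLI exposes a `containerapp job` command group. The resource names and image below are hypothetical placeholders, and the flags reflect the preview-era CLI; check `az containerapp job create --help` for the current surface before relying on them.

```shell
# Hypothetical names (my-job, my-rg, my-env, myregistry.azurecr.io/batch:1.0);
# substitute your own. Assumes the Azure CLI containerapp extension is installed.

# Create a job that runs on demand (manual trigger) to completion.
az containerapp job create \
  --name my-job --resource-group my-rg --environment my-env \
  --image myregistry.azurecr.io/batch:1.0 \
  --trigger-type Manual \
  --replica-timeout 1800 --replica-retry-limit 1

# Start an execution on demand.
az containerapp job start --name my-job --resource-group my-rg

# Or create a scheduled job using a cron expression (every night at 02:00).
az containerapp job create \
  --name nightly-job --resource-group my-rg --environment my-env \
  --image myregistry.azurecr.io/batch:1.0 \
  --trigger-type Schedule \
  --cron-expression "0 2 * * *" \
  --replica-timeout 1800
```

Either way, the container runs to completion and then stops, so you pay only for the execution rather than for an always-on replica.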
Integrated, AI-based tools to help developers thrive
Making it easier to build intelligent, AI-embedded apps on Azure is just one part of the innovation equation. The other, equally important part is about empowering developers to focus more time on strategic, meaningful work, which means less toil on tasks like debugging and infrastructure management. We’re making investments in GitHub Copilot, Microsoft Dev Box, and Azure Deployment Environments to simplify processes and increase developer velocity and scale.
GitHub Copilot is the world’s first at-scale AI developer tool, helping millions of developers code up to 55 percent faster. Today, we announced new Copilot experiences built into Visual Studio, eliminating wasted time when getting started with a new project. We’re also announcing several new capabilities for Microsoft Dev Box, including new starter developer images and deeper integration of Visual Studio in Microsoft Dev Box, which accelerate setup time and improve performance. Lastly, we’re announcing the general availability of Azure Deployment Environments and support for HashiCorp Terraform in addition to Azure Resource Manager.
Enable secure and trusted experiences in the era of AI
When it comes to building, deploying, and running intelligent applications, security cannot be an afterthought—developer-first tooling and workflow integration are critical. We’re investing in new features and capabilities to enable you to implement security earlier in your software development lifecycle, find and fix security issues before code is deployed, and pair with tools to deploy trusted containers to Azure.
We’re pleased to announce GitHub Advanced Security for Azure DevOps, coming soon in preview. This new solution brings the three core features of GitHub Advanced Security to the Azure DevOps platform, so you can integrate automated security checks into your workflow. It includes code scanning powered by CodeQL to detect vulnerabilities, secret scanning to prevent the inclusion of sensitive information in code repositories, and dependency scanning to identify vulnerabilities in open-source dependencies and provide update alerts.
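To make the secret scanning feature concrete, here is a toy illustration of what a secret scanner does at its core: match committed text against patterns for well-known credential formats and report the locations. Real secret scanning, as in GitHub Advanced Security, uses far more patterns plus validity checks; the two patterns and the sample snippet below are simplified examples.

```python
# Toy secret scanner: flag lines matching known credential formats.
# The patterns and sample text are simplified, illustrative examples.
import re

SECRET_PATTERNS = {
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

# The fake key below matches the AWS pattern; the generic password on
# line 1 would need additional patterns that a real scanner includes.
sample = 'password = "hunter2"\naws_key = "AKIAABCDEFGHIJKLMNOP"\n'
findings = scan(sample)
```

The value of running this in the platform, rather than as a toy script, is that findings surface as alerts in the repository before the secret ever reaches production.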
While security is at the top of the list for any developer, using AI responsibly is no less important. For almost seven years, we have invested in a cross-company program to ensure our AI systems are responsible by design. Our work on privacy and the General Data Protection Regulation (GDPR) has taught us that policies aren’t enough; we need tools and engineering systems that help make it easy to build with AI responsibly. We’re pleased to announce new products and features to help organizations improve accuracy, safety, fairness, and explainability across the AI development lifecycle.
Azure AI Content Safety, now in preview, enables developers to build safer online environments by detecting and assigning severity scores to unsafe images and text across languages, helping businesses prioritize what content moderators review. It can also be customized to address an organization’s regulations and policies. As part of Microsoft’s commitment to responsible AI, we’re integrating Azure AI Content Safety across our products, including Azure OpenAI Service and Azure Machine Learning, to help users evaluate and moderate content in prompts and generated content.
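Severity scores become useful when paired with policy thresholds: each category gets a score, and the thresholds decide whether content is allowed, queued for human review, or blocked. The sketch below shows that decision logic; the category names and 0–6 scale echo the service’s model, but the thresholds and scores here are invented examples, and the scoring itself is assumed to come from the service rather than computed locally.

```python
# Sketch of severity-threshold moderation. The scores passed in would come
# from a content safety service; the thresholds are illustrative policy.

def classify(scores: dict[str, int], review_at: int = 2, block_at: int = 4) -> str:
    """Map per-category severity scores (0-6) to a moderation action."""
    worst = max(scores.values())
    if worst >= block_at:
        return "block"
    if worst >= review_at:
        return "human_review"
    return "allow"

# Example: moderate violence score -> route to a human moderator.
decision = classify({"hate": 0, "violence": 3, "self_harm": 0, "sexual": 0})
```

Because the thresholds are plain parameters, an organization can tune them per category to match its own regulations and policies, which is exactly the customization the service is designed to support.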
Additionally, the responsible AI dashboard in Azure Machine Learning now supports text and image data in preview. This means users can more easily identify model errors, understand performance and fairness issues, and provide explanations for a wider range of machine learning model types, including text and image classification and object detection scenarios. In production, users can continue to monitor their model and production data for model and data drift, perform data integrity tests, and make interventions with the help of model monitoring, now in preview.
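The drift monitoring described above boils down to comparing a feature’s production distribution against its training baseline and alerting when the shift exceeds a threshold. Real monitors use richer statistics, but a standardized mean difference is enough to show the idea; the feature values and the alert threshold below are invented for illustration.

```python
# Minimal data-drift check: compare production values for a feature
# against the training-time baseline, in units of the baseline's spread.
import statistics

def drift_score(baseline: list[float], production: list[float]) -> float:
    """Absolute difference in means, in units of the baseline's std dev."""
    spread = statistics.stdev(baseline)
    return abs(statistics.mean(production) - statistics.mean(baseline)) / spread

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]   # feature values at training time
production = [14.0, 15.0, 13.5, 14.5]     # same feature observed in production

drifted = drift_score(baseline, production) > 2.0  # illustrative alert threshold
```

A managed monitor runs checks like this continuously across many features, adds data integrity tests, and raises alerts, so teams can intervene before drift degrades model quality.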
We are committed to helping developers and machine learning engineers apply AI responsibly, through shared learning, resources, and purpose-built tools and systems. To learn more, join us at the Building and using AI models responsibly breakout session and download our Responsible AI Standard.
Let’s write this history, together
AI is a massive shift in computing. Whether it is part of your workflow, part of your cloud development, or powering your next-generation intelligent apps, this community of developers is leading the shift.
We are excited to bring Microsoft Build to you, especially this year as we go deep into the latest AI technologies, connect you with experts from within and outside of Microsoft, and showcase real-world solutions powered by AI.