This year’s Microsoft Ignite brings us together to experience AI transformation in action. AI is driving a new wave of innovation, rapidly changing what applications look like, how they’re designed and built, and how they’re delivered. At the same time, business leaders continue to face challenges, juggling competing priorities to offset rising costs, meet sustainability goals, and navigate economic uncertainty. Today’s customers are looking for AI solutions that meet all of these needs.
At Ignite, we’re announcing innovation in Microsoft Azure that is powering more AI capabilities for our customers and helping enterprises with their cloud management and operations. We’re committed to bringing your AI ambitions to production and meeting you where you are. Whether you choose to build hybrid, cloud-native, or open source solutions, we’re rapidly expanding our infrastructure and adding intuitive tools to help you take your ideas to production safely and responsibly in this new era of AI.
With Azure, you can trust that you are on a secure and well-managed foundation to utilize the latest advancements in AI and cloud-native services. Azure is adaptive and purpose-built for all your workloads, helping you seamlessly unify and manage all your infrastructure, data, analytics, and AI solutions.
Powering groundbreaking AI solutions
The era of AI has largely been shaped by exponential growth in the size and sophistication of large language models like OpenAI’s GPT, and by groundbreaking generative AI services like Bing Chat Enterprise and Microsoft Copilot used by millions of people globally. Azure’s leadership in optimizing cloud infrastructure for AI workloads is pioneering this innovation and is why customers like OpenAI, Inflection, and Adept choose Azure to build and run their AI solutions.
In this new era of AI, we are redefining cloud infrastructure, from silicon to systems, to prepare for AI in every business, in every app, for everyone. At Ignite, we’re introducing our first custom AI accelerator series, Azure Maia, designed to run cloud-based training and inferencing for AI workloads such as OpenAI models, Bing, GitHub Copilot, and ChatGPT. Maia 100 is the first generation in the series, with 105 billion transistors, making it one of the largest chips built on 5-nanometer process technology. The innovations for Maia 100 span silicon, software, networking, racks, and cooling. This equips the Azure AI infrastructure with end-to-end systems optimization tailored to meet the needs of groundbreaking AI such as GPT.
Chips designed for the era of AI
Expand your innovation with our first-generation Azure Cobalt and Azure Maia chips.
Alongside Maia 100, we’re introducing our first custom in-house central processing unit series, Azure Cobalt, built on Arm architecture for optimal performance-per-watt efficiency and powering common cloud workloads for the Microsoft Cloud. From in-house silicon to systems, Microsoft now optimizes and innovates at every layer of the infrastructure stack. Cobalt 100, the first generation in the series, is a 64-bit, 128-core chip that delivers up to 40 percent better performance than current generations of Azure Arm chips and is already powering services such as Microsoft Teams and Azure SQL.
Networking innovation runs across our first-generation Maia 100 and Cobalt 100 chips. From hollow core fiber technology to the general availability of Azure Boost, we’re enabling faster networking and storage solutions in the cloud. You can now achieve up to 12.5 GB/s of throughput and 650K input/output operations per second (IOPS) in remote storage performance to run data-intensive workloads, and up to 200 Gbps of networking bandwidth for network-intensive workloads.
We continue to build our AI infrastructure in close collaboration with silicon providers and industry leaders, incorporating the latest innovations in software, power, models, and silicon. Azure works closely with NVIDIA to provide NVIDIA H100 Tensor Core graphics processing unit (GPU)-based virtual machines (VMs) for mid- to large-scale AI workloads, including Azure Confidential VMs. On top of that, we are adding the latest NVIDIA H200 Tensor Core GPU to our fleet next year to support larger model inferencing with no increase in latency.
As we expand our partnership with AMD, customers can access AI-optimized VMs powered by AMD’s new MI300 accelerator early next year. This demonstrates our commitment to adding optionality for customers in price, performance, and power for all of their unique business needs.
These investments have allowed Azure to pioneer performance for AI supercomputing in the cloud and have consistently ranked Azure as the number one cloud provider on the TOP500 list of the world’s supercomputers. These additions to the Azure infrastructure hardware portfolio enable us to deliver the best performance and efficiency across all workloads.
Being adaptive and purpose-built for your workloads
We’ve heard about your challenges in migrating workloads to the public cloud, especially mission-critical workloads. We continue to work with the technology vendors you’ve relied on to run those workloads, such as SAP, VMware, NetApp, Red Hat, Citrix, and Oracle, to ensure Azure supports your needs. We’re excited about our recent partnership to bring Oracle database services into Azure to help keep your business efficient and resilient.
At Ignite, we’re announcing the general availability of Oracle Database@Azure in the US East Azure region as of December 2023. Customers will have direct access to Oracle database services running on Oracle Cloud Infrastructure (OCI) deployed in Azure data centers. The new service delivers all the performance, scale, and workload availability advantages of Oracle Exadata Database Service on OCI, combined with the security, flexibility, and best-in-class services of Azure. Microsoft is the only other hyperscaler to offer OCI database services, simplifying cloud migration, multicloud deployment, and management.
As we’ve observed through our interactions with customers, the durable state of the cloud is evolving to one where workloads must be supported wherever they’re needed. We realize that cloud migration is not a one-size-fits-all approach, and that’s why we’re committed to meeting you where you are on your cloud journey. An adaptive cloud enables you to thrive in dynamic environments by unifying siloed teams, distributed sites, and sprawling systems into a single operations, application, and data model in Azure.
Our vision for adaptive cloud builds on the work we’ve already started through Azure Arc. With Azure Arc, customers can project their on-premises, edge, and multicloud resources to Azure, deploy Azure native services on those resources, and extend Azure services to the edge.
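To make that model concrete, the sketch below shows one way an Arc-connected server fleet could be enumerated programmatically once it has been projected into Azure. This is a minimal illustration rather than part of today’s announcements, and it assumes the Python azure-identity and azure-mgmt-hybridcompute packages along with placeholder subscription and resource group values.

```python
# Minimal sketch: list on-premises or multicloud servers projected into Azure via Azure Arc.
# Assumptions: azure-identity and azure-mgmt-hybridcompute are installed, and the
# subscription ID and resource group below are placeholders to replace with your own.
from azure.identity import DefaultAzureCredential
from azure.mgmt.hybridcompute import HybridComputeManagementClient

credential = DefaultAzureCredential()  # resolves Azure CLI, managed identity, or environment credentials
client = HybridComputeManagementClient(credential, subscription_id="<subscription-id>")

# Arc-enabled machines appear as ordinary Azure resources in a resource group,
# so they can be listed, tagged, and governed like any other Azure resource.
for machine in client.machines.list_by_resource_group("<arc-resource-group>"):
    print(machine.name, machine.location)
```

Because each connected machine is represented as a standard Azure resource, the same pattern extends to applying tags, Azure Policy, or role-based access control across on-premises, edge, and multicloud estates.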
We’re excited to make some new announcements that will help customers implement their adaptive cloud strategies. For VMware customers, we’re announcing the general availability of Azure Arc-enabled VMware vSphere. Azure Arc brings Azure and VMware vSphere infrastructure together, enabling VM administrators to empower their developers to use Azure technologies with existing server-based workloads and new Kubernetes workloads, all from Azure. Additionally, we’re delighted to share the preview of Azure IoT Operations, enabled by Azure Arc. With Azure IoT Operations, customers can greatly reduce the complexity and time it takes to build end-to-end solutions, empowering them to make near-real-time decisions backed by AI-driven insights and to run agile, resilient, and sustainable operations with both Microsoft and partner technologies.
Amplifying your impact with AI-enhanced operations
Every day, cloud administrators and IT professionals are being asked to do more. We consistently hear from customers that they’re tasked with a wider range of operations: collaborating with and managing more users, supporting more complex needs to meet increasing customer demand, and integrating more workloads into their cloud environments.
That’s why we’re excited to introduce the public preview of Microsoft Copilot for Azure, a new solution built into Azure that helps simplify how you design, operate, and troubleshoot apps and infrastructure from cloud to edge. Learn how to apply for access to Microsoft Copilot for Azure to see how this new AI companion can help you generate deep insights instantly, discover new cloud functionality, and complete complex tasks faster.
Enabling limitless innovation in the era of AI
Delivering on the promise of advanced AI for our customers requires high-performance computing infrastructure, services, and expertise that can only be addressed with the scale and agility of the Microsoft Cloud. Our unique equipment and system designs help us and customers like you meet the challenges of an ever-changing technological landscape. From extending the lifecycle of our hardware and running efficient supply chain operations to providing purpose-built infrastructure in this new era of AI, we ensure we’re always here to bring your ideas to life safely and responsibly.
Learn more about the benefits of Azure infrastructure capabilities at Ignite
Attend these sessions at Ignite to learn more:
- Do more with Windows Server and SQL Server on Azure
- Simplifying cloud operations with Microsoft Copilot for Azure
- Unlock AI innovation with Azure AI infrastructure
Check out these resources to help you get started:
- Learn more about Azure Migrate and Modernize and Azure Innovate and how they can help you from migration to AI innovation.
- Check out the new and free Azure Migrate application and code assessment feature to save on application migrations.
- Find out how to take your AI ambitions from ideation to reality with Azure.
- Explore what’s next at Ignite.