The most critical promise of our identity services is ensuring that every user can access the apps and services they need without interruption. We’ve been strengthening this promise through a multi-layered approach, culminating in our improved commitment of 99.99 percent authentication uptime for Azure Active Directory (Azure AD).
The exponential growth of datasets has resulted in growing scrutiny of how data is exposed, from both a consumer privacy and a regulatory compliance perspective. In this context, confidential computing becomes an important tool to help organizations meet their privacy and security needs surrounding business and consumer data.
Microsoft’s cloud supply chain is essential to deliver the infrastructure—servers, storage, and networking gear—that enables cloud reliability and growth. Our vision is for cloud capacity to be available like a utility so that customers can seamlessly turn it on when and where they need it.
Now, in addition to getting a fast notification when a VM’s availability is impacted, customers can expect a root cause to be added later, once our automated Root Cause Analysis (RCA) system identifies the failing Azure platform component that led to the VM failure.
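The two-phase flow described above can be sketched as follows. This is an illustrative model only, assuming a simple event record: the class, field names, and the `attach_root_cause` helper are hypothetical and not part of any Azure API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VmHealthEvent:
    """Hypothetical record for a VM availability event."""
    vm_id: str
    state: str                        # e.g. "Unavailable"
    root_cause: Optional[str] = None  # attached later by automated RCA

    def attach_root_cause(self, component: str) -> None:
        # Called once the RCA system attributes the failure to a
        # specific platform component.
        self.root_cause = f"Failure attributed to: {component}"

# Phase 1: fast notification is emitted with no root cause yet.
event = VmHealthEvent(vm_id="vm-001", state="Unavailable")
assert event.root_cause is None

# Phase 2: automated analysis later fills in the attribution.
event.attach_root_cause("host networking agent")
print(event.root_cause)  # Failure attributed to: host networking agent
```

The point of the split is latency: the availability signal goes out immediately, while the (slower) attribution step enriches the same event record rather than delaying the notification.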
We created the Azure Well-Architected Framework to help improve the quality of your workloads, and reliability is one of its five core pillars. For the latest post in our series, I asked Cloud Advocate David Blank-Edelman to run through how best to use the framework to guide your conversations and design decisions in this space.
All service engineering teams in Azure are already familiar with postmortems as a tool for better understanding what went wrong, how it went wrong, and the customer impact of the related outage. For today’s post in our Advancing Reliability blog series, we share insights into our journey as we work towards advancing our postmortem and resiliency threat modeling processes.
The continuous monitoring of health metrics is a fundamental part of this process, and this is where AIOps plays a critical role. In the post that follows, we introduce how AI and machine learning are used to empower DevOps engineers, monitor the Azure deployment process at scale, detect issues early, and make rollout or rollback decisions based on impact scope and severity.
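One way to picture a rollout-or-rollback decision based on impact scope and severity is the toy rule below. This is a minimal sketch, not Azure's actual system: the function name, thresholds, and the idea of per-instance anomaly scores from an upstream ML health model are all assumptions made for illustration.

```python
def rollout_decision(anomaly_scores, scope_threshold=0.05, severity_threshold=0.9):
    """Return 'rollback', 'pause', or 'continue' for one deployment slice.

    anomaly_scores: per-instance health anomaly scores in [0, 1],
    assumed to come from an upstream monitoring model.
    """
    if not anomaly_scores:
        return "continue"
    # Severity: instances whose score crosses the severe threshold.
    impacted = [s for s in anomaly_scores if s >= severity_threshold]
    # Scope: fraction of the slice showing severe anomalies.
    scope = len(impacted) / len(anomaly_scores)
    if scope >= scope_threshold:
        return "rollback"   # broad, severe impact: stop and revert
    if impacted:
        return "pause"      # isolated severe signal: hold the rollout
    return "continue"       # healthy slice: proceed to the next stage

print(rollout_decision([0.1, 0.95, 0.2, 0.97, 0.1]))  # rollback
print(rollout_decision([0.1] * 99 + [0.95]))          # pause
print(rollout_decision([0.1, 0.2]))                   # continue
```

Separating scope from severity matters: one badly failing instance should pause a rollout for investigation, while widespread severe impact should trigger an automatic rollback.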
There are many factors that can affect critical environment infrastructure availability—the reliability of the infrastructure building blocks, the controls during the datacenter construction stage, effective health monitoring and event detection schemes, a robust maintenance program, and operational excellence to ensure that every action is taken with careful consideration of related risk implications.
If you ask three people what a service is, you may get three different answers. At Microsoft, we define a service (business process or technology) as a means of delivering value to customers (first or third party) by facilitating outcomes customers want to achieve.
At Microsoft Ignite, I presented the “Inside Azure Datacenter Architecture” session to give a tour of the latest innovations around how Azure enables intelligent, modern, and innovative applications at scale in the cloud, on-premises, and on the edge.