Azure status history

February 2019

11/2

RCA - Azure Kubernetes Service - East US

Summary of impact: Between 15:00 and 22:05 UTC on 11 Feb 2019, a subset of customers using Azure Kubernetes Service (AKS) in East US may have experienced the following symptoms:

  • Intermittent failures with cluster service management operations - such as create, update, delete, and scaling. 
  • Issues performing workload management operations.
  • Brief cluster downtime.

During this time, cluster etcd state may have been rolled back and restored from the previous backup (AKS makes these backups automatically every 30 minutes).
Additionally, between 15:00 and 17:45 UTC, new clusters could not be created in this region.

Root cause: Engineers determined that a recent deployment of new capacity to the backend services responsible for managing deployments and maintenance of customer clusters encountered an error. Specifically, the state store in the East US region was unintentionally modified during troubleshooting of a capacity expansion, which caused the system to believe that previous deployments of the service did not exist. When engineers ran the automation for adding capacity to the region, it used the modified state store and tried to create additional resources on top of existing ones. This caused the backend configuration and networking infrastructure needed for Kubernetes to function properly to be recreated. As a result, the various reconciliation loops that keep customer clusters in sync became stuck when trying to fetch their configuration, which had changed underneath them. This caused outages for customers, as the backend services were stuck and not reconciling data as needed.
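
For illustration only, the following minimal Python sketch shows the general shape of such a reconciliation loop (the callbacks fetch_desired_config, get_actual_state, and apply_changes are hypothetical placeholders, not AKS internals): each pass reads the desired configuration from the state store and converges the actual state toward it, so a loop that cannot fetch a consistent configuration stops making progress.

    import time

    def reconcile_forever(fetch_desired_config, get_actual_state, apply_changes,
                          interval_s: float = 30.0):
        while True:
            try:
                desired = fetch_desired_config()   # reads the desired state from the backend state store
            except Exception as exc:
                # During the incident, loops were effectively stuck at this step because
                # the stored configuration had been replaced underneath them.
                print(f"reconcile blocked, will retry: {exc}")
                time.sleep(interval_s)
                continue
            actual = get_actual_state()
            if actual != desired:
                apply_changes(desired)             # converge the cluster toward the desired state
            time.sleep(interval_s)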

Mitigation: Engineers removed the unhealthy resources from rotation so that no new customer resources would be created on the affected clusters. Backend data was rolled back to its previous state and the reconciliation loops for all resources were restarted, at which point customer resources began to recover. Due to the number of changes in the backend data and the restarts of the reconciliation loops caused by these changes, several customer resources remained stuck in an incorrect state. Engineers restarted those customer resources, which cleared out stale data and allowed them to recover.

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to):

  • An additional validation step before new capacity is rolled out by our automation tools [COMPLETED]
  • Segmentation of previous capacity expansions in the state store to avoid unintentional modification when adding new capacity [IN PROGRESS]

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey:

8/2

RCA - Azure IoT Hub

Summary of impact: Between 08:16 and 14:12 UTC on 08 Feb 2019, a subset of customers using Azure IoT Hub may have received failure notifications or high latency when performing service management operations via the Azure Portal or other programmatic methods.

Workaround: Retrying the operation might have succeeded.
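
As an illustration of this workaround (a sketch only, not IoT Hub SDK code), the following Python snippet retries a failed management call with exponential backoff and jitter; the operation argument is a hypothetical stand-in for whatever management request was failing:

    import random
    import time

    def retry(operation, attempts: int = 5, base_delay_s: float = 1.0):
        for attempt in range(attempts):
            try:
                return operation()
            except Exception:
                if attempt == attempts - 1:
                    raise                          # give up after the final attempt
                # Exponential backoff with jitter so retries from many clients do not synchronize.
                time.sleep(base_delay_s * (2 ** attempt) + random.uniform(0, 1))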

Root cause and mitigation: Engineers found that one of the SQL queries automatically started using a bad query plan, causing the SQL DB to consume a significantly larger percentage of database transaction units (DTUs). This caused the client-side connection pools to max out on concurrent connections to the SQL DB, so any new connection attempts failed. Operations already in progress also took much longer to finish, causing higher end-to-end latency. The first mitigation step was to increase the connection pool size on the clients. This was followed by scaling up the Azure SQL DB and almost doubling its DTUs. The scale-up caused the bad query plan to be evicted and the good plan to be cached, after which all incoming calls started succeeding.
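
The connection-pool symptom can be pictured with a minimal, purely illustrative Python sketch: when every pooled connection is held by a slow query, new requests cannot acquire a connection and fail quickly, which is why increasing the pool size was the first mitigation step. The pool size and helper names below are assumptions, not the actual client configuration.

    import threading

    POOL_SIZE = 10                                 # assumed pool size, for illustration only
    pool = threading.BoundedSemaphore(POOL_SIZE)

    def run_query(execute_query, timeout_s: float = 2.0):
        # Try to take a pooled connection; when slow queries hold every slot,
        # new requests time out here, matching the connection failures described above.
        if not pool.acquire(timeout=timeout_s):
            raise RuntimeError("connection pool exhausted")
        try:
            return execute_query()                 # a slow query keeps the slot busy for a long time
        finally:
            pool.release()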

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to):

  • Investigate and drop the offending query plan [in progress].
  • Investigate feasibility of auto scale up of SQL DB in use [in progress].
  • Implement administrative command to evict a bad query plan [in progress].
  • Add troubleshooting guides so the on-call personnel are better equipped to deal with this in future and can mitigate this much more quickly [in progress].

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey

1/2

Active Directory - Mitigated

Between 08:05 and 10:00 UTC on 01 Feb 2019, a small subset of users in certain European countries, including France, the Netherlands, Hungary, and the Czech Republic, may have experienced intermittent issues while accessing functionality in the Azure Portal, Azure Active Directory B2C, Azure Active Directory Privileged Identity Management, Managed Service Identity, Azure RBAC, and Microsoft Teams.

The engineering team applied a mitigation, and all impacted services were recovered by 10:00 UTC.

Engineers are continuing to monitor the health of all impacted services to ensure full mitigation.

January 2019

30/1

RCA - Intermittent Network Related Timeouts - West US

Summary of impact: Between 00:53 UTC and 03:33 UTC on 30 Jan 2019, a subset of customers in West US may have experienced degraded performance, network drops or time outs when accessing Azure resources hosted in this region. Applications and resources that retried connections or requests may have succeeded due to multiple redundant network devices and routes available within Azure data centers.

Workaround: Retries were likely to succeed via alternate network devices and routes. Services in other availability zones were unaffected.

Root cause: A network device in the West US region experienced a FIB/linecard problem during a routine firmware update. The device encountered a previously unknown software issue and was unable to fully program all of its linecards, which led to it advertising prefixes that were not reachable. As a result of this invalid programming, the device was unable to forward all of the dataplane traffic it was expected to carry and dropped traffic to some destinations. The Azure network fabric failed to automatically remove the device from service, and the device dropped a subset of the outbound network traffic passing through it. For the duration of the incident, approximately 6% of the available network links in and out of the impacted datacenter were partially impacted.

Mitigation: The impacted device was identified and manually removed from service by Azure engineers, and the network automatically recovered using alternate network devices. The automated process for updating configuration and firmware detected that the device had become unhealthy after the update and stopped updating any additional devices.

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to):

  • Additional monitoring to detect repeats of this device issue (complete)
  • Improvements to the system for validating device health after changes, including validation of FIB/RIB status (in progress)
  • Alerting to detect traffic imbalances on upstream traffic (complete)
  • Process updates to add manual checks for this impacting scenario (complete)

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey: 

29/1

RCA - Network Infrastructure Event

Summary of impact: Between 21:00 UTC on 29 Jan 2019 and 00:10 UTC on 30 Jan 2019, customers may have experienced issues accessing Microsoft Cloud resources, as well as intermittent authentication issues across multiple Azure services affecting the Azure Cloud, US Government Cloud, and German Cloud. This issue was mitigated for the Azure Cloud at 23:05 UTC on 29 Jan 2019. Residual impact to the Azure Government Cloud was mitigated at 00:00 UTC on 30 Jan 2019 and to the German Cloud at 00:10 UTC on 30 Jan 2019. 

Azure AD authentication in the Azure Cloud was impacted between 21:07 - 21:48 UTC, and MFA was impacted between 21:07 - 22:25 UTC.  

Customers using a subset of other Azure services, including the Microsoft Azure portal, Azure Data Lake Store, Azure Data Lake Analytics, Application Insights, Azure Log Analytics, Azure DevOps, Azure Resource Graph, Azure Container Registries, and Azure Machine Learning, may have experienced an intermittent inability to access service resources during the incident. A limited subset of customers using SQL transparent data encryption with bring-your-own-key support may have had their SQL database dropped after Azure Key Vault became unreachable. The SQL service restored all databases.

For customers using Azure Monitor, there was a period of time where alerts, including Service Health alerts, did not fire. Azure internal communications tooling was also affected by the external DNS incident. As a result, we were delayed in publishing communications to the Service health status on the Azure Status Dashboard. Customers may have also been unable to log into their Azure Management Portal to view portal communications.

Root cause and mitigation:

Root cause: An external DNS service provider experienced a global outage after rolling out a software update that exposed a data corruption issue in their primary servers and affected their secondary servers, impacting network traffic. A subset of Azure services, including Azure Active Directory, were leveraging the external DNS provider at the time of the incident and subsequently experienced a downstream impact as a result of the external provider's outage. Azure services that leverage Azure DNS were not impacted; however, customers may have been unable to access these services because of an inability to authenticate through Azure Active Directory. An extremely limited subset of SQL databases using bring-your-own-key support were dropped after losing connectivity to Azure Key Vault: when connectivity to Key Vault is lost, the key is treated as revoked and the SQL database is dropped.

Mitigation: DNS services were failed over to an alternative DNS provider which mitigated the issue. This issue was mitigated for the Azure Cloud at 23:05 UTC on 29 Jan 2019. Residual impact to the Azure Government Cloud was mitigated at 00:00 UTC on 30 Jan 2019 and to the German Cloud at 00:10 UTC on 30 Jan 2019. Authentication requests which occurred prior to the routing changes may have failed if the request was routed using the impacted DNS provider. While Azure Active Directory (AAD) leverages multiple DNS providers, manual intervention was required to route a portion of AAD traffic to a secondary provider.
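
As a rough illustration of this kind of resolver failover (a sketch only, using the third-party dnspython package; the nameserver addresses are documentation placeholders, not the providers involved in this incident):

    import dns.exception
    import dns.resolver                            # third-party package: dnspython (2.x)

    PRIMARY_NAMESERVERS = ["198.51.100.1"]         # placeholder addresses, for illustration only
    SECONDARY_NAMESERVERS = ["203.0.113.1"]

    def resolve_with_failover(name: str):
        for nameservers in (PRIMARY_NAMESERVERS, SECONDARY_NAMESERVERS):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = nameservers
            resolver.lifetime = 3.0                # total time allowed per provider
            try:
                return [rr.to_text() for rr in resolver.resolve(name, "A")]
            except (dns.exception.Timeout, dns.resolver.NoNameservers, dns.resolver.NXDOMAIN):
                continue                           # fail over to the next provider
        raise RuntimeError(f"all DNS providers failed for {name}")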

The external DNS service provider has resolved the root cause issue by addressing the data corruption issue on their primary servers. They have also added filters to catch this class of issue in the future, lowering the risk of reoccurrence.

Azure SQL engineers restored all SQL databases that were dropped as a result of this incident.

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to):

  • Azure Active Directory and other Azure services that currently leverage an external DNS provider are working to ensure that impacted services are on-boarded to our fully redundant, native-to-Azure DNS hosting solution and are exercising best practices around DNS usage. This will help protect against a similar recurrence [in progress].
  • Azure SQL DB has deployed code that will prevent a SQL DB from being dropped when Azure Key Vault is not accessible [complete]
  • Azure SQL DB code is under development to correctly distinguish between Azure Key Vault being unreachable and a key being revoked (a rough sketch of this distinction follows this list) [in progress]
  • Azure SQL DB code is under development to introduce a database-inaccessible state that will move a SQL DB into an inaccessible state instead of dropping it in the event of a key revocation [in progress]
  • Azure Communications tooling infrastructure resiliency improvements have been deployed to production, which will help ensure an expedited failover in the event that internal communications tooling infrastructure experiences a service interruption [complete]
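
As a rough illustration of the unreachable-versus-revoked distinction described above (a hypothetical Python sketch against a Key Vault style HTTPS endpoint; the URL format, status-code handling, and helper names are assumptions, not the Azure SDK or the SQL service's actual logic):

    import requests

    def classify_key_access(key_url: str, access_token: str) -> str:
        """Return 'unreachable', 'revoked', or 'accessible' for a key (illustrative only)."""
        try:
            response = requests.get(
                key_url,                           # assumed key-vault key URL, for illustration
                headers={"Authorization": f"Bearer {access_token}"},
                timeout=5,
            )
        except (requests.ConnectionError, requests.Timeout):
            return "unreachable"   # transient network/service failure: keep the database online and retry
        if response.status_code in (401, 403):
            return "revoked"       # explicit denial: mark the database inaccessible rather than dropping it
        response.raise_for_status()
        return "accessible"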

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey

10/1

RCA - Storage and Dependent Services - UK South

Summary of impact: Between 13:19 UTC on 10 Jan 2019 and approximately 05:30 UTC on 11 Jan 2019, a subset of customers leveraging Storage in UK South may have experienced intermittent service availability issues. Other resources with dependencies on the affected Storage scale unit may have also experienced impact related to this event. 

Root cause and mitigation: At 13:15 UTC on 10 Jan 2019, a backend storage node crashed because of a hardware error. Over the next several minutes, the system attempted to rebalance this load to other servers. This triggered a rare race condition error in the process that served Azure Table traffic. The system then attempted to heal by shifting the Azure Table load to other servers, but the additional load on those servers caused a few more Azure Table servers to hit the rare race condition error. This in turn led to more automatic load balancing and more occurrences of the race condition. By 13:19 UTC, most of the nodes serving Azure Table traffic in the scale unit were recovering from the bug, and the additional load on the remaining servers began to impact other services, including disk, blob, and queue traffic.

Unfortunately, in this incident, the unique nature of the problem, combined with the number of subsystems affected and the interactions between them, led to a longer-than-usual time to root cause and mitigate.

To recover the scale unit, the Azure team undertook a sequence of structured mitigation steps, including:

  • Removing the faulty hardware node from the impacted scale unit
  • Performing a configuration change to mitigate a software error
  • Reducing internal background processes in the scale unit to reduce load
  • Throttling and offloading traffic to allow the scale unit to gradually recover

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to): 

  • Fix for the race condition, which is already being deployed across all storage scale units (in progress)
  • Improve existing throttling mechanism to detect impact of load transfer on other nodes and rebalance to ensure overall stamp health (in progress)

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey: 

8/1

Azure DevTest Labs - Virtual Machine Error Message

Summary of impact: Between 16:30 UTC on 08 Jan 2019 and 00:52 UTC on 10 Jan 2019, a subset of customers using Azure DevTest Labs may have experienced issues creating Virtual Machines from Formulas in their labs. Impacted customers may have also seen incorrect error messages when connecting to Virtual Machines.

Root cause: Engineers determined that an issue within a recent Azure Portal UI deployment task caused impact to customer resource error messaging and operations. 

Mitigation: Engineers deployed a platform hotfix to mitigate the issue for the majority of impacted customers. Remaining impacted customers will continue to receive communication updates via their Azure Management Portal.

4/1

Log Analytics - East US

Summary of impact: Between 09:30 and 18:00 UTC on 04 Jan 2019, a subset of customers using Log Analytics in East US may have received intermittent failure notifications and/or experienced latency when attempting to ingest and/or access data. Tiles and blades may have failed to load and display data. Customers may have experienced issues creating queries, log alerts, or metric alerts on logs. Additionally, customers may have experienced false positive alerts or missed alerts.

Preliminary root cause: Engineers determined that a core backend Log Analytics service responsible for processing customer data became unhealthy when a database on which it is dependent became unresponsive. Initial investigation shows that this database experienced unexpectedly high CPU utilization.

Mitigation: Engineers made a configuration change to this database which allowed it to scale automatically. The database is now responsive and the core backend Log Analytics service is healthy.

Next steps: Engineers will continue to investigate the reason for the high CPU utilization to establish the full root cause. Looking to stay informed on service health? Set up custom alerts here:

December 2018

14/12

RCA - Networking - UK West

Summary of impact: Between 03:30 and 12:25 UTC on 14 Dec 2018, a subset of customers with resources in UK West may have intermittently experienced degraded performance, latency, network drops or time outs when accessing Azure resources hosted in this region. Customers with resources in UK West attempting to access resources outside of the region may have also experienced similar symptoms.

Root cause: Engineers determined that an external networking circuit experienced a hardware failure impacting a single fiber in this region. A single card on the optical system that services this fiber path had developed a fault, and this reduced the available network capacity, thus causing network issues for a subset of customers.

Mitigation: Azure Engineers initially re-directed internal Azure traffic to free up capacity for customer traffic, and this mitigated the issues being experienced by customers. Engineers subsequently performed a full hardware replacement which restored full connectivity across the fiber circuit, thus completely mitigating the issue. 

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to):

  • Augmenting critical networking paths to UK West with 400Gb of additional capacity. 50% of this work is already complete, and the remaining work is scheduled to be completed by the end of February 2019.
  • Updating our monitoring to ensure we are able to respond quicker to external networking issues, and thus reduce the impact time for customers.

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey

12/12

Azure Analysis Services - West Central US

Summary of impact: Between 21:55 UTC on 12 Dec 2018 and 09:55 UTC on 13 Dec 2018, a subset of customers using Azure Analysis Services in West Central US may have experienced issues accessing existing servers, provisioning new servers, resuming new servers, or performing SKU changes for active servers.

Preliminary root cause: Engineers determined that a recent update deployment task impacted a back-end Service Fabric instance which became unhealthy. This prevented requests to Azure Analysis servers from completing.

Mitigation: Engineers rolled back the recent deployment task to mitigate the issue.

Next steps: Engineers will review deployment procedures to prevent future occurrences. To stay informed on any issues, maintenance events, or advisories, create service health alerts () and you will be notified via your preferred communication channel(s): email, SMS, webhook, etc.

12/12

Log Analytics - East US

Summary of impact: Between 08:00 and 15:50 UTC on 12 Dec 2018, customers using Log Analytics in East US may have experienced delays in metrics data ingestion.

Preliminary root cause: Engineers determined that several service dependent web roles responsible for processing data became unhealthy, causing a data ingestion backlog.

Mitigation: Engineers manually rerouted data traffic to backup roles to mitigate the issue. 

Next steps: Engineers will continue to investigate to establish the full root cause for why the service instances became unhealthy. To stay informed on any issues, maintenance events, or advisories, create service health alerts () and you will be notified via your preferred communication channel(s): email, SMS, webhook, etc.

4/12

RCA - Networking - South Central US

Summary of impact: Between 02:48 UTC and 04:52 UTC on 04 Dec 2018, a subset of customers in South Central US may have experienced degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Applications and resources that retried connections or requests may have succeeded due to multiple redundant network devices and routes available within Azure datacenters.

Root cause: Two network devices in the South Central US region received an incorrect configuration during an automated process to update their configuration and firmware. As a result of the incorrect configuration, these routers were unable to hold all of the forwarding information they were expected to carry and dropped traffic to some destinations. The Azure network fabric failed to automatically remove these devices from service and continued to drop a small percentage of the network traffic passed through these devices.

For the duration of the incident, approximately 7% of the available network links in and out of the impacted datacenter were partially impacted.

The configuration deployed to the network devices caused a problem as it contained one setting incompatible with the devices in the South Central US Region. The deployment process failed to detect that the setting should not be applied to the devices in that region.
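
A pre-deployment validation step of the kind proposed in the next steps below could look roughly like the following Python sketch; the region name and setting keys are hypothetical examples rather than the actual configuration involved:

    # Hypothetical per-region deny-list of settings known to be incompatible with the
    # devices deployed there; the names below are illustrative, not real configuration keys.
    UNSUPPORTED_SETTINGS = {
        "south-central-us": {"example-incompatible-setting"},
    }

    def validate_device_config(region: str, candidate_config: dict) -> list:
        """Return a list of validation errors; an empty list means the config may be deployed."""
        errors = []
        for setting in candidate_config:
            if setting in UNSUPPORTED_SETTINGS.get(region, set()):
                errors.append(f"setting '{setting}' is not supported by devices in {region}")
        return errors

    # Example: a candidate config containing the incompatible setting is rejected before rollout.
    assert validate_device_config("south-central-us", {"example-incompatible-setting": True})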

Mitigation: The impacted devices were identified and manually removed from service by Azure engineers, and the network automatically recovered using alternate network devices. The automated process for updating configuration and firmware detected that the devices had become unhealthy after the update and stopped updating any additional devices.

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to):

  • Additional monitoring to detect repeats of this issue (complete)
  • Improvement in the system for validating the gold configuration file generated for each type of network device (in planning)
  • Determine whether this class of conditions can be safely mitigated by automated configuration rollback, and if so add rollback to the error handling used by the configuration and firmware update service (in planning)

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey

November 2018

29/11

Azure Data Lake Store/Data Lake Analytics - North Europe

Summary of impact: Between 08:00 and 13:39 UTC on 29 Nov 2018, a subset of customers using Azure Data Lake Store and/or Data Lake Analytics may have experienced difficulties accessing resources hosted in this region. Data ingress or egress operations may have also timed-out or failed. Azure Data Lake Analytics customers may have experienced job failures. In addition, customers using other services with dependencies upon Azure Data Lake Store resources in this region - such as Databricks, Data Catalog, HDInsight and Data Factory - may also have experienced downstream impact. 

Preliminary root cause: A scheduled power maintenance in a datacenter in North Europe resulted in a small number of hardware instances becoming unhealthy. As a result, Data Lake Store resources hosted on the impacted hardware became temporarily unavailable to customers.

Mitigation: The power maintenance task was cancelled and the impacted hardware was restarted. This returned the affected services hosted on this hardware to a healthy state, mitigating the impact for customers.

Next steps: Engineers will continue to investigate to establish the full root cause of why the power maintenance task failed and prevent future occurrences.

28/11

Storage - West US 2

Summary of impact: Between 04:20 and 12:30 UTC on 28 Nov 2018, a subset of Storage customers in West US 2 may have experienced difficulties connecting to resources hosted in this region. Customers using resources dependent on Storage may have also seen impact.

Preliminary root cause: Engineers determined that a recent deployment task introduced an incorrect backend authentication setting. As a result, some resources attempting to connect to storage endpoints in the region experienced failures.

Mitigation: Engineers developed an update designed to refresh the incorrect authentication setting. A staged deployment of this update was then applied to the impacted storage nodes to mitigate the issue. In a small number of cases, where application of the fix was not possible, impacted nodes were removed from active rotation for further examination.

Next steps: Engineers will review deployment procedures to prevent future occurrences and a full root cause analysis will be completed. To stay informed on any issues, maintenance events, or advisories, create service health alerts () and you will be notified via your preferred communication channel(s): email, SMS, webhook, etc.

27/11

RCA - Multi-Factor Authentication

Summary of impact:
At 14:20 UTC on 27 Nov 2018, the Azure Multi-Factor Authentication (MFA) service experienced an outage that impacted cloud-based MFA customers worldwide. Service was partially restored at 14:40 UTC and fully restored at 17:39 UTC. Sign-in scenarios requiring MFA via SMS, voice, phone app, or OATH tokens were blocked during that time. Password-based and Windows Hello-based logins were not impacted, nor were valid, unexpired MFA sessions.

From 14:20 to 14:40 UTC, any user required to perform MFA using cloud-based Azure MFA was unable to complete the MFA process and so could not sign in. These users were shown a browser page or client app message that contained an error ID.

From 14:40 to 17:39 UTC, the problem was resolved for some users but continued for the majority. These users continued to experience difficulties authenticating via MFA: the user would appear to begin the MFA authentication but never receive a code from the service.

This affected all Azure MFA methods (e.g. SMS, Phone, or Authenticator) and occurred whether MFA was triggered by Conditional Access policy or per-user MFA. Conditional access policies not requiring MFA were not impacted. Windows Hello authentication was not impacted.

Root cause and mitigation:
This outage was caused by a Domain Name System (DNS) failure, which made the MFA service temporarily undiscoverable, and a subsequent traffic surge resulting from the restoration of DNS service. Microsoft detected the DNS outage when it began at 14:20 UTC, as monitoring alerts notified engineers.
We sincerely apologize to our customers whose business depends on Azure Multi-Factor Authentication and were impacted by these two recent MFA incidents. Immediate and medium-term remediation steps are being taken to improve performance and significantly reduce the likelihood of future occurrence to customers across Azure, O365 and Dynamics.

As described above, there were two stages to the outage, related but with separate root causes.

  • The first root cause was an operational error that caused an entry to expire in the DNS system used internally by the MFA service. This expiration occurred at 14:20 UTC and in turn caused our MFA front-end servers to be unable to communicate with the MFA back-end.
  • Once the DNS outage was resolved at 14:40 UTC, the traffic patterns that had built up from the aforementioned issue caused contention and exhaustion of a resource in the MFA back-end that took an extended time to identify and mitigate. This second root cause was a previously unknown bug in the same component as the MFA incident that occurred on 19 Nov 2018. This bug caused the servers to freeze as they processed the backlogged traffic (an illustrative throttling sketch follows this list).
  • To prevent this bug from causing servers to freeze while a sustainable mitigation was being applied, engineers recycled servers.
  • Engineering teams continued to add capacity to the MFA service to help alleviate the backlog.
  • Draining the resultant traffic back to normal levels took until 17:39 UTC, at which point the incident was mitigated.
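
For illustration, the following minimal Python token-bucket limiter shows the kind of throttling a traffic-management system might apply to smooth such a post-outage surge, shedding excess backlogged requests rather than letting servers freeze under the load (a sketch only, not the MFA service implementation):

    import time

    class TokenBucket:
        """Admit at most rate_per_s requests per second, with bursts up to burst."""

        def __init__(self, rate_per_s: float, burst: int):
            self.rate = rate_per_s
            self.capacity = float(burst)
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False        # shed or queue the request instead of overloading the back-end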


Next steps:
We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to):

  • Implementing the code fix to the MFA backend service to stop servers from freezing when processing a high rate of backlogged requests. [COMPLETE]
  • During the previous incident we increased capacity in the Europe region; capacity is now being scaled out to all regions for MFA. [COMPLETE]
  • Deploy an improved throttling management system to manage traffic spikes. [COMPLETE]
  • All DNS changes will be moved to an automated system. [IN PROGRESS]


Provide feedback:
Please help us improve the Azure customer communications experience by taking our survey -

26/11

RCA - Virtual Network - UK South and UK West

Summary of impact: Between 11:19 and 16:56 UTC on 26 Nov 2018, a subset of customers using Virtual Networks in UK South and UK West may have experienced difficulties connecting to resources hosted in these regions. Some customers using Global VNET peering or replication between these regions may have experienced latency or connectivity issues. This issue may have also impacted connections to other Azure services.

Root cause and mitigation: A 16-minute hardware failure on a WAN router caused a large amount of traffic to fail over from the primary path out of the region to the backup path, causing increased congestion on the backup path. Once the hardware failure was resolved, a firmware issue with our WAN traffic engineering solution prevented traffic from returning to the primary path, causing the congestion to persist. Engineers manually rerouted network traffic back to the primary path to reduce the congestion and mitigate the issue.

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future. In this case, this includes (but is not limited to):

  • Fix the firmware issue that prevented traffic from rerouting back to the primary path (completed)
  • Create alerts to detect similar issues with our traffic engineering solution in the future (completed)
  • Assess possible design changes to the WAN traffic engineering solution to maximize performance in extreme congestion scenarios (in progress)
  • Create alerts and a monitoring process to detect large traffic flows that can be throttled to reduce congestion in this type of situation (completed)
  • Increase capacity in the region to avoid the potential for congestion (completed)

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey