SUMMARY OF IMPACT: Between 00:10 and 02:00 UTC on 03 May 2016, customers using Visual Studio Team Services in North Central US may have experienced issues when trying to log in to their Visual Studio Team Services account. PRELIMINARY ROOT CAUSE: Engineers identified a caching issue during a planned maintenance activity. MITIGATION: Engineers recycled web and worker roles to clear the cache. NEXT STEPS: Engineers will continue to investigate the underlying root cause and develop steps to prevent future occurrences.
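For illustration only (not from the incident report): when in-process cache entries survive a maintenance operation but the data behind them changes, lookups keep returning stale results until the hosting process is recycled. The sketch below models that failure mode with a hypothetical TTL cache; all names are invented.

```python
import time

class TTLCache:
    """Minimal in-process cache; entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired: force a fresh lookup
            return None
        return value  # may be stale if the backend changed mid-TTL

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def clear(self):
        # Recycling a web/worker role has the same effect as this call:
        # the in-memory cache is dropped and rebuilt from live data.
        self._store.clear()
```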
SUMMARY OF IMPACT: Between 23:23 and 23:46 UTC on 02 May 2016, customers using Visual Studio Team Services in West Europe may have experienced issues when trying to log in to their Visual Studio Team Services account. PRELIMINARY ROOT CAUSE: At this stage we do not have a definitive root cause. MITIGATION: Our systems self-healed and returned to a healthy state. NEXT STEPS: Engineers will continue to investigate the underlying root cause and develop steps to prevent future occurrences.
Between 19:00 and 21:35 UTC on 28 Apr 2016, a subset of customers using Storage, Virtual Machines, Web Apps, Remote App, or Azure Search in West Europe may have experienced a service interruption due to an underlying Storage issue. We have confirmed that normal service availability has been restored. More information is posted under "Root Cause: Underlying Storage Issue In West Europe Impacting Virtual Machines and Web Apps in the Region" on https://azure.microsoft.com/en-us/status/#history
SUMMARY OF IMPACT: Between 19:00 and 21:35 UTC on 28 Apr 2016, a subset of customers with services deployed in West Europe might have experienced issues accessing their resources in the region. An underlying Storage issue in West Europe resulted in subsequent impact to Virtual Machines, Web Apps, Remote App, and Azure Search in the region. While the majority of customers’ VMs have recovered, engineering has identified a very limited subset of customers that may be experiencing residual impact. These customers will receive further communications through their Management Portal (https://portal.azure.com). PRELIMINARY ROOT CAUSE: Engineers have identified a Storage software error as the preliminary root cause. MITIGATION: Engineers have deployed a hotfix and have confirmed that the issue is mitigated. NEXT STEPS: Engineering will continue to scan other Storage scale units to ensure that no other scale units contain the software error. Additionally, they will work to understand the root cause of the issue and will investigate further.
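Not part of the incident report: during transient Storage interruptions like this one, client-side retries with exponential backoff and jitter are the usual way to ride out short windows of unavailability. A minimal sketch in Python, assuming a hypothetical `read_blob` operation standing in for any storage call:

```python
import random
import time

def read_blob(container: str, name: str) -> bytes:
    """Hypothetical storage call; stands in for any operation that can
    fail transiently during a Storage incident."""
    raise NotImplementedError

def with_backoff(operation, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry `operation` with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error
            # Sleep between 0 and base_delay * 2^attempt seconds.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

# Usage: data = with_backoff(lambda: read_blob("logs", "2016-04-28.txt"))
```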
SUMMARY OF IMPACT: Between 17:03 and 21:15 UTC on 27 Apr 2016, customers using Visual Studio Team Services \ Load Testing in South India may have received an "Insecure Connection" error when attempting to access Cloud Load Testing services from their VSTS portal, or were unable to access Load Testing services from Visual Studio. PRELIMINARY ROOT CAUSE: Engineers identified a missing CNAME record in DNS as the cause of the issue. MITIGATION: Engineers restored the missing CNAME record to mitigate the issue. NEXT STEPS: Continue to investigate the cause of the missing CNAME record and take steps to prevent recurrences. Further information can be found here: http://aka.ms/vsoblog
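Not from the report itself: a missing CNAME means the service's friendly hostname no longer resolves to the endpoint holding a matching TLS certificate, which is why clients surfaced an "Insecure Connection" error. One quick way to verify that a CNAME record exists is a lookup with the dnspython library (the hostname below is a placeholder):

```python
import dns.resolver  # pip install dnspython (>= 2.0 for resolve())

def check_cname(hostname: str) -> None:
    """Print the CNAME target(s) for hostname, or report that none exist."""
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
        for rdata in answers:
            print(f"{hostname} -> {rdata.target}")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"no CNAME record found for {hostname}")

# Placeholder name; substitute the endpoint you need to verify.
check_cname("example.visualstudio.com")
```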
SUMMARY OF IMPACT: From 11:08 to 18:00 UTC on 27 Apr 2016, a subset of customers using Data Factory in West US may have experienced errors when attempting to browse existing data factories in their Azure management portal (http://portal.azure.com). Customers may have also experienced intermittent delays in slice execution or failures in pipeline creation. PRELIMINARY ROOT CAUSE: A deployment in backend systems caused Virtual Machines to restart as designed. These restarts were the cause of the aforementioned issues with the Data Factory service. MITIGATION: Engineers allowed the designed restarts to complete and worked to manually unblock failed services to allow automatic recovery of the system. NEXT STEPS: Engineering will continue to investigate why these expected restarts caused services to enter an unhealthy state and will work to prevent future recurrences of the issue.
SUMMARY OF IMPACT: Between 02:18 and 06:20 UTC on 23 Apr 2016, customers using Data Lake Store and Data Lake Analytics in East US 2 may have experienced service management operation failures (create, rename, update, and delete). Interactions with the service through the Azure Portal, APIs, PowerShell, and Visual Studio may have also failed. PRELIMINARY ROOT CAUSE: Engineers identified an underlying microservice failure as the root cause. MITIGATION: Engineers were able to recover the affected microservice to mitigate the issue. NEXT STEPS: Engineers will continue to monitor the service and implement steps to prevent future occurrences.
SUMMARY OF IMPACT: Between 23:24 UTC on 22 Apr 2016 and 01:00 UTC on 23 Apr 2016, a subset of customers using Azure Active Directory B2C in multiple regions may have been unable to sign up for or sign in to Azure Active Directory B2C applications, and may have seen application errors. During the incident, B2C tenant administrators were also unable to administer B2C policies and applications through the Management Portal. PRELIMINARY ROOT CAUSE: Engineers identified a configuration error that impacted consumer and business applications. MITIGATION: Engineers issued a configuration change and a deployment to bring systems back to a healthy state and monitored for platform stability. NEXT STEPS: Investigate the underlying root cause and implement mitigation steps to prevent future occurrences.
SUMMARY OF IMPACT: Between 17:15 and 18:10 UTC on 22 Apr 2016, some customers using Virtual Machines in Australia East might have experienced failures connecting to their Virtual Machines or experienced unexpected reboots. The majority of customers’ VMs should now be recovered, but we have identified a very limited subset of customers that may be experiencing residual impact. These customers will receive further communications through their Management Portal (https://portal.azure.com). PRELIMINARY ROOT CAUSE: At this stage, we do not have a definitive root cause. However, our engineers have observed a broader internet-impacting issue that correlates with the timeframe of impact. NEXT STEPS: Our engineers continue to work on the underlying root cause.
SUMMARY OF IMPACT: Between 14:00 and 16:32 UTC on 22 Apr 2016, customers with Azure resources in the West US region may have experienced latency or packet drops when accessing their resources due to an underlying network infrastructure issue. We observed up to 15% network traffic loss during the incident. Impacted services included SQL Database and Azure Data Factory. PRELIMINARY ROOT CAUSE: Engineers have identified a faulty network line card, which resulted in packet drops for some flows. MITIGATION: Engineers took the router out of service. This mitigated the issue for impacted customers. NEXT STEPS: Network engineers are investigating why the line card went into an unhealthy state.
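For illustration only: customers who suspect network loss toward a region can get a rough application-level estimate by probing an endpoint they own and counting timeouts. A minimal sketch using the requests library (the URL is a placeholder; this measures HTTP request failures, not true ICMP packet loss):

```python
import requests

def estimate_loss(url: str, attempts: int = 100, timeout: float = 2.0) -> float:
    """Return the fraction of probe requests that failed or timed out."""
    failures = 0
    for _ in range(attempts):
        try:
            requests.head(url, timeout=timeout)
        except requests.RequestException:
            failures += 1
    return failures / attempts

# Placeholder endpoint; use a health-check URL on your own deployment.
loss = estimate_loss("https://example-app.azurewebsites.net/health")
print(f"observed failure rate: {loss:.0%}")
```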