Starting at 07:25 UTC on 6 May 2016, customers using Virtual Machines in North Europe may experience issues with Service Management operations (including create, resize, and scale out). Engineers are investigating the root cause. The next update will be provided in 60 minutes.
Services listed in the status table (status indicators were not captured): Virtual Machine Scale Sets, Azure Container Service, Web & Mobile, Data & Storage, SQL Data Sync, Managed Cache Service, SQL Data Warehouse, SQL Server Stretch Database, Data Lake Store, Data Lake Analytics, Power BI Embedded, Internet of Things, Azure IoT Hub, Media & CDN, Live and On-Demand Streaming, Identity & Access Management, Azure Active Directory, Enterprise State Roaming, Azure Active Directory Domain Services, Azure Active Directory B2C, Access Control Service, Visual Studio Team Services, Build & Deployment/Build (XAML), Visual Studio Application Insights, Azure DevTest Labs, Microsoft Azure classic portal, Azure Resource Manager, Microsoft Azure portal.
SUMMARY OF IMPACT: Between 01:21 UTC and 07:54 UTC on 6 May 2016, customers using Visual Studio Team Services \ Build & Deployment/Build (XAML) in South Central US may have experienced longer than usual wait times for builds to start processing. PRELIMINARY ROOT CAUSE: Engineers identified a resourcing configuration issue that delayed some Build jobs from being processed in a timely manner. MITIGATION: Engineers manually changed the configuration to allocate the appropriate resources to the pending Build jobs. NEXT STEPS: Build jobs that were queued for longer than 60 minutes may have been canceled and will need to be resubmitted by customers to be restarted. Engineers will continue to investigate resource allocation configurations to prevent future delays in processing Build jobs.
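For customers who need to resubmit a canceled build, the snippet below is a minimal, illustrative sketch of queueing a build through the VSTS REST API rather than the web portal. It is not part of this notice: the account name, project, personal access token, and definition ID are placeholders, and it assumes the Build 2.0 REST endpoint is available for the affected account.

```python
import base64
import json
import urllib.request

# Placeholder values -- substitute your own account, project,
# personal access token (PAT), and build definition ID.
ACCOUNT = "myaccount"          # {account}.visualstudio.com
PROJECT = "MyProject"
PAT = "personal-access-token"
DEFINITION_ID = 42

# VSTS PATs are sent via Basic auth with an empty user name.
token = base64.b64encode(f":{PAT}".encode()).decode()

url = (f"https://{ACCOUNT}.visualstudio.com/DefaultCollection/"
       f"{PROJECT}/_apis/build/builds?api-version=2.0")
body = json.dumps({"definition": {"id": DEFINITION_ID}}).encode()

req = urllib.request.Request(url, data=body, method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Authorization", f"Basic {token}")

# Queue the build; the response describes the newly queued build.
with urllib.request.urlopen(req) as resp:
    build = json.loads(resp.read())
    print(build["id"], build["status"])
```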
SUMMARY OF IMPACT: Between 02:34 and 04:20 UTC on 05 May 2016, a subset of customers using Cloud Services in South Central US may have experienced Service Management issues (Create and Manage). PRELIMINARY ROOT CAUSE: Engineers determined that a recent network configuration change caused the service management failures. MITIGATION: Engineers rolled back the configuration changes to restore system health. NEXT STEPS: Continue to investigate the underlying root cause and develop steps to prevent future occurrences.
Our investigation of alerts for Data Catalog in Multiple Regions is now complete. Due to the extremely limited number of customers impacted by this issue, we are providing direct communication to those experiencing an issue via http://portal.azure.com and http://manage.windowsazure.com.
Between 00:10 and 02:00 UTC on 03 May 2016, customers using Visual Studio Team Services in North Central US may have experienced issues when trying to log into their Visual Studio Team Services account. PRELIMINARY ROOT CAUSE: Engineers identified a caching issue during a planned maintenance activity. MITIGATION: Engineers recycled web and worker roles to clear the cache. NEXT STEPS: Engineers will continue to investigate the underlying root cause and develop steps to prevent future occurrences.
Between 23:23 and 23:46 UTC on 02 May 2016, customers using Visual Studio Team Services in West Europe may have experienced issues when trying to log into their Visual Studio Team Services account. PRELIMINARY ROOT CAUSE: At this stage we do not have a definitive root cause. MITIGATION: Our systems self-healed and returned to a healthy state. NEXT STEPS: Engineers will continue to investigate the underlying root cause and develop steps to prevent future occurrences.
Between 19:00 and 21:35 UTC on 28 Apr 2016, a subset of customers using Storage, Virtual Machines, Web Apps, Remote App, and Azure Search in West Europe may have experienced a service interruption due to an underlying Storage issue. We have confirmed that normal service availability has been restored. More information is posted under "Root Cause: Underlying Storage Issue In West Europe Impacting Virtual Machines and Web Apps in the Region" on https://azure.microsoft.com/en-us/status/#history
SUMMARY OF IMPACT: Between 19:00 and 21:35 UTC on 28 Apr 2016, a subset of customers with services deployed in West Europe might have experienced issues accessing their resources in the region. An underlying Storage issue in West Europe resulted in subsequent impact to Virtual Machines, Web App, Remote App, and Azure Search in the region. While a majority of customers’ VMs have recovered, engineering has identified a very limited subset of customers that may be experiencing residual impact. These customers will receive further communications through their Management Portal (https://portal.azure.com). PRELIMINARY ROOT CAUSE: Engineers have identified a Storage software error as the preliminary root cause. MITIGATION: Engineers have deployed a hotfix and have confirmed that the issue is mitigated. NEXT STEPS: Engineering will continue to scan other Storage scale units to ensure that no other scale units contain the software error. Additionally, they will work to understand the root cause of the issue and will investigate further.
SUMMARY OF IMPACT: Between 17:03 and 21:15 UTC on 27 Apr 2016, customers using Visual Studio Team Services \ Load Testing in South India may have received an "Insecure Connection" error when attempting to access Cloud Load Testing services from their VSTS portal, or were unable to access Load Testing services from Visual Studio. PRELIMINARY ROOT CAUSE: Engineers identified a missing CNAME record in DNS as the cause of the issue. MITIGATION: Engineers restored the missing CNAME record to mitigate the issue. NEXT STEPS: Continue to investigate the cause of the missing CNAME record and take steps to prevent recurrences. Further information can be found here: http://aka.ms/vsoblog
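As a quick way to verify that an affected endpoint resolves again after a fix like this, the sketch below performs a simple DNS lookup. It is illustrative only: the host name shown is hypothetical, since this status entry does not name the specific DNS record involved.

```python
import socket

# Hypothetical host name -- the actual Cloud Load Testing DNS name
# affected by the incident was not published in the status entry.
host = "clt.example.visualstudio.com"

try:
    # getaddrinfo follows CNAME chains; a failure here is consistent
    # with a missing or broken CNAME record.
    for family, _, _, _, sockaddr in socket.getaddrinfo(host, 443,
                                                        proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])
except socket.gaierror as exc:
    print(f"DNS resolution failed for {host}: {exc}")
```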
SUMMARY OF IMPACT: From 11:08 to 18:00 UTC on 27 Apr 2016, a subset of customers using Data Factory in West US may have experienced errors when attempting to browse existing data factories in their Azure management portal (http://portal.azure.com). Customers may have also experienced intermittent delays in slice execution or failures in pipeline creation. PRELIMINARY ROOT CAUSE: A deployment to backend systems caused Virtual Machines to restart as expected; these restarts were the cause of the aforementioned issues with the Data Factory service. MITIGATION: Engineers allowed the designed restarts to complete and worked to manually unblock failed services to allow automatic recovery of the system. NEXT STEPS: Engineering will continue to investigate why these expected restarts caused services to enter an unhealthy state and will work to prevent future recurrences of the issue.
SUMMARY OF IMPACT: Between 02:18 and 06:20 UTC on 23 Apr 2016, customers using Data Lake Store and Data Lake Analytics in East US 2 may have experienced service management operation failures (create, rename, update, and delete). Interactions with the service through the Azure Portal, APIs, PowerShell, and Visual Studio may have also failed. PRELIMINARY ROOT CAUSE: Engineers identified an underlying microservice failure to be the root cause. MITIGATION: Engineers were able to recover the affected microservice to mitigate the issue. NEXT STEPS: Engineers will continue to monitor the service and implement steps to prevent future occurrences.
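One lightweight way to confirm that service management (control plane) calls are succeeding again is to issue a read-only request against the Azure Resource Manager endpoint, as sketched below. This is an illustrative example only: the subscription ID and bearer token are placeholders, and the api-version shown is an assumption rather than a value taken from this notice.

```python
import json
import urllib.request

# Placeholder values -- a real subscription ID and an AAD bearer token
# for the management endpoint are required.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
TOKEN = "aad-bearer-token"
API_VERSION = "2016-11-01"   # assumed ARM api-version for Data Lake Store

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       f"/providers/Microsoft.DataLakeStore/accounts"
       f"?api-version={API_VERSION}")

req = urllib.request.Request(url)
req.add_header("Authorization", f"Bearer {TOKEN}")

# A successful response listing accounts indicates the control plane is
# reachable; repeated server errors would match the kind of service
# management failures described above.
with urllib.request.urlopen(req) as resp:
    accounts = json.loads(resp.read())
    for acct in accounts.get("value", []):
        print(acct["name"], acct["location"])
```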