Starting at 01:21 UTC on 6 May 2016, customers using Visual Studio Team Services \ Build & Deployment/Build (XAML) in South Central US may experience longer than usual wait times for builds to start processing. Engineers are currently migrating resources to a healthy scale unit to mitigate the issue. The next update will be provided in 2 hours, or as events warrant. Additional information can be found at http://aka.ms/vstsblog.
Virtual Machine Scale Sets
Web and Mobile
Data and Storage
Managed Cache Service
SQL Data Warehouse
SQL Server Stretch Database
Data Lake Analytics
Power BI Embedded
Internet of Things
Azure IoT Hub
Media and CDN
Live and On-Demand Streaming
Identity and Access Management
Azure Active Directory
Enterprise State Roaming
Azure Active Directory Domain Services
Azure Active Directory B2C
Access Control Service
Visual Studio Team Services
Build and Deployment/Build (XAML)
Visual Studio Application Insights
Azure DevTest Labs
Microsoft Azure Classic Portal
Azure Resource Manager
Microsoft Azure Portal
SUMMARY OF IMPACT: Between 02:34 and 04:20 UTC on 05 May 2016, a subset of customers using Cloud Services in South Central US may have experienced Service Management issues (Create and Manage).
PRELIMINARY ROOT CAUSE: Engineers determined that a recent network configuration change caused the service management failures.
MITIGATION: Engineers rolled back the configuration change to restore system health.
NEXT STEPS: Continue to investigate the underlying root cause and develop steps to prevent future occurrences.
Our investigation of alerts for Data Catalog in Multiple Regions is now complete. Due to the extremely limited number of customers impacted by this issue, we are communicating directly with those experiencing an issue via http://portal.azure.com and http://manage.windowsazure.com.
SUMMARY OF IMPACT: Between 00:10 and 02:00 UTC on 03 May 2016, customers using Visual Studio Team Services in North Central US may have experienced issues when trying to log into their Visual Studio Team Services account.
PRELIMINARY ROOT CAUSE: Engineers identified a caching issue during a planned maintenance activity.
MITIGATION: Engineers recycled web and worker roles to clear the cache.
NEXT STEPS: Engineers will continue to investigate the underlying root cause and develop steps to prevent future occurrences.
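As general context, login paths often sit behind an in-memory cache, and a stale entry can keep serving bad data until it expires or the hosting process restarts. The sketch below is a generic TTL-cache illustration in Python, not the Visual Studio Team Services implementation, showing why recycling roles (or an explicit clear) removes stale entries.

# Minimal sketch of a TTL-bounded in-memory cache. A stale entry persists only
# until its TTL expires or the process holding it is recycled, which is why
# restarting web/worker roles clears this class of problem.
# Generic illustration only; not the service's actual implementation.
import time

class TTLCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # entry is stale; drop it
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def clear(self):
        self._store.clear()  # same effect as recycling the hosting process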
SUMMARY OF IMPACT: Between 23:23 and 23:46 UTC on 02 May 2016, customers using Visual Studio Team Services in West Europe may have experienced issues when trying to log into their Visual Studio Team Services account.
PRELIMINARY ROOT CAUSE: At this stage we do not have a definitive root cause.
MITIGATION: Our systems self-healed and returned to a healthy state.
NEXT STEPS: Engineers will continue to investigate the underlying root cause and develop steps to prevent future occurrences.
Between 19:00 and 21:35 UTC on 28 Apr 2016, a subset of customers using Storage, Virtual Machines, Web Apps, Remote App, and Azure Search in West Europe may have experienced a service interruption due to an underlying Storage issue. We have confirmed that normal service availability has been restored. More information is posted under "Root Cause: Underlying Storage Issue In West Europe Impacting Virtual Machines and Web Apps in the Region" on https://azure.microsoft.com/en-us/status/#history
SUMMARY OF IMPACT: Between 19:00 and 21:35 UTC on 28 Apr 2016, a subset of customers with services deployed in West Europe might have experienced issues accessing their resources in the region. An underlying Storage issue in West Europe resulted in subsequent impact to Virtual Machines, Web Apps, Remote App, and Azure Search in the region. While the majority of customers' VMs have recovered, engineering has identified a very limited subset of customers that may be experiencing residual impact. These customers will receive further communications through their Management Portal (https://portal.azure.com).
PRELIMINARY ROOT CAUSE: Engineers have identified a Storage software error as the preliminary root cause.
MITIGATION: Engineers have deployed a hotfix and have confirmed that the issue is mitigated.
NEXT STEPS: Engineering will continue to scan other Storage scale units to ensure that no other scale units contain the software error. Additionally, they will work to understand the root cause of the issue and will investigate further.
SUMMARY OF IMPACT: Between 17:03 and 21:15 UTC on 27 Apr 2016, customers using Visual Studio Team Services \ Load Testing in South India may have received an "Insecure Connection" error when attempting to access Cloud Load Testing services from their VSTS portal, or were unable to access Load Testing services from Visual Studio.
PRELIMINARY ROOT CAUSE: Engineers identified a missing CNAME record in DNS as the cause of the issue.
MITIGATION: Engineers replaced the missing CNAME record to mitigate the issue.
NEXT STEPS: Continue to investigate the cause of the missing CNAME record and take steps to prevent recurrences. Further information can be found here: http://aka.ms/vsoblog
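For customers who want to rule out a similar DNS problem against their own endpoints, a quick way to confirm that a hostname resolves (and see its canonical name) is sketched below using Python's standard library. The hostname shown is a placeholder, not the affected Load Testing endpoint.

# Minimal sketch: verify that a hostname resolves and inspect its canonical name.
# The hostname below is a placeholder; substitute the endpoint you need to check.
import socket

def check_resolution(hostname):
    try:
        canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
    except socket.gaierror as err:
        # A missing CNAME/A record typically surfaces here as a resolution failure.
        print(f"{hostname}: resolution failed ({err})")
        return False
    print(f"{hostname} -> canonical name {canonical}, addresses {addresses}")
    return True

check_resolution("example.visualstudio.com")  # placeholder hostname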
SUMMARY OF IMPACT: From 11:08 to 18:00 UTC on 27 Apr 2016, a subset of customers using Data Factory in West US may have experienced errors when attempting to browse existing data factories in the Azure management portal (http://portal.azure.com). Customers may have also experienced intermittent delays in slice execution or failures in pipeline creation.
PRELIMINARY ROOT CAUSE: A deployment to backend systems caused Virtual Machines to restart as designed. These restarts were the cause of the aforementioned issues with the Data Factory service.
MITIGATION: Engineers allowed the designed restarts to complete and worked to manually unblock failed services to allow automatic recovery of the system.
NEXT STEPS: Engineering will continue to investigate why these expected restarts caused services to enter an unhealthy state and will work to prevent future recurrences of the issue.
SUMMARY OF IMPACT: Between 02:18 and 06:20 UTC on 23 Apr 2016, customers using Data Lake Store and Data Lake Analytics in East US 2 may have experienced service management operation failures (create, rename, update, and delete). Interactions with the service through the Azure Portal, APIs, PowerShell, and Visual Studio may have also failed.
PRELIMINARY ROOT CAUSE: Engineers identified an underlying microservice failure as the root cause.
MITIGATION: Engineers were able to recover the affected microservice to mitigate the issue.
NEXT STEPS: Engineers will continue to monitor the service and implement steps to prevent future occurrences.
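Transient service management failures like these are often handled client-side by retrying with backoff before escalating. The sketch below is a generic Python illustration; submit_operation is a hypothetical stand-in for whatever API, PowerShell, or SDK call a script would make, not part of the Data Lake services themselves.

# Minimal sketch of retrying a transient service management failure with
# exponential backoff. submit_operation is a hypothetical placeholder for the
# actual API/SDK call; adjust attempts and delays to your own tolerance.
import time

def with_retries(operation, attempts=4, base_delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as err:  # narrow this to the transient errors you expect
            if attempt == attempts:
                raise  # out of retries; surface the failure to the caller
            delay = base_delay * (2 ** (attempt - 1))
            print(f"Attempt {attempt} failed ({err}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Example usage with a hypothetical operation:
# result = with_retries(lambda: submit_operation("create", account="example"))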
SUMMARY OF IMPACT: Between 23:24 UTC on 22 Apr 2016 and 01:00 UTC on 23 Apr 2016, a subset of customers using Azure Active Directory B2C in multiple regions may have been unable to sign up for or sign in to Azure Active Directory B2C applications and may have seen application errors. During the incident, B2C tenant administrators were also unable to administer B2C policies and applications in the Management Portal.
PRELIMINARY ROOT CAUSE: Engineers identified a configuration error that impacted consumer and business applications.
MITIGATION: Engineers issued a configuration change and a deployment to bring systems back to a healthy state and monitored for platform stability.
NEXT STEPS: Investigate the underlying root cause to implement mitigation steps to prevent future occurrences.