|A0 - A7|
|Service infrastructure|
|Virtual Machine Scale Sets|
|Managed Cache Service|
|Web + Mobile|
|Live and on-demand streaming|
|SQL Data Sync|
|SQL Data Warehouse|
|SQL Server Stretch Database|
|Intelligence + Analytics|
|Power BI Embedded|
|Data Lake Store|
|Data Lake Analytics|
|Academic Knowledge API|
|Bing Autosuggest API|
|Language Understanding Intelligent Service|
|Bing Speech API|
|Bing Spell Check API|
|Internet of Things|
|Azure IoT Hub|
|Security + Identity|
|Azure Active Directory|
|Enterprise State Roaming|
|Azure Active Directory Domain Services|
|Azure Active Directory B2C|
|Access Control Service|
|Visual Studio Team Services|
|Build & Deployment/Build (XAML)|
|Visual Studio Application Insights|
|Azure DevTest Labs|
|Monitoring + Management|
|Microsoft Azure Portal|
|Microsoft Azure Classic Portal|
|Monitoring and alerts|
|Azure Resource Manager|
The Australia regions are available to customers with a business presence in Australia and New Zealand.
The India regions are available to volume licensing customers and partners with a local enrollment in India. The India regions will be opened for direct online Azure subscriptions in 2016.
Summary of impact: Between 16:49 UTC on 26 Aug and 01:30 UTC on 27 Aug, 2016, customers using StorSimple in Multiple Regions may have experienced latency when attempting to load their usage monitoring graphs. Customers registering new devices into StorSimple may have also experienced difficulty viewing usage monitoring graphs for these devices.
Preliminary root cause: Engineers identified an infrastructure resource that stopped serving requests.
Mitigation: Engineers replaced the unhealthy resource to allow new requests to process.
Next action: Engineers will continue to investigate to determine root cause in order to prevent future occurrences.
Summary of impact: Between 07:00 and 23:27 UTC on 25 Aug 2016, a subset of customers authenticating Microsoft Accounts using the PowerShell API may have seen intermittent failures or timeouts. In addition to RemoteApp, other Azure services that use this authentication workflow were potentially impacted.
Mitigation: Engineers deployed a hotfix and validated that logins were able to succeed.
Next steps: Investigate the cause of the software error and create mitigation steps to prevent future occurrences.
Summary of impact: Between 20:32 UTC on 22 Aug 2016 and 00:18 UTC on 23 Aug 2016, a subset of customers using App Service \ Web Apps in North Europe, South Central US and West US may have experienced timeouts or HTTP 503 errors when trying to access their Web App deployments.
Preliminary root cause: Engineers have identified a recent maintenance activity as the preliminary root cause.
Mitigation: Engineers manually performed additional steps to reduce the impact time and return assets to a fully functioning state.
Next steps: Engineers will continue to investigate the root cause to prevent such issues in future maintenance operations.
Summary of impact: Between 16:40 and 19:50 UTC on 22 Aug 2016, a subset of customers may have experienced 500 errors when attempting to access their Storage resources hosted in East US. An underlying Storage issue in East US resulted in subsequent impact to Azure Resource Manager, the Azure Portal, Key Vault, Virtual Machines, and DocumentDB. Customers might have experienced issues when attempting to access their resources in the Azure Portal in multiple regions. A subset of customers might have encountered internal server errors when attempting to perform actions with Azure Resource Manager within the Management Portal. A subset of customers might have experienced errors when attempting to access their Virtual Machine resources hosted in the region. Customers using DocumentDB in multiple regions might have been unable to execute management operations from the portal, PowerShell, or programmatically. Customers using Key Vault in the region might have been unable to retrieve their keys. While the majority of customers' VMs have recovered, engineering has identified a very limited subset of customers that may be experiencing residual impact. These customers will receive further communications through their Management Portal (https://portal.azure.com).
Preliminary root cause: Engineers have identified a recent deployment task as the preliminary root cause.
Mitigation: Engineers have rolled back the recent deployment task which mitigated the issue.
Next steps: Engineers will continue to investigate the underlying cause and develop steps to prevent future occurrences.
Starting at approximately 01:45 UTC on 18 Aug 2016, customers using SQL Database in Multiple Regions may experience issues performing service management operations. Also, customers using HDInsight in Multiple Regions may see service management operation failures (create). Retrieving information about SQL servers and databases through the Azure Management Portal may result in an error or timeout. Server and Database create, drop, rename and change edition or performance tier operations may also not complete successfully. Availability (connecting to and using existing databases) is not impacted. Engineers have identified a potential underlying root cause and have begun implementing mitigation steps. Some customers may begin to see success as engineers are seeing mitigation in several regions. The next update will be provided in 60 minutes or as events warrant.
Summary of impact: Between 03:11 and 04:20 UTC on 18 Aug 2016, customers using Visual Studio Team Services in Australia East may have experienced intermittent failures when accessing their accounts.
Preliminary root cause: Root cause is still under investigation at this time.
Mitigation: Engineers redeployed database connections which mitigated the issue.
Next steps: Continue to investigate the underlying root cause of this issue and develop a solution to prevent recurrences. More information at http://aka.ms/VSTSBlog.
Summary of impact: Between 21:00 UTC on 17 Aug 2016 and 1:08 UTC on 18 Aug 2016, customers using HDInsight in multiple regions may have experienced issues with Linux cluster creation. Existing clusters were not affected.
Preliminary root cause: Root cause is still under investigation at this time.
Mitigation: Engineers scaled out backend resources which mitigated the issue.
Next steps: Engineers will investigate the underlying cause and create mitigation steps to prevent future occurrences.
Summary of impact: Between 12:45 and 21:00 UTC on 17 Aug 2016, customers using Log Analytics in West Europe may have experienced search query and alert failures, and the inability to load search data. Existing and ingested data was not impacted.
Preliminary root cause: Engineers identified a software bug on supporting backend nodes that impacted API calls.
Mitigation: Engineers issued a deployment to correct the software bug and monitored to ensure stability.
Next steps: Investigate the underlying cause and create mitigation steps to prevent future occurrences.
Summary of impact: Between 06:47 and 09:58 UTC on 15 Aug 2016, a subset of customers using Visual Studio Team Services \ Build & Deployment/Build (XAML) in North Central US, South Central US, West Europe and North Europe may have experienced longer than usual build queue times. Customers were advised not to cancel and resubmit builds, as cancelling a job and resubmitting it would push the job to the back of the queue (a brief illustration follows this entry). More information at http://aka.ms/VSTSBlog.
Preliminary root cause: At this stage we do not have a definitive root cause.
Mitigation: This issue was self-healed by the Azure platform.
Next steps: Engineers will continue to investigate the underlying root cause of this issue and develop a solution to prevent recurrences.
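To make the advisory above concrete: a build queue is processed first-in, first-out, so a cancelled and resubmitted job loses its original position and lands behind jobs that were queued later. The toy model below (plain Python, with illustrative build names; it is not the actual VSTS scheduler) shows the effect.

```python
from collections import deque

# Toy first-in, first-out build queue; the build names are illustrative only.
queue = deque(["build-A", "build-B", "build-C"])

# Cancelling "build-A" and resubmitting it places it behind builds
# that were queued after it.
queue.remove("build-A")
queue.append("build-A")

print(list(queue))  # ['build-B', 'build-C', 'build-A']
```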
Summary of impact: Between 06:09 and 09:10 UTC on 15 Aug 2016, customers using the Microsoft Azure portal or APIs in multiple regions may have experienced timeouts or failures when executing Service Management requests or operations. Existing deployed resources were not affected by this issue. Engineers identified high CPU utilization on a subset of Azure Front End nodes, which was causing some requests to time out or fail. Engineers distributed traffic across additional resources, and manually injected QoS rules were used to mitigate the issue.
Customer impact: Customers would have experienced increased latencies and timeouts for Service Management requests for the services impacted above. Existing deployed resources were not affected by this issue.
Root cause and mitigation: Due to a sharp increase in Application Programming Interface (API) traffic from another internal Azure service, the Azure Front End services were running at very high CPU and were intermittently unable to process incoming API requests. The robust Quality of Service (QoS) mechanisms protecting this API automatically identified and mitigated a very high percentage of these requests, but because of the quantity and specific pattern of requests that were received, even the small percentage that were passed to the Front Ends were enough to trigger this performance issue. Requests were distributed across additional resources to handle the load, and steps were taken to manually inject rules to protect QoS until the origin of the requests could be remediated.
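As a rough illustration of the kind of QoS throttling described above (an illustration only, not Azure's actual implementation), a token-bucket limiter rejects requests once a caller exceeds its allotted rate, which shields the front ends from a sharp traffic spike. The sketch below is a minimal Python version; the rate and capacity values are hypothetical.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each request consumes a token,
    tokens refill at a fixed rate, and an empty bucket means the request
    is rejected (throttled) instead of reaching the backend."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical usage: allow at most 100 requests per second from one caller.
limiter = TokenBucket(rate_per_sec=100, capacity=100)
# if not limiter.allow():
#     reject the request (e.g. return HTTP 429) rather than pass it on
```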
Next steps: We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future, and in this case it includes (but is not limited to): (1) Remediate origin of API request volume – complete, (2) Improve automated QoS design to prevent this and similar scenarios from recurring.