SUMMARY OF IMPACT: Between 16:42 and 18:14 UTC on 25 May 2016, customers using Visual Studio Team Services in Multiple Regions may have experienced issues when signing into the service. Some customers may have experienced slow page loads or Server 500 errors. PRELIMINARY ROOT CAUSE: Engineers identified a backend process that had gradually increased load since a service upgrade deployed the previous night. However, at this stage we do not have a definitive root cause for the load increase. MITIGATION: Engineers applied mitigation steps to reduce the load and confirmed that all systems are back to normal. NEXT STEPS: Continue to investigate the underlying root cause of this issue and develop a solution to prevent recurrences.
SUMMARY OF IMPACT: Between 20:40 and 23:40 UTC on 24 May, 2016, a subset of customers using Storage in Japan East may have experienced intermittent latency or failures while accessing their storage resources. Additionally, some customers may have experienced failures or unexpected reboots of their Virtual Machines or Cloud Services. PRELIMINARY ROOT CAUSE: Engineers identified an underlying hardware issue that caused a small number of nodes to enter an unhealthy state. MITIGATION: Engineers manually recovered the nodes to restore system health. NEXT STEPS: Engineers will continue to investigate the underlying root cause and develop steps to prevent future occurrences.
SUMMARY OF IMPACT: Between 18:16 and 21:20 UTC on 23 May 2016, a subset of customers using Visual Studio Team Services in South Central US may have experienced performance issues or 500/503 error messages when attempting to access their accounts. PRELIMINARY ROOT CAUSE: A backend database for a single scale unit in the region reached its request limitations, causing the observed errors for some customers in the region. MITIGATION: The backend database was scaled up to increase request limitations for the affected scale unit. NEXT STEPS: Investigate the underlying root cause for the scale unit reaching request limitations on the database and develop a solution to prevent recurrences.
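The 500/503 responses described in incidents like this one are transient server-side failures, and clients typically shield themselves from such windows with retry-and-backoff. A minimal sketch of that pattern follows; the `TransientError` type and `with_retry` helper are illustrative only and not part of any Azure SDK:

```python
import random
import time

# Status codes reported as transient during the incident.
TRANSIENT_STATUSES = {500, 503}

class TransientError(Exception):
    """Signals a retryable server-side failure (e.g. HTTP 500/503)."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def with_retry(call, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Run `call`, retrying transient failures with exponential backoff.

    Delays grow as base_delay * 2**attempt, plus jitter so that many
    clients retrying at once do not hammer the service in lockstep.
    """
    for attempt in range(attempts):
        try:
            return call()
        except TransientError as err:
            if err.status not in TRANSIENT_STATUSES or attempt == attempts - 1:
                raise  # non-retryable, or out of attempts
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
```

In practice the `call` argument would wrap the actual HTTP request and translate 5xx responses into `TransientError`; the injectable `sleep` parameter keeps the helper testable.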
SUMMARY OF IMPACT: Between 20:30 UTC on 19 May, 2016 and 03:05 UTC on 20 May, 2016, customers using Power BI Embedded in South Central US may have found that their custom visuals from the Visuals Gallery did not load. PRELIMINARY ROOT CAUSE: We identified a problematic CDN profile, but the underlying root cause of why this profile entered a degraded state is still under investigation. MITIGATION: We deployed a new CDN profile. NEXT STEPS: Engineers will continue to investigate the root cause.
SUMMARY OF IMPACT: Between 09:00 and 14:30 UTC on 18 May, 2016, customers using Visual Studio Team Services \ Build & Deployment/Build (XAML) in West Europe experienced longer than usual build queue times. PRELIMINARY ROOT CAUSE: At this stage we do not have a definitive root cause. MITIGATION: The issue was self-healed by the Azure platform. NEXT STEPS: Continue to investigate the underlying root cause of this issue and develop a solution to prevent recurrences.
SUMMARY OF IMPACT: Between 18:32 and 19:37 UTC on 11 May 2016, customers in the Japan West region may have experienced intermittent connectivity issues when attempting to access Azure resources in the region due to an underlying network infrastructure issue. This networking issue also affected other Azure services in the region, including SQL Database, Azure Stream Analytics, HDInsight, Virtual Machines, Storage, and App Service. PRELIMINARY ROOT CAUSE: The initial investigation points to a faulty network core router. The root cause is still under investigation. MITIGATION: The issue was mitigated when engineers performed a reload of the impacted router. Connectivity recovered immediately, and impacted services recovered shortly thereafter. NEXT STEPS: Azure network engineers are performing a full root cause analysis of this incident. The investigation is focused on identifying the defect in the router. Concurrently, the team is investigating why automated recovery did not prevent customer impact.
SUMMARY OF IMPACT: Between 18:15 UTC and 19:27 UTC on 11 May 2016, Visual Studio Team Services customers were unable to access their accounts. While the issue was intermittent, the level of impact was persistent, and many customers that attempted retries experienced repeat failures. PRELIMINARY ROOT CAUSE: Engineers identified a backend process which overloaded internal Visual Studio Team Services infrastructure and resulted in the observed impact. MITIGATION: The engineering team was able to disable the offending service, which resolved the issue. NEXT STEPS: Engineers will examine the backend process's logs and work to understand the issue further. Additional information is available on the Visual Studio Team Services Blog here: http://aka.ms/vstsblog
SUMMARY OF IMPACT: Between approximately 17:00 and 18:40 UTC on 11 May, 2016, some customers with Web App deployments in South Central US experienced latency or timeouts on their Web Apps. PRELIMINARY ROOT CAUSE: At this time, we do not have preliminary root cause details to share. MITIGATION: Automatic Service Healing helped to resolve the latency issues for the impacted Web App deployments in the region. NEXT STEPS: The Web App engineering team will review this incident to better understand root cause and determine a method for preventing this scenario in the future.
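"Automatic Service Healing" refers to the platform recycling worker instances that fail health checks. The internal mechanism is not public, but the general idea can be sketched as a consecutive-failure probe loop; every name below is illustrative, not an Azure API:

```python
def monitor(check_health, recycle, probes, threshold=3):
    """Run `probes` health checks against a worker; call `recycle` after
    `threshold` consecutive failures, then reset the failure counter.

    `check_health` returns True when the worker responds normally;
    `recycle` restarts (or replaces) the unhealthy worker instance.
    """
    failures = 0
    for _ in range(probes):
        if check_health():
            failures = 0  # healthy probe resets the streak
        else:
            failures += 1
            if failures >= threshold:
                recycle()
                failures = 0
```

Requiring several consecutive failures before recycling is a deliberate choice: it prevents a single slow response, such as the transient latency in this incident, from triggering an unnecessary restart.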
SUMMARY OF IMPACT: Between 09:54 UTC and 13:00 UTC on 11 May 2016, customers using Power BI Embedded in South Central US may have experienced issues viewing their reports. Customers may also have received the error message: "This content is not available". PRELIMINARY ROOT CAUSE: Engineers identified that a recent infrastructure deployment was the underlying cause. MITIGATION: Power BI engineers reverted the change that introduced the issue. NEXT STEPS: Engineers will continue to investigate to establish the root cause and prevent future occurrences.
SUMMARY OF IMPACT: Between 21:40 UTC on 06 May 2016 and 02:54 UTC on 07 May 2016, a subset of customers using Virtual Machines in North Europe may have encountered intermittent errors when attempting to perform service management functions using the Compute Resource Provider. PRELIMINARY ROOT CAUSE: This issue was a side effect of an earlier Virtual Machine and Cloud Service incident within the region. MITIGATION: The service partially self-healed, and engineers were able to manually recover the remaining resources. NEXT STEPS: Engineers will continue to investigate the underlying root cause and develop steps to prevent future occurrences.