SUMMARY OF IMPACT: Between 13:50 and 22:00 UTC on 27 Nov 2015, a subset of customers using Visual Studio Team Services \ Build in East US 2 may have experienced long build queue times or occasional job failures. Queued jobs would eventually complete and did not need to be resubmitted. PRELIMINARY ROOT CAUSE: Virtual Machine re-imaging was experiencing higher than normal latency; the exact root cause of the latency is under investigation. Additional information can be found at the VSO Blog: http://aka.ms/vsoblog. MITIGATION: Additional resources have been added to the Virtual Machine pool to reduce latency. NEXT STEPS: Continue to investigate the root cause of the latency and prepare a fix to prevent it from impacting customers again.
SUMMARY OF IMPACT: Between approximately 10:10 and 18:54 UTC on 26 Nov 2015, a subset of customers using Visual Studio Team Services \ Build in East US 2 may have experienced long build queue times or occasional job failures. Queued jobs would have eventually completed and did not need to be resubmitted. PRELIMINARY ROOT CAUSE: An Azure IaaS reimage took longer than expected due to a load issue. MITIGATION: Our systems self-healed and have returned to a healthy state. NEXT STEPS: Review the configuration process to prevent recurrences of this scenario.
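Both of the build incidents above note that queued jobs eventually complete and need not be resubmitted. For customers who would rather watch a stuck build than requeue it, the following is a minimal Python sketch against what was then the VSTS Build REST API (api-version 2.0); the account name, project, build ID, and personal access token are hypothetical placeholders, not values from the incident report.

```python
import time

import requests

# Hypothetical placeholders -- substitute your own VSTS account,
# project, build ID, and personal access token (PAT).
ACCOUNT = "fabrikam"
PROJECT = "MyProject"
BUILD_ID = 42
PAT = "..."

url = ("https://{0}.visualstudio.com/DefaultCollection/{1}"
       "/_apis/build/builds/{2}?api-version=2.0").format(ACCOUNT, PROJECT, BUILD_ID)

# Poll until the build leaves the queue and finishes, instead of
# cancelling and resubmitting it. PATs authenticate over HTTP basic
# auth with an empty user name.
while True:
    resp = requests.get(url, auth=("", PAT))
    resp.raise_for_status()
    build = resp.json()
    if build.get("status") == "completed":
        print("Build finished with result:", build.get("result"))
        break
    print("Build status:", build.get("status"), "- waiting...")
    time.sleep(60)  # queue delays ran to minutes, so poll slowly
```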
SUMMARY OF IMPACT: Between 00:45 and 07:08 UTC on 26 Nov 2015, a subset of customers using App Service \ Web App in West US may have experienced errors or timeouts. PRELIMINARY ROOT CAUSE: A Storage issue, which has since been mitigated, was the original trigger for this issue. MITIGATION: Web App self-healed after the storage issue was fully resolved. NEXT STEPS: Engineers will continue to monitor to ensure all services remain healthy.
SUMMARY OF IMPACT: Between 00:45 and 04:15 UTC on 26 Nov 2015, a subset of customers using Storage in West US experienced errors or timeouts when attempting to access services. PRELIMINARY ROOT CAUSE: Unexpected growth in metadata caused a single Storage Scale Unit in West US to become temporarily unavailable; as a result, customers with services hosted on that scale unit were unable to access their resources. MITIGATION: Engineers deployed a hotfix to adjust metadata capacity, which allowed the Scale Unit to come back online. NEXT STEPS: Investigate the underlying root cause of this issue and develop a solution to prevent recurrences; additional monitoring is now also in place.
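Client-side retries are the usual way to ride out a transient storage outage like this one. The sketch below shows a generic retry-with-exponential-backoff wrapper in Python; the endpoint, timings, and status-code test are illustrative assumptions, not an official Azure Storage SDK retry policy.

```python
import random
import time

import requests

def get_with_backoff(url, max_attempts=5):
    """Fetch a URL, retrying transient failures with exponential backoff.

    A generic sketch of the client-side retry pattern; thresholds and
    delays here are assumptions chosen for illustration.
    """
    for attempt in range(max_attempts):
        try:
            resp = requests.get(url, timeout=30)
            if resp.status_code < 500:  # 5xx responses were transient here
                return resp
        except requests.exceptions.RequestException:
            pass  # timeouts and connection errors are also worth retrying
        time.sleep((2 ** attempt) + random.random())  # 1s, 2s, 4s... plus jitter
    raise RuntimeError("still failing after {0} attempts".format(max_attempts))

# Hypothetical endpoint -- substitute a real storage URL:
# resp = get_with_backoff("https://myaccount.blob.core.windows.net/container/blob")
```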
SUMMARY OF IMPACT: Between 00:45 and approximately 05:30 UTC on 26 Nov 2015, a subset of customers using Key Vault, Logic App, Stream Analytics, Web App, Data Factory, Application Insights, API Management, HD Insight, Managed Cache, Mobile Services, Virtual Machines, and RemoteApp encountered errors or timeouts when attempting to access services. In addition, customers may have been unable to create or view support tickets through http://portal.azure.com. PRELIMINARY ROOT CAUSE: A Storage issue was identified as the root cause. Additional information for that issue can be found on the Azure Status history page. MITIGATION: Mitigating the underlying Storage issue has restored service. NEXT STEPS: Continue to investigate the root cause of the Storage issue and look for opportunities to prevent a recurrence. Any customers with Virtual Machines impacted by this issue that have not recovered will receive a notification within the Management Portal with instructions on how to restore health. If you were unable to create a support ticket due to this interruption, please retry your request through http://aka.ms/azsup.
Engineers received monitoring alerts for Redis Cache in the West US sub-region. We have concluded our investigation of the alerts and confirmed that all services are healthy and that no service incident occurred.
SUMMARY OF IMPACT: Between 12:58 and 16:50 UTC on 25 Nov 2015, a subset of customers using Virtual Machines in West Europe may have experienced issues starting VMs located in this region. Impact was limited to DS-Series Virtual Machines only; existing VMs were not impacted. PRELIMINARY ROOT CAUSE: A misconfiguration in resource allocation was causing DS-Series Virtual Machines to fail to launch. MITIGATION: Engineers reconfigured the launch allocation, which mitigated the issue. NEXT STEPS: Engineers continue to investigate to establish the root cause.
SUMMARY OF IMPACT: From 12:05 to 15:15 UTC on 25 Nov 2015, a subset of customers using Visual Studio Team Services \ Build in East US 2 may have experienced longer than usual build queue times; in addition, a subset of these builds would have failed, and customers would have had to resubmit their builds in order for them to complete. PRELIMINARY ROOT CAUSE: A Workflow Manager machine went into a hung state, which resulted in some, but not all, builds being delayed or cancelled. MITIGATION: Engineering rebooted the Workflow Manager, and it came back into a responsive state. NEXT STEPS: Engineering will examine the underlying cause of why the Workflow Manager went into an unresponsive state and develop a solution to prevent recurrences. Further information can be found at: http://aka.ms/vsoblog.
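For builds that did fail and had to be resubmitted, requeueing can be scripted rather than done by hand. Below is a short Python sketch that queues a new build through what was then the VSTS Build REST API (api-version 2.0); the account, project, definition ID, and personal access token are hypothetical placeholders, not values from the incident report.

```python
import requests

# Hypothetical placeholders -- substitute your own VSTS account,
# project, build definition ID, and personal access token (PAT).
ACCOUNT = "fabrikam"
PROJECT = "MyProject"
DEFINITION_ID = 7
PAT = "..."

url = ("https://{0}.visualstudio.com/DefaultCollection/{1}"
       "/_apis/build/builds?api-version=2.0").format(ACCOUNT, PROJECT)

# Queue a fresh build of the failed definition; PATs authenticate over
# HTTP basic auth with an empty user name.
resp = requests.post(url, auth=("", PAT),
                     json={"definition": {"id": DEFINITION_ID}})
resp.raise_for_status()
print("Requeued build", resp.json().get("id"))
```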
SUMMARY OF IMPACT: From 11:05 to 14:00 UTC on 25 Nov 2015, a subset of customers using Visual Studio Team Services \ Build in West Europe may have experienced longer than usual build queue times. Cancelling and resubmitting jobs would have pushed them to the back of the queue. PRELIMINARY ROOT CAUSE: A recent change in the Azure infrastructure for Visual Studio Team Services \ Build resulted in some clusters in the region experiencing delays when processing some builds. MITIGATION: Azure engineers manually changed a configuration setting for the affected clusters, which mitigated the delays. NEXT STEPS: Engineering will examine the remaining Visual Studio clusters to ensure that the correct configuration settings are applied to prevent recurrences. Further information can be found at: http://aka.ms/vsoblog.
SUMMARY OF IMPACT: Between 20:40 and 21:00 UTC on 24 Nov 2015, a subset of customers may have experienced an error when attempting to log in to the Management Portal through https://portal.azure.com. PRELIMINARY ROOT CAUSE: At this stage we do not have a definitive root cause. MITIGATION: The issue self-healed. NEXT STEPS: Investigate the underlying root cause of this issue and develop a solution to prevent recurrences.