
Design Principles Behind The Windows Azure Hypervisor

Our next few posts will be discussions on the components of the Windows Azure service.  Please add comments on anything you would like to hear more about.

By Hoi Vo
Director

We are frequently asked about the Windows Azure Hypervisor, and whether the code will be made available to customers as a product they could run in their own datacenters.  We built the Windows Azure Hypervisor around three principles:

  1. Efficient: push as much work to hardware as possible.  Any percentage gain, multiplied across tens of thousands of machines, is very significant for us.  Consequently, we can bet on new processor features to save CPU cycles for hosted applications.
  2. Small footprint: any feature not applicable to our specific cloud scenarios is removed.  This guarantees that we do not have to worry about updating or fixing unnecessary code, meaning less churn and fewer required reboots for the host.  All critical code paths are also highly optimized for our Windows Azure scenarios.
  3. Tight integration: the Windows Azure Hypervisor is tightly integrated with and optimized for the Windows Azure kernel.  This is required to achieve the level of scalability and performance we want for our stack.

Much of the development for the Windows Azure Hypervisor would only work in our environment, because it takes advantage of our specific, homogeneous data center design. Some of the innovations would be useful to customers with different data center designs and will be incorporated into future releases of Hyper-V (e.g. Second-Level Address Translation will be available in Hyper-V v2.0).
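
Second-Level Address Translation (Intel's EPT and AMD's nested paging) lets the processor translate guest-physical addresses to machine addresses in hardware, so a hypervisor no longer has to maintain shadow page tables in software. As a purely illustrative sketch, and not Windows Azure or Hyper-V code, the C program below shows how a user-mode tool might probe the CPUID instruction for these hardware virtualization features. Confirming Intel EPT specifically requires reading a VMX capability MSR from kernel mode, which is outside the scope of this example.

```c
/*
 * Hypothetical sketch: probe CPUID for hardware virtualization features.
 * AMD reports nested paging (its SLAT implementation) via CPUID leaf
 * 0x8000000A; Intel reports VMX via leaf 1, but EPT support must be
 * confirmed from a VMX capability MSR in ring 0 (not done here).
 */
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1: basic features.  ECX bit 5 = Intel VMX. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("Intel VMX:         %s\n", (ecx & (1u << 5)) ? "yes" : "no");

    /* Leaf 0x80000001: extended features.  ECX bit 2 = AMD SVM. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        printf("AMD SVM:           %s\n", (ecx & (1u << 2)) ? "yes" : "no");

    /* Leaf 0x8000000A: SVM features.  EDX bit 0 = nested paging (AMD SLAT). */
    if (__get_cpuid(0x8000000A, &eax, &ebx, &ecx, &edx))
        printf("AMD nested paging: %s\n", (edx & 1u) ? "yes" : "no");

    return 0;
}
```

On SLAT-capable hardware, guest page-table updates no longer force exits into the hypervisor the way shadow paging does, which is one concrete instance of the "push work to hardware" principle above.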