Design Principles Behind The Windows Azure Hypervisor
Our next few posts will be discussions on the components of the Windows Azure service. Please add comments on anything you would like to hear more about.
By Hoi Vo
We are frequently asked about the Windows Azure Hypervisor, and whether or not the code will be made available to customers as a product they could run in their own datacenters. We built the Windows Azure Hypervisor around three principles:
- Efficient: push work to hardware as much as possible. Any percentage gain, multiplied across tens of thousands of machines, is very significant for us. Consequently, we bet on new processor features to save CPU cycles for the hosted application.
- Small footprint: any feature not applicable to our specific cloud scenarios is removed. This guarantees that we do not have to update or fix unnecessary code, meaning less churn and fewer required reboots for the host. All critical code paths are also highly optimized for our Windows Azure scenarios.
- Tight integration: the Windows Azure Hypervisor is tightly integrated with the Windows Azure kernel. This is required to achieve the level of scalability and performance we want for our stack.
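The efficiency principle above is easy to make concrete with a back-of-the-envelope calculation. The fleet size and savings figures below are illustrative assumptions, not actual Windows Azure numbers:

```python
# Back-of-the-envelope: what a small per-host CPU saving is worth at fleet
# scale. All concrete numbers here are hypothetical, for illustration only.

def capacity_reclaimed(fleet_size: int, cpu_saving_fraction: float) -> float:
    """Machines' worth of CPU capacity freed by a uniform per-host saving."""
    return fleet_size * cpu_saving_fraction

# Assume a hypothetical fleet of 50,000 hosts and a 1% reduction in
# hypervisor overhead (e.g., from pushing work to hardware).
freed = capacity_reclaimed(50_000, 0.01)
print(f"Equivalent capacity reclaimed: {freed:.0f} machines")  # 500 machines
```

A one-point efficiency gain that would be invisible on a single server is, at this hypothetical scale, the equivalent of adding hundreds of machines to the fleet.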
Much of the Windows Azure Hypervisor's development takes advantage of our homogeneous data center design and would only work in our environment. Some of the innovations, however, will be useful to customers with different data center designs and will be incorporated into future releases of Hyper-V (e.g., Second-Level Address Translation will be available in Hyper-V v2.0).
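Second-Level Address Translation is a good example of a processor feature worth betting on: the MMU itself walks both the guest's page tables and the hypervisor's nested tables, instead of the hypervisor maintaining shadow page tables in software. The trade-off is that a TLB miss becomes a two-dimensional walk; a simplified cost model (assuming 4-level tables on both sides, with no TLB or paging-structure caches) can be sketched as:

```python
# Simplified model of a two-dimensional (nested) page-table walk under SLAT.
# Each of the guest's page-table pointers is a guest-physical address that
# must itself be translated through the host's nested page tables.

def nested_walk_references(guest_levels: int = 4, host_levels: int = 4) -> int:
    """Worst-case memory references for one guest-virtual -> host-physical
    translation with two-dimensional paging: (g + 1) * (h + 1) - 1,
    since each of the g guest levels (plus the final data access) needs a
    full h-level nested walk plus one reference of its own."""
    return (guest_levels + 1) * (host_levels + 1) - 1

print(nested_walk_references())  # 24 references for 4-level/4-level tables
```

Hardware handles those references without any hypervisor exits, which is exactly the "push work to hardware" principle: the software alternative (shadow paging) traps into the hypervisor on guest page-table updates, burning CPU cycles that SLAT gives back to the hosted application.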