Today, we’re announcing support for two new compute-intensive sizes: A10 and A11. These new instances are available immediately for both Virtual Machines and Cloud Services. You can deploy them through the Management Portal, PowerShell, and the Management APIs.
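
For example, here is a minimal PowerShell sketch of creating an A10 virtual machine with the Azure Service Management cmdlets. The service name, VM name, image name, and credentials are placeholders; substitute values from your own subscription.

    # Hedged sketch: create a single A10 VM in a new cloud service using the
    # Azure Service Management PowerShell cmdlets. The image name, service name,
    # and credentials below are placeholders, not values from this post.
    $imageName = "<OS image name from Get-AzureVMImage>"

    New-AzureVMConfig -Name "hpc-node-01" -InstanceSize "A10" -ImageName $imageName |
        Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "<strong password>" |
        New-AzureVM -ServiceName "my-hpc-service" -Location "West US"

An A11 instance can be created the same way by passing "A11" to the InstanceSize parameter.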

Our first set of compute-intensive sizes, A8 and A9, introduced remote direct memory access (RDMA) technology for maximum efficiency of parallel MPI applications. Many customers have told us that the high performance processing capabilities of the A8 and A9 instances are important for the workloads they run in the cloud, but also that those workloads cannot always take advantage of the RDMA backend network that A8 and A9 provide.

For that reason, we have created a compute-intensive alternative that does not include the RDMA backend network and is offered at a lower price. This alternative lets you select the size and network capabilities that best match the workload you need to run on Azure.

Same performance, better price

The A10 and A11 compute-intensive instances have the same performance optimizations and specifications as the A8 and A9, respectively:

Instance size   Cores   CPU type                         RAM      RAM type
A10             8       Intel® Xeon® E5-2670 @ 2.6 GHz   56 GB    DDR3-1600 MHz
A11             16      Intel® Xeon® E5-2670 @ 2.6 GHz   112 GB   DDR3-1600 MHz

As mentioned earlier in this post, the only technical difference between the original compute-intensive instances and these new instances is that A8 and A9 include a second network adapter connected to an RDMA backend network. This backend network enables the low-latency, high-throughput communication between instances within a single cloud service that some high performance computing (HPC) workloads require.

The A10 and A11 instances are being offered at a lower price than the A8 and A9 instances. For details about pricing, see Cloud Services Pricing Details and Virtual Machines Pricing Details.

Find the best size for your workload

The backend RDMA network available to the A8 and A9 instances makes them suitable for HPC workloads that run parallel Message Passing Interface (MPI) applications, which require low-latency, high-throughput communication between instances. Workloads that would perform best on A8 and A9 instances include computational fluid dynamics, crash simulation, reservoir simulation, and weather forecasting, among others.

The A10 and A11 instances are designed for HPC workloads that do not require tight interaction between instances, also known as parametric or embarrassingly parallel workloads. Examples include financial risk analysis, image and movie rendering, and genome research, among others. A10 and A11 instances are also well suited for running single-node engineering analyses (for example, on the 16 cores available on a single A11 instance) or for running parametric sweeps across an engineering space.

Regional availability

The A10 and A11 instances are currently available in all regions where A8 and A9 are already available:

United States

  • East US
  • West US
  • South Central US
  • North Central US

Europe

  • North Europe
  • West Europe

Asia

  • Japan East

Learn more

For more information about the different types of compute-intensive instances that are available to you, see About the A8, A9, A10, and A11 Compute Intensive Instances. You can also reach out to us directly. We are eager to hear about your experience doing compute-intensive work on Azure, so send us an email with any feedback or suggestions, or add your comments to this post.
