More GPUs, more power, more intelligence

Last year we introduced our first GPU offering powered by NVIDIA’s Tesla-based GPUs, and we have seen an amazing customer response. With the Azure NC-series, you can run CUDA workloads on up to four Tesla K80 GPUs in a single virtual machine. Additionally, unlike any other cloud provider, the NC-series offers RDMA and InfiniBand connectivity for extremely low-latency, high-throughput, scale-out workloads. We want to enable your workloads to scale up and to scale out.
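As a quick way to verify what a given size exposes, here is a minimal sketch (assuming the CUDA toolkit and NVIDIA driver are already installed in the VM) that enumerates the visible GPUs with the standard CUDA runtime API and prints their memory and compute capability. Compile it with nvcc and run it inside the VM.

```cuda
// Minimal sketch: enumerate the GPUs visible inside the VM and print basic
// properties. Assumes the CUDA toolkit and driver are installed on the VM.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("GPUs visible in this VM: %d\n", deviceCount);

    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // totalGlobalMem is reported in bytes; convert to GiB for readability.
        std::printf("  GPU %d: %s, %.1f GiB memory, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.major, prop.minor);
    }
    return 0;
}
```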

Given these GPU powerhouses, some of the fastest-growing workloads we have seen on Azure are AI and Deep Learning. This includes image recognition, speech training, natural language processing, and even pedestrian detection for autonomous vehicles. Building on these possibilities, I am excited to announce that we will be expanding our GPU-based offerings on Azure with the new ND-series. This new series, powered by NVIDIA Tesla P40 GPUs based on the new Pascal architecture, is excellent for training and inference. These instances provide more than 2x the FP32 (single-precision floating point) performance of the previous generation for AI workloads using CNTK, TensorFlow, Caffe, and other frameworks. The ND-series also offers a much larger GPU memory size (24 GB), enabling customers to fit much larger neural net models. Finally, like our NC-series, the ND-series will offer RDMA and InfiniBand connectivity so you can run large-scale training jobs spanning hundreds of GPUs.

Here is a table describing these new sizes:

Size      CPUs   GPU      Memory   Networking
ND6s      6      1 P40    112 GB   Azure Network
ND12s     12     2 P40    224 GB   Azure Network
ND24s     24     4 P40    448 GB   Azure Network
ND24rs    24     4 P40    448 GB   InfiniBand
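To give a concrete sense of the single-precision work the FP32 numbers above refer to, here is a minimal SAXPY sketch in CUDA. It is purely illustrative, not a framework benchmark, and the problem size is an arbitrary assumption.

```cuda
// Minimal FP32 (single-precision) SAXPY sketch: y = a*x + y.
// Illustrative only; the problem size below is an arbitrary assumption.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;                       // ~16M elements (assumption)
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    cudaMalloc((void**)&dx, n * sizeof(float));
    cudaMalloc((void**)&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    const int block = 256;
    saxpy<<<(n + block - 1) / block, block>>>(n, 3.0f, dx, dy);
    cudaDeviceSynchronize();

    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("y[0] = %f (expected 5.0)\n", hy[0]);   // 3*1 + 2 = 5

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```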

In addition to AI and Deep Learning workloads, your traditional HPC workloads can also benefit from a performance boost, powering scenarios like reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, rendering, and others. One of the promises of the cloud has always been agility: as your computational needs expand and shrink, and as your models improve and mature, you want to leverage the latest and greatest hardware for computation without waiting for your existing hardware to age out. With Azure, this now becomes possible.
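As a small illustration on the simulation side, here is a minimal Monte Carlo sketch in CUDA that estimates pi with the cuRAND device API. It is a toy stand-in for real reservoir or risk models, and the grid size and sample counts are arbitrary assumptions.

```cuda
// Minimal Monte Carlo sketch (pi estimation) as a stand-in for the kinds of
// simulation workloads mentioned above. Launch sizes are arbitrary assumptions.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

__global__ void monteCarloPi(unsigned long long seed, int samplesPerThread,
                             unsigned long long* hits) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState state;
    curand_init(seed, tid, 0, &state);

    unsigned long long local = 0;
    for (int s = 0; s < samplesPerThread; ++s) {
        float x = curand_uniform(&state);
        float y = curand_uniform(&state);
        if (x * x + y * y <= 1.0f) ++local;  // inside the unit quarter-circle
    }
    atomicAdd(hits, local);
}

int main() {
    const int blocks = 256, threads = 256, samplesPerThread = 4096;  // assumptions
    unsigned long long* dHits = nullptr;
    cudaMalloc((void**)&dHits, sizeof(unsigned long long));
    cudaMemset(dHits, 0, sizeof(unsigned long long));

    monteCarloPi<<<blocks, threads>>>(1234ULL, samplesPerThread, dHits);
    cudaDeviceSynchronize();

    unsigned long long hits = 0;
    cudaMemcpy(&hits, dHits, sizeof(hits), cudaMemcpyDeviceToHost);
    cudaFree(dHits);

    double total = static_cast<double>(blocks) * threads * samplesPerThread;
    std::printf("pi ~= %f\n", 4.0 * hits / total);
    return 0;
}
```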

I am excited to announce plans to release the next generation of our NC-series, the NCv2, powered by NVIDIA Tesla P100 GPUs. These new GPUs provide more than 2x the computational performance of our current NC-series. We will also offer InfiniBand networking for workloads that require a fast interconnect, such as Oil & Gas, Automotive, and Genomics, so these new sizes deliver improved scale-out capability on top of better single-instance performance.

Size         CPUs   GPU       Memory   Networking
NC6s_v2      6      1 P100    112 GB   Azure Network
NC12s_v2     12     2 P100    224 GB   Azure Network
NC24s_v2     24     4 P100    448 GB   Azure Network
NC24rs_v2    24     4 P100    448 GB   InfiniBand
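Within a single instance, one early check when scaling a job across several GPUs is whether the devices can address each other directly. The sketch below queries peer-to-peer access between every GPU pair using the CUDA runtime API; it is illustrative only, and the InfiniBand scale-out path across instances (typically driven through MPI) is not shown here.

```cuda
// Minimal sketch: check which GPU pairs in the VM can use direct peer-to-peer
// access, one ingredient of scaling a job up within a single instance.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int a = 0; a < n; ++a) {
        for (int b = 0; b < n; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            std::printf("GPU %d -> GPU %d peer access: %s\n",
                        a, b, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```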

“With these new offerings, Microsoft is bringing the benefits of the Pascal architecture to thousands of enterprises eager to transform their businesses with the power of deep learning and high performance computing,” said Ian Buck, General Manager of Accelerated Computing at NVIDIA. “The demand for accelerated computing with GPUs has never been higher, and we are thrilled to be working with Microsoft to provide the leading edge computing platform for Azure.”

I am confident that with these upcoming new sizes, you will be able to deploy our cutting-edge virtual machines for your accelerated workloads.

These new sizes will be available later this year. To sign up for the preview, please visit the sign-up page.