Supercomputing performance in the cloud

Published: July 9, 2021

The use of high-performance computing (HPC) platforms to power use cases such as AI, machine learning, and deep learning continues to push the bounds of available infrastructure performance. As data scientists and technologists develop increasingly complex and demanding models, access to supercomputer-class infrastructure determines how quickly these applications can deliver insights and business value. However, building infrastructure that can deliver TOP500-level performance (the TOP500 is a biannual ranking of the world's most powerful supercomputers) is costly and takes time to test and deploy.

By using the Azure cloud, customers can access supercomputer-class systems without having to physically build out the system and infrastructure. It also gives them access to newer technologies as they become available, without being locked into a physical cluster for its lifetime (typically 3-5+ years). As a result, there is an increasing desire to make use of public cloud services that can provide infrastructure for demanding HPC use cases as it is needed.

The real growth in public cloud usage will be driven by supercomputer-class cloud HPC instances or virtual machines, and by access to the most up-to-date libraries and software that support AI, deep learning, and machine learning workloads. This demand will come from companies of all sizes: a recent Forrester study found that 39% of small and medium-sized firms plan to run HPC and AI workloads on public cloud services.