Azure high-performance computing (HPC) for silicon
Scalable, secure, on-demand, high-performance infrastructure with compute, storage, and networking optimized for EDA workloads.
Customer-validated, production usage for hybrid (burst) and all-in-Azure models, as well as phased, multi-year migrations from on-premises to Azure.
Flexible, optimizable reference architecture that fits the needs of each tool/workload in the design flow.
Seamless deployment using industry-standard scheduling tools and storage standards.
Ecosystem partnerships that advance cloud-centric productivity gains through intelligent scaling and AI/ML.
Azure HPC silicon scenarios
Achieve fine-grained control over front- and back-end chip design flow, as well as block and full chip levels of physical hierarchy, by optimizing your chip design process with Azure technologies and architecture.
Dramatically improve turnaround time and business ROI for highly scalable processes such as foundational and custom IP design, characterization, and quantification.
Improve yield and uptime by capturing and analyzing foundry data with Azure IoT, cognitive AI, machine learning, and mixed reality technologies. Enable fault prediction, perform preventive maintenance, automate machine tuning and processes, and optimize process flows, quality, and testing.
Silicon supply chain
Improve the design sales process and optimize your supply chain by integrating your business planning, inventory optimization, lifecycle management, component tracking, and logistics and distribution operations with cloud-native applications built on Azure IoT, machine learning, blockchain, and mixed reality technologies.
Contact your Microsoft account team for more information about Azure semiconductor solutions, or learn more in the white paper.
Powerful infrastructure as a service (IaaS) for the silicon industry
- Get the cloud infrastructure support you need to create, manage, operate, and optimize HPC and big compute clusters—at any scale and for any software stack—with Azure CycleCloud, including support for industry-standard job schedulers such as Platform Load Sharing Facility (LSF).
- Handle large volumes of I/O at sub-millisecond latency with Azure NetApp Files, delivered within the Azure data center with predictable, low latencies consistent with on-premises performance.
- Keep your data placement efficient, flexible, and cost-effective by deploying a hybrid infrastructure environment. Burst your EDA applications into Azure using data stored in on-premises NAS devices and scale to your needs using an intelligent cache with Azure HPC Cache.
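To make the burst model above concrete, the sketch below shows what an EDA batch job might look like when submitted through LSF to a CycleCloud-managed cluster. The queue name, job name, resource figures, and tool invocation are all illustrative placeholders, not values from the source; adapt them to your own environment.

```shell
#!/bin/bash
# Hypothetical LSF submission script for an EDA verification job on an
# Azure CycleCloud-managed cluster. Queue, job, and tool names below are
# assumptions for illustration only.
#BSUB -J drc_block_a          # job name (placeholder)
#BSUB -q eda_burst            # queue assumed to map to an Azure burst node array
#BSUB -n 32                   # number of slots (cores) requested
#BSUB -R "rusage[mem=8192]"   # per-slot memory reservation in MB
#BSUB -o logs/%J.out          # stdout log; %J expands to the LSF job ID

# The tool command is illustrative; substitute your actual EDA invocation.
calibre -drc -hier -turbo 32 block_a.rules
```

Saved as, say, `drc_block_a.lsf`, the script would be submitted with `bsub < drc_block_a.lsf`; LSF dispatches it to cloud nodes exactly as it would to on-premises hosts, which is what lets existing flows burst without modification.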
"AMD is pleased to see that Mentor's Calibre nmDRC scales on cloud-based AMD Epyc-powered servers not just in traditional use models, but also on the Azure public cloud."Daniel Bounds, Senior Director of AMD Datacenter Product, AMD
Demonstrating the power of cloud computing, a 5nm test chip from TSMC took less than four hours to complete its verification as part of a joint project with Mentor, Microsoft Azure, and TSMC.
Cray in Azure delivers complete on-premises supercomputing capabilities with all the benefits of the cloud. With Microsoft, Cray brings the power of supercomputing to a much broader array of manufacturing organizations.
Get large amounts of I/O with sub-millisecond latency, or large bandwidth for scale-up or scale-out environments, for your cloud-native applications. With a consistently low-latency experience regardless of region, Azure NetApp Files provides an on-premises-like NFS (and, soon, SMB) protocol experience.
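Because Azure NetApp Files exposes a standard NFS protocol endpoint, compute nodes can mount a volume with an ordinary `mount` command. The sketch below assumes an existing NFSv3 volume; the IP address and export path are placeholders, and the `rsize`/`wsize` values reflect commonly recommended large transfer sizes rather than mandatory settings.

```shell
# Hypothetical mount of an Azure NetApp Files NFSv3 volume from a Linux
# compute node. 10.0.0.4:/eda-vol1 is a placeholder endpoint and export.
sudo mkdir -p /mnt/anf
sudo mount -t nfs \
  -o rw,hard,tcp,vers=3,rsize=262144,wsize=262144 \
  10.0.0.4:/eda-vol1 /mnt/anf
```

Once mounted, EDA tools read and write `/mnt/anf` as they would any on-premises NAS path, which is what makes lift-and-shift of existing flows practical.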