
Batch computing at a fraction of the price

Today at Microsoft Build 2017, we are delighted to announce the public preview of a new way to obtain and consume Azure compute at a much lower price using Azure Batch – low-priority VMs. Low-priority VMs are allocated from our surplus compute capacity and are available for up to an 80% discount, enabling certain types of workloads to run for a significantly reduced cost or allowing you to do much more for the same cost.

What are low-priority VMs?

We are giving you access to spare capacity that can exist in each region, for a significantly reduced price. The amount of spare capacity varies by region and VM size according to multiple factors, including day of week, time of day, and demand for different VM sizes. We effectively let you “borrow” this unused capacity for a great price. However, it comes with the understanding that some or all of the capacity may not be available when you request it, and that for capacity you have already been allocated, there are occasions when we’ll need to take some or all of it back. Hence the name – low-priority VMs may not be allocated, or may be preempted, in favor of higher-priority allocations, which equate to full-priced VMs that have an SLA.

The price for low-priority VMs is fixed: each VM size now has a low-priority price in addition to the existing full price. See the Azure Batch pricing page for more details.

How can I use low-priority VMs?

Huge cost savings are possible, but low-priority VMs are not suitable for all workloads, given there are times when they are not available or get preempted. Batch processing jobs are one of the main types of workload that can leverage low-priority VMs, which is why we have made them available through Azure Batch.

Batch processing jobs consist of one or more discrete tasks, normally run using multiple VMs. Jobs can therefore be tolerant of interruptions and may have flexibility in how long they take to run. Jobs that can tolerate interrupted tasks, have tasks that execute in a shorter time, and have flexibility in job execution time are best suited to take advantage of low-priority VMs.

Azure Batch has first-class support for low-priority VMs, with the goal of making them easier to consume – handling any interrupted tasks and allowing you to balance job execution time with job cost. For example, Azure Batch pools can contain both normal on-demand VMs and low-priority VMs, and the number of VMs of each type can be rebalanced at any time.
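As a rough illustration, the sketch below creates a pool containing both node types using the Azure Batch Python SDK (azure-batch). The account name, key, URL, pool ID, VM size, image details, and node counts are placeholder assumptions you would replace with your own values.

```python
# A minimal sketch of a mixed pool with the Azure Batch Python SDK (azure-batch).
# The account name, key, URL, pool ID, image details, and node counts are placeholders.
from azure.batch import BatchServiceClient
from azure.batch import models as batchmodels
from azure.batch.batch_auth import SharedKeyCredentials

credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
# Note: newer SDK releases name this parameter batch_url rather than base_url.
client = BatchServiceClient(
    credentials, base_url="https://mybatchaccount.myregion.batch.azure.com")

pool = batchmodels.PoolAddParameter(
    id="mixed-pool",
    vm_size="STANDARD_D2_V2",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="Canonical",
            offer="UbuntuServer",
            sku="16.04-LTS",
            version="latest"),
        node_agent_sku_id="batch.node.ubuntu 16.04"),
    target_dedicated_nodes=2,      # normal on-demand nodes
    target_low_priority_nodes=8)   # low-priority nodes at the reduced price

client.pool.add(pool)
```

Rebalancing later is simply a resize: the pool resize operation takes separate target counts for dedicated and low-priority nodes, so each type can be adjusted independently.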

Virtually all workloads that can use Azure Batch can take advantage of low-priority VMs, for example:

  • Media processing and transcoding. Some users may need their output produced in a fixed time, which means the capacity required to process the job in that time must be available. Other users may have flexibility and can accept their output taking longer to produce, in which case their jobs could be run using low-priority VMs at a lower cost.
  • Rendering. Jobs are split into many tasks, with individual frames, or even tiles of frames, able to be rendered in parallel. Jobs may consist of tens or hundreds of thousands of tasks.
  • Testing. A lot of testing requires large scale and has some flexibility in when it completes. Large-scale regression and load testing are particularly well suited.

An example of an Azure Batch customer who has been using low-priority VMs is Combinostics. Combinostics is a young startup whose cNeuro suite of tools processes brain images, providing clinical decision support for neurological disorders. Their initial cMRI module processes brain MRIs. Processing MRIs in production requires considerable compute, but validating updated algorithms also requires a large amount of regression testing, especially given that they are providing a medical service. Low-priority VMs will allow Combinostics to significantly lower the costs associated with large-scale testing, which is a huge benefit.

“For a young startup, time and money are the most precious resources. Using cloud infrastructure allows us to spend less time setting up and managing the running of image processing algorithms and more time creating value for our customers. The reduced cost of low-priority VMs will save us money, but it also makes us much more likely to run against the full test set more often. Being able to comprehensively test the algorithms often reduces the development effort, as issues are discovered earlier.”

– Jussi Mattila, Head of Research and Development, Combinostics

Azure Batch features

Low-priority VMs are easy to use with Azure Batch: they can be combined with normal on-demand VMs, and job cost can be balanced against job execution flexibility.

  • Batch pools can contain both on-demand nodes and low-priority nodes. The two types can be independently scaled, either explicitly with the resize operation or automatically using auto-scale (see the sketch after this list). Different configurations can be used, such as maximizing cost savings by always using low-priority nodes, or spinning up on-demand nodes at full price to maintain capacity by replacing any preempted low-priority nodes.
  • If any low-priority nodes are preempted, then Batch will automatically attempt to replace the lost capacity, continually seeking to maintain the target amount of low-priority capacity in the pool.
  • If a task is interrupted because the node on which it is running is preempted, then the task is automatically re-queued to be re-run.
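As an example of the auto-scale option mentioned above, the sketch below (reusing the client from the earlier pool example) keeps a small on-demand core and sizes the low-priority portion from the number of pending tasks. The pool ID, node counts, and sampling window are illustrative assumptions; $TargetDedicatedNodes, $TargetLowPriorityNodes, and $PendingTasks are variables from the Batch auto-scale formula language.

```python
# A sketch of enabling auto-scale on an existing pool ("mixed-pool" is a placeholder).
# The formula keeps 2 on-demand nodes and sizes the low-priority portion from the
# average pending task count over the last 15 minutes, capped at 100 nodes.
import datetime

formula = """
$TargetDedicatedNodes = 2;
pending = avg($PendingTasks.GetSample(TimeInterval_Minute * 15));
$TargetLowPriorityNodes = min(100, pending);
"""

client.pool.enable_auto_scale(
    pool_id="mixed-pool",
    auto_scale_formula=formula,
    auto_scale_evaluation_interval=datetime.timedelta(minutes=10))
```

Between evaluations, preempted low-priority nodes still count against the target, so Batch keeps trying to restore the target low-priority capacity as described in the second bullet above.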

More information


We’re excited to see how you put Azure Batch low-priority VMs to use, and we look forward to your feedback on how we can further improve this offering!