Continuing our journey to make Azure Backup enterprise grade, we previously announced support for protecting volumes up to 54 TB on file servers and have seen strong adoption. Customers are backing up very large file server volumes with large numbers of files – the largest so far has over 13 million files!

We also announced broad availability of Azure Backup Server to protect business-critical workloads such as SQL Server, SharePoint, Exchange, and VMs with Azure Backup. Today, we are releasing a slew of features to further optimize cloud backups for enterprise data.

Faster backups

Azure Backup now leverages the Update Sequence Number (USN) journal in Windows to track files that have changed between consecutive backups. The USN journal records changes to files and directories on a volume, which allows changed files to be identified quickly and makes backups faster. We have seen backup times drop by up to 50% on volumes with two million files using this optimization. Note that individual file server backup times will vary depending on the number of files, the size of each file, and the directory structure.
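For the curious, the change journal that makes this possible is exposed through NTFS control codes. Below is a minimal, hypothetical C sketch – not Azure Backup's code – that queries a volume's journal state with FSCTL_QUERY_USN_JOURNAL and then walks its change records with FSCTL_READ_USN_JOURNAL. A backup engine would resume from the USN it saved after the previous backup rather than from the start. Compile with a Windows SDK and run elevated; the C: volume is just an example.

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Open the volume itself; reading the journal requires admin rights. */
    HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE) return 1;

    /* Ask NTFS for the journal's state: its ID and the range of USNs. */
    USN_JOURNAL_DATA_V0 jd;
    DWORD bytes;
    if (!DeviceIoControl(hVol, FSCTL_QUERY_USN_JOURNAL, NULL, 0,
                         &jd, sizeof(jd), &bytes, NULL)) return 1;

    /* Request records from the oldest available USN onward. */
    READ_USN_JOURNAL_DATA_V0 rd = {0};
    rd.StartUsn     = jd.FirstUsn;
    rd.ReasonMask   = 0xFFFFFFFF;       /* report every kind of change */
    rd.UsnJournalID = jd.UsnJournalID;

    DWORDLONG raw[512];                 /* 8-byte aligned record buffer */
    BYTE *buf = (BYTE *)raw;
    if (DeviceIoControl(hVol, FSCTL_READ_USN_JOURNAL, &rd, sizeof(rd),
                        buf, sizeof(raw), &bytes, NULL)) {
        /* Output starts with the next USN to resume from, then records. */
        USN_RECORD_V2 *rec = (USN_RECORD_V2 *)(buf + sizeof(USN));
        while ((BYTE *)rec < buf + bytes) {
            wprintf(L"%.*s (reason 0x%08lx)\n",
                    (int)(rec->FileNameLength / sizeof(WCHAR)),
                    (WCHAR *)((BYTE *)rec + rec->FileNameOffset),
                    rec->Reason);
            rec = (USN_RECORD_V2 *)((BYTE *)rec + rec->RecordLength);
        }
    }
    CloseHandle(hVol);
    return 0;
}
```

Because the journal is an append-only log of changes, scanning it is proportional to the amount of churn since the last backup rather than to the total number of files on the volume – which is why the gains are largest on volumes with millions of files.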

Reduced cache space

Previously, Azure Backup required cache space equal to 15% of the volume size being backed up to Azure. While this was generally fine for smaller volumes, it became prohibitive for volumes larger than 10 TB. The cache space primarily consists of a metadata VHD used for cataloguing the files being backed up. We have introduced a new algorithm to compute this metadata that uses far less disk space. In our internal tests with large volumes, we now see a cache space requirement of less than 5%, a 3X improvement. Accordingly, we are updating our cache space requirement to less than 5% of the size of the data being backed up.
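For a sense of scale, here is a quick back-of-the-envelope comparison using the figures above on a maximum-size 54 TB volume. This is illustrative arithmetic only, not a sizing tool:

```c
/* Illustrative arithmetic only: compares the old and new cache space
 * requirements using the percentages from this post. */
#include <stdio.h>

int main(void)
{
    const double volume_tb = 54.0;              /* largest supported volume */
    const double old_cache = volume_tb * 0.15;  /* previous requirement     */
    const double new_cache = volume_tb * 0.05;  /* new upper bound          */

    printf("Volume size:          %.0f TB\n", volume_tb);
    printf("Old cache (15%%):      %.1f TB\n", old_cache);   /* 8.1 TB */
    printf("New cache (under 5%%): %.1f TB\n", new_cache);   /* 2.7 TB */
    printf("Improvement:          %.0fX\n", old_cache / new_cache);
    return 0;
}
```

On a 54 TB volume, that is the difference between setting aside 8.1 TB and 2.7 TB of local disk for the cache.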

Increased retention

Azure Backup has increased the maximum number of recovery points for cloud backups from 366 to 9,999. This enables flexible retention policies that meet stringent compliance requirements, such as HIPAA, for large enterprises.
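To see why this matters, consider a hypothetical long-term retention policy – the daily/weekly/monthly/yearly counts below are made-up examples, not Azure Backup defaults. Even this modest compliance-style policy overflows the old limit but fits easily within the new one:

```c
/* Hypothetical retention policy shape; only the 366 and 9,999 limits
 * come from this post. */
#include <stdio.h>

int main(void)
{
    const int daily   = 365;   /* one daily point kept for a year      */
    const int weekly  = 104;   /* weekly points kept for two years     */
    const int monthly = 60;    /* monthly points kept for five years   */
    const int yearly  = 10;    /* yearly points kept for a decade      */
    const int total   = daily + weekly + monthly + yearly;

    printf("Recovery points needed: %d\n", total);             /* 539 */
    printf("Fits old limit (366)?  %s\n", total <= 366  ? "yes" : "no");
    printf("Fits new limit (9999)? %s\n", total <= 9999 ? "yes" : "no");
    return 0;
}
```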

And more…

We have made additional optimizations to improve the customer experience when protecting large data sources to Azure. The service has increased timeouts across the various phases of the backup process to ensure long-running jobs complete reliably. Cataloguing the backup data has been decoupled from the main process of uploading the backup data, so cloud backups are handled more efficiently (see the sketch below). Azure Backup also leverages the DPM writer service to determine incremental changes for cloud backups; this service saw intermittent failures and has now been hardened for better reliability of cloud backups.
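The shape of the cataloguing change can be pictured with a small, hypothetical sketch – this is not Azure Backup's actual implementation. The upload completes and produces a recovery point, and cataloguing then runs on a background thread so it never blocks the next job:

```c
/* Hypothetical sketch of decoupling cataloguing from upload. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI catalog_worker(LPVOID arg)
{
    /* Catalogue the completed recovery point in the background. */
    const char *recovery_point = (const char *)arg;
    printf("cataloguing %s...\n", recovery_point);
    Sleep(100);  /* stand-in for the real cataloguing work */
    printf("catalogue for %s done\n", recovery_point);
    return 0;
}

int main(void)
{
    /* Upload the backup data first; this is the critical path. */
    printf("uploading backup data...\n");
    Sleep(100);  /* stand-in for the transfer to Azure */
    printf("upload complete, recovery point created\n");

    /* Kick off cataloguing without blocking the next backup job. */
    HANDLE h = CreateThread(NULL, 0, catalog_worker,
                            (LPVOID)"RP-example", 0, NULL);
    printf("next backup job can start immediately\n");

    WaitForSingleObject(h, INFINITE);  /* tidy shutdown for the demo */
    CloseHandle(h);
    return 0;
}
```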

Download the latest Azure Backup agent and get started with these great features!
