
Enhance Azure SQL Data Warehouse performance with new monitoring functionality for Columnstore

This blog shows you how to monitor Columnstore health in Azure SQL Data Warehouse using a new DMV.

Azure SQL Data Warehouse (SQL DW) is a SQL-based, petabyte-scale, massively parallel cloud solution for data warehousing. It is fully managed and highly elastic, enabling you to provision and scale capacity in minutes. You can scale compute and storage independently, so you can handle everything from burst to archival scenarios.

Azure SQL DW is powered by a Columnstore engine that provides super-fast performance for analytic workloads. This is the same Columnstore engine that is built into the industry-leading Microsoft SQL Server database. To get full speed from Azure SQL DW, it is important to maximize Columnstore row group quality. A row group is the chunk of rows that is compressed together in the Columnstore. To make it easier to monitor and tune row group quality, we are now exposing a new Dynamic Management View (DMV).
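
For reference, tables in SQL DW use a clustered Columnstore index by default, and rows loaded into them are compressed into row groups. The sketch below creates a hypothetical distributed fact table to make the concept concrete; the table, column, and distribution-key names are illustrative assumptions, not part of the original example.

    -- Hypothetical fact table; rows loaded into it are compressed into
    -- Columnstore row groups of up to 1,048,576 rows per group.
    CREATE TABLE dbo.FactInternetSales
    (
        ProductKey   int   NOT NULL,
        OrderDateKey int   NOT NULL,
        CustomerKey  int   NOT NULL,
        SalesAmount  money NOT NULL
    )
    WITH
    (
        DISTRIBUTION = HASH(ProductKey),   -- hash-distributed across the service's distributions
        CLUSTERED COLUMNSTORE INDEX        -- the default table structure in SQL DW
    );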

What is a High Quality Row Group?

A row group with 1 million rows (1,048,576 rows to be precise) is of ideal quality, and under the right circumstances this is what Azure SQL DW will create. Under sub-optimal conditions such as insufficient memory, row groups with fewer rows are created. This can adversely impact compression quality and increase the per-row overhead of the ancillary structures kept for each row group. This in turn can dramatically reduce the performance of your queries (note: SQL DW now prevents creation of row groups with fewer than 10,000 rows).

How to Monitor Row Group Quality?

Azure SQL DW now has a new DMV (sys.dm_pdw_nodes_db_column_store_row_group_physical_stats) that exposes physical statistics about the row groups of a Columnstore table. Don’t be intimidated by the long name – there is a convenient view (vCS_rg_physical_stats) that you can use to query this DMV. The key piece of information is trim_reason_desc, which tells you whether a row group was prematurely trimmed. If it was not trimmed, it is of ideal quality (trim_reason_desc = NO_TRIM). If it was trimmed, trim_reason_desc is set to the reason for the trim, such as MEMORY_LIMITATION or DICTIONARY_SIZE. The example screenshot below shows a snapshot of a table with poor-quality row groups due to various trim reasons.

[Screenshot: vCS_rg_physical_stats output showing row groups trimmed for various reasons (DMVBlogSS1)]
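
If you prefer to run the check yourself, a query along the lines of the sketch below summarizes row group quality per trim reason for a single table. It assumes the vCS_rg_physical_stats view is deployed and exposes a logical_table_name column alongside the DMV's total_rows and trim_reason_desc columns; the table name is hypothetical, so adjust the filter to match your environment and the view definition you use.

    -- Summarize row group quality for one table (hypothetical name).
    SELECT   trim_reason_desc,
             COUNT(*)        AS row_group_count,
             SUM(total_rows) AS total_rows
    FROM     vCS_rg_physical_stats
    WHERE    logical_table_name = 'FactInternetSales'   -- assumption: the view exposes this column
    GROUP BY trim_reason_desc
    ORDER BY row_group_count DESC;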

How to Improve Row Group Quality?

Once you identify trimmed row groups there are corrective actions you can take to fix them depending upon what trim_reason_desc says. Here are some tips for the most significant ones:

  • BULKLOAD: This is the trim reason when the incoming batch of rows for the load had fewer than 1 million rows. The engine will create compressed row groups whenever more than 100,000 rows are being inserted (as opposed to inserting into the delta store), but will set the trim reason to BULKLOAD. To get past this, consider increasing your batch load window to accumulate more rows. Also, re-evaluate your partitioning scheme to ensure it is not too granular, as row groups cannot span partition boundaries.
  • MEMORY_LIMITATION: To create row groups with 1 million rows, a certain amount of working memory is required by the engine. When the available memory of the loading session is less than the required working memory, row groups get prematurely trimmed. The columnstore compression article explains what you can do to fix this, but in a nutshell the rule of thumb is to load your data with a user in at least the mediumrc resource class (see the sketch after this list). You also need to be on a sufficiently large SLO to have enough memory for your loading needs.
  • DICTIONARY_SIZE: This indicates that row group trimming occurred because there was at least one string column with very wide and/or high-cardinality strings. The dictionary size is limited to 16 MB in memory, and once this limit is reached the row group is compressed with however many rows it has. If you do run into this situation, consider isolating the problematic column into a separate table.
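
As a concrete illustration of the MEMORY_LIMITATION tip, the sketch below assigns a hypothetical loading user (LoadUser) to the mediumrc resource class so its load sessions get more working memory. The user name is an assumption, and largerc or xlargerc may be needed for wider tables.

    -- Hypothetical loading user; the name is illustrative.
    CREATE USER LoadUser FOR LOGIN LoadUser;

    -- Resource classes are database roles, so sp_addrolemember assigns one.
    -- Use largerc or xlargerc if mediumrc still produces trimmed row groups.
    EXEC sp_addrolemember 'mediumrc', 'LoadUser';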

The screenshot below shows a copy of the same table with row group quality fixed by following the recommendations to avoid trimming due to MEMORY_LIMITATION.

[Screenshot: vCS_rg_physical_stats output for the same table after fixing MEMORY_LIMITATION trimming (DMVBlogSS2)]
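
Loading future batches with more memory does not recompress row groups that are already trimmed. One way to fix existing row groups (one option among several, and not necessarily how the result above was produced) is to rebuild the Columnstore index from a session running under a larger resource class, as sketched below with a hypothetical table name.

    -- Run from a session whose user has a larger resource class (e.g. mediumrc or above)
    -- so the rebuild has enough memory to form full-sized row groups.
    ALTER INDEX ALL ON dbo.FactInternetSales REBUILD;

    -- Or target a single partition instead of the whole table:
    ALTER INDEX ALL ON dbo.FactInternetSales REBUILD PARTITION = 1;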

Next Steps

Now that you know how to monitor your Columnstore row group quality, you can maintain it for optimal performance, both proactively as part of your regular loads and by fixing quality issues if they arise. If you are not already using Azure SQL DW, we encourage you to try it out for your Business Intelligence and Business Analytics workloads.

Learn More

Check out the many resources for learning more about Azure SQL DW, including:

What is Azure SQL Data Warehouse?

SQL Data Warehouse Best Practices

MSDN Forum

Stack Overflow Forum