In development: query interleaving for Azure Analysis Services
Updated: July 11, 2019
Azure Analysis Services delivers enterprise-grade BI semantic modeling capabilities with the scale, flexibility, and management benefits of the cloud, helping you transform complex data into actionable insights. Enterprise BI systems need to support high user concurrency, which means many queries can be submitted close together. We are pleased to announce that we are working on the query interleaving feature, which allows system configuration to improve the user experience in high-concurrency scenarios.
By default, the Analysis Services tabular engine works in a "first in, first out" (FIFO) fashion with regard to CPU. This means, for example, that if one expensive, slow storage-engine query is received and closely followed by two otherwise fast queries, the fast queries can be blocked waiting for the expensive query to complete. This is represented by the following diagram, which shows Q1, Q2, and Q3 as the respective queries, their duration, and CPU time.
With query interleaving, concurrent queries can share CPU resources, so fast queries are not blocked behind slow ones. The time it takes to complete all three queries is still about the same, but Q2 and Q3 no longer have to wait for Q1 to finish.
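The difference between the two scheduling behaviors can be illustrated with a toy single-CPU simulation. This is only a conceptual sketch, not the engine's actual scheduler; the query names and CPU costs are made up to mirror the diagram above, and the one-unit round-robin is a simplification of how interleaving shares CPU.

```python
# Toy model: one CPU, work measured in abstract units.
# Q1 is the expensive query; Q2 and Q3 are fast queries (illustrative values).

def fifo(queries):
    """Run each query to completion in arrival order; return finish times."""
    t, finish = 0, {}
    for name, cost in queries:
        t += cost
        finish[name] = t
    return finish

def interleaved(queries):
    """Round-robin one work unit at a time across all active queries."""
    remaining = dict(queries)
    t, finish = 0, {}
    while remaining:
        # Snapshot the active queries so we can remove finished ones safely.
        for name in [n for n, _ in queries if n in remaining]:
            t += 1
            remaining[name] -= 1
            if remaining[name] == 0:
                finish[name] = t
                del remaining[name]
    return finish

queries = [("Q1", 6), ("Q2", 1), ("Q3", 1)]
print(fifo(queries))         # {'Q1': 6, 'Q2': 7, 'Q3': 8}
print(interleaved(queries))  # {'Q1': 8, 'Q2': 2, 'Q3': 3}
```

Note that the last query still finishes at the same time (8 units) in both cases; interleaving changes who waits, not the total work done.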
Query interleaving is intended to have little or no performance impact for queries run in isolation; a single query can still consume as much CPU as it does using the FIFO model.
Query interleaving can be configured with a short-query bias. This means fast queries (defined by how much CPU each query has already consumed) can be allocated a higher proportion of resources than long-running queries, allowing them to complete in a reasonably short time. In the following illustration, the Q2 and Q3 queries are deemed "fast queries" and are therefore allocated more CPU than Q1.
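A short-query bias can be sketched by extending the round-robin model so that queries with little accumulated CPU receive a larger share per round. Again, this is a conceptual toy, not the engine's implementation; the `fast_threshold` and `fast_share` parameters are hypothetical knobs invented for illustration.

```python
def biased_interleave(queries, fast_threshold=2, fast_share=3):
    """Weighted round-robin on one CPU: a query that has consumed less than
    fast_threshold units so far gets fast_share units per round; others get 1.
    (Illustrative parameters, not actual engine settings.)"""
    remaining = dict(queries)
    used = {name: 0 for name, _ in queries}
    t, finish = 0, {}
    while remaining:
        for name in [n for n, _ in queries if n in remaining]:
            share = fast_share if used[name] < fast_threshold else 1
            work = min(share, remaining[name])
            t += work
            used[name] += work
            remaining[name] -= work
            if remaining[name] == 0:
                finish[name] = t
                del remaining[name]
    return finish

# Q1 is expensive; Q2 and Q3 are short. Under plain FIFO they would finish
# at 6, 8, and 10; with the bias the short queries jump ahead:
print(biased_interleave([("Q1", 6), ("Q2", 2), ("Q3", 2)]))
# {'Q2': 5, 'Q3': 7, 'Q1': 10}
```

Because a query is classified by CPU already consumed rather than by a prediction of its cost, even a query that turns out to be expensive starts with the favorable share and is demoted only after it has used its initial allocation.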
We hope you can see that query interleaving with short-query bias will add great value to enterprise BI systems on Azure Analysis Services!