We are pleased to announce several enhancements to Azure SQL Data Warehouse. In addition to improvements in stability and reliability, we have added several new features, many of which are driven directly from customer feedback. If you have feedback on features you would like to see in future versions, please use our Feedback page.

Azure Active Directory (AAD) authentication

Data access management can now be greatly simplified with Azure Active Directory, which hosts almost 9.5 million directories and 600 million user accounts. With AAD authentication, you can manage the identities of database users and other Microsoft services in one central location. This seamless integration between the services now enables you to:

  • Give your end users access to your data warehouse without having to create individual user names and passwords.
  • Offer single sign-on for your database using your domain security groups and users.
  • Enable token-based authentication for applications connecting to SQL Data Warehouse.
  • Eliminate the storing of passwords by enabling Windows authentication and other forms of authentication supported by Azure Active Directory.
  • Eliminate the proliferation of user identities across database services.

For more details, see Connecting to SQL Data Warehouse By Using Azure Active Directory Authentication.
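
As a quick sketch of what this looks like in practice (the account name below is illustrative, not from this announcement), an administrator can map an Azure AD identity to a database user so that no separate SQL password has to be created or stored:

-- Illustrative sketch: create a database user for an Azure AD identity
-- (the account name is a placeholder) so no SQL password is stored.
CREATE USER [appuser@contoso.com] FROM EXTERNAL PROVIDER;

-- Grant access through an existing database role.
EXEC sp_addrolemember 'db_datareader', 'appuser@contoso.com';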

Large row support

Support for large rows has been a frequent ask from our users. This update adds support for rows larger than 32 KB and for data types over 8 KB: varchar(max), nvarchar(max), and varbinary(max). These data types make it even easier to migrate to the cloud by letting you reuse more of the data types from your existing table definitions.

In this first iteration of large row support, there are a few limits in place that will be lifted in future updates. For now, loading of large rows is supported only through Azure Data Factory (with BCP), Azure Stream Analytics, SSIS, BCP, or the .NET SqlBulkCopy class; PolyBase support for large rows will be added in a future release. As in Azure SQL Database, the new large data types such as varchar(max) are limited to heap and clustered index tables. Large rows themselves are supported for clustered columnstore tables; it is only the large data type support on clustered columnstore that will come later.
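
For example, a migrated table that carries a large text column might be declared on a clustered index table as in the sketch below (table and column names are illustrative):

-- Illustrative sketch: large data types are currently limited to heap and
-- clustered index tables, so a migrated document table could look like this.
CREATE TABLE StagingDocuments
(
    DocumentId   INT          NOT NULL,
    DocumentBody VARCHAR(MAX) NULL
)
WITH (CLUSTERED INDEX (DocumentId));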

Transaction safety limits

In order to guard against long rollbacks, transaction size limits have been implemented. The limits are generous enough to allow large operations that may run for over an hour, but small enough to prevent a rollback that could itself run for many hours. This safety limit was put in place to prevent conditions that could turn into a long database outage, such as a user pausing the data warehouse during a long-running transaction. As a general best practice, it is good to limit transaction sizes for both performance and safety, and there are several simple steps to achieve this. For more information on how to optimize your workloads by limiting transaction sizes, see our best practices guide for transactions.
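
One common pattern, sketched below with illustrative table and column names, is to replace a single very large DELETE or UPDATE with a CTAS statement that keeps only the rows you want, followed by a pair of renames, so that each individual transaction stays small:

-- Illustrative sketch: keep only the recent rows with CTAS, then swap the tables.
-- Each statement is a much smaller transaction than one massive DELETE.
CREATE TABLE dbo.FactSales_New
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)
AS
SELECT * FROM dbo.FactSales WHERE DateKey >= 20160101;

RENAME OBJECT dbo.FactSales TO FactSales_Old;
RENAME OBJECT dbo.FactSales_New TO FactSales;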

Improved Visual Studio support

Visual Studio now supports automatic installation and updates of the latest SQL Server Data Tools (SSDT) version for SQL Data Warehouse support. This improved integration means that SSDT will now be updated automatically when new versions are released, eliminating the need to manually search for and install SSDT. There have been a few recent fixes to SSDT that improve support for SQL Data Warehouse. To ensure that you have all of those fixes, be sure to use the latest version of SSDT, which is currently 14.0.60311.1. To upgrade to the latest SSDT version, in Visual Studio click Tools / Extensions and Updates… / Updates / Product Updates and look for Microsoft SQL Server Update for database tooling. Future versions of SSDT will add more features, such as database project support, which is available for SQL Database but not yet for SQL Data Warehouse.

For more details on how to use Visual Studio with SQL Data Warehouse, see Connect to SQL Data Warehouse with Visual Studio.

T-SQL support

Several T-SQL data types and commands previously only available in SQL Server are now available in SQL Data Warehouse. These features add functionality and reduce some of the migration work between the two platforms.

Rand()

This release also includes the RAND() function, which returns a pseudo-random float value from 0 up to, but not including, 1. The following query shows two randomly generated numbers: one seeded from a random value and one from a fixed seed:

SELECT RAND(CHECKSUM(NEWID())) as Rand, RAND(8) as RandSeed;

Checksum() and Binary_Checksum()

When migrating your database from one SQL platform to another, the CHECKSUM() and BINARY_CHECKSUM() functions are a useful way to validate the data transfer between platforms. The difference between the two functions is how they handle string data: CHECKSUM follows the column collation, so with a case-insensitive collation it returns the same value regardless of case, while BINARY_CHECKSUM operates on the underlying bytes and returns different values for different casing.

SELECT
    CHECKSUM('case') as ChecksumLower,
    CHECKSUM('CASE') as ChecksumUpper,
    BINARY_CHECKSUM('case') as ChecksumBinaryLower,
    BINARY_CHECKSUM('CASE') as ChecksumBinaryUpper;

Numeric, UniqueIdentifier and NEWID()

Two new data types have been added to SQL Data Warehouse: numeric and uniqueidentifier. The numeric data type is equivalent to decimal and provides for easier migration. The uniqueidentifier data type stores GUIDs and simplifies migration of databases that already use it. In conjunction with uniqueidentifier, support for the NEWID() function has been added, which generates a new unique GUID. Here are some T-SQL examples for these new features:

CREATE TABLE NewFeaturesTable
(
    Id UNIQUEIDENTIFIER NOT NULL,
    BigText NVARCHAR(MAX) NULL
)
WITH (HEAP);

DECLARE @ID UNIQUEIDENTIFIER = NEWID();
-- Cast the seed string to NVARCHAR(MAX) so REPLICATE is not truncated at 8,000 bytes.
DECLARE @TEXT NVARCHAR(MAX) = REPLICATE(CAST('This text will be long...' AS NVARCHAR(MAX)), 1000);
INSERT INTO NewFeaturesTable (Id, BigText) VALUES (@ID, @TEXT);
 
SELECT
    Id,
    DATALENGTH(BigText) AS [Length],
    CHECKSUM(*) AS [CheckSum]
FROM NewFeaturesTable;

New management functions

When viewing security principals in a database, you can now check a user's server role membership with the IS_SRVROLEMEMBER() function. The function returns 1 (true) if the login is a member of the role and 0 (false) if it is not. In the example below, the user is not a member of the sysadmin role.

SELECT IS_SRVROLEMEMBER('sysadmin', 'ContosoHacker') as IsMember;

Database properties

The sys.database_service_objectives view allows you to query the database edition and service objective for all databases on the logical server, which can host several SQL Databases and SQL Data Warehouses. This DMV should be queried from the logical master database.

SELECT a.name, b.edition, b.service_objective
FROM
    master.sys.databases a
    INNER JOIN master.sys.database_service_objectives b
        ON a.database_id = b.database_id
ORDER BY 1;

When managing your SQL Data Warehouse, you may want to check the current service objective (DWU setting) as well as the maximum allowed size. To query these properties, use the DATABASEPROPERTYEX() function. The example below demonstrates common properties you might be interested in.

SELECT
    DATABASEPROPERTYEX('DemoDB', 'Edition') As Edition,
    DATABASEPROPERTYEX('DemoDB', 'ServiceObjective') as ServiceObjective,
    DATABASEPROPERTYEX('DemoDB', 'MaxSizeInBytes') as MaxSizeInBytes,
    DATABASEPROPERTYEX('DemoDB', 'Collation') as Collation;

Archival data compression

This release introduces archival compression for clustered columnstore tables. This optional compression algorithm allows you to further compress data to reduce storage costs. The tradeoff for the cost savings is increased CPU utilization when querying the data, so the recommendation is to use this feature for older data that you do not expect to query frequently. The example below demonstrates how archival compression can be applied to an older partition of a table with the ALTER TABLE command.

ALTER TABLE FactSales
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
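
If a partition later becomes hot again, the same REBUILD syntax can switch it back to standard columnstore compression (again using the illustrative FactSales table above):

-- Switch the partition back to regular columnstore compression when the
-- data needs to be queried frequently again.
ALTER TABLE FactSales
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = COLUMNSTORE);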

We want your feedback

In our feature planning we want to give priority to features that our users want most. We would love to hear from you on what features you would like us to add. Please let us know on our UserVoice feedback site what features you want most. Users who suggest or vote for feedback will receive periodic updates on their request and will be the first to know when the feature is released. Another way to give us feedback is our feedback survey.

Learn more

We have many resources for learning more about SQL Data Warehouse.
