NOW AVAILABLE

Data Factory adds schema import, connection tests, and custom sink ordering to data flows

Published date: February 07, 2020

Import schema from debug cluster

You can now use an active debug cluster to create a schema projection in your data flow source.

Available for every source type, importing the schema overrides the projection defined in the dataset; the dataset object itself is not changed. All previously existing methods of creating and modifying schemas remain valid and compatible.
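As a rough illustration, an imported projection ends up as column definitions on the source transformation in the underlying data flow script. The sketch below assumes a hypothetical source named MoviesSource with made-up columns; your imported schema will reflect whatever the debug cluster reads from your own source.

    source(output(
            movieId as integer,
            title as string,
            releaseDate as date
        ),
        allowSchemaDrift: true,
        validateSchema: false) ~> MoviesSource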

For more information on the projection tab, see the data flow source documentation.

Test connection on Spark cluster

You can use an active debug cluster to verify that Data Factory can connect to your linked service when using Spark in data flows. This is a useful sanity check to confirm that your dataset and linked service are configured correctly for use in data flows.

Custom sink ordering

If you have multiple destinations (sinks) in your data flow, you can now specify the order in which they are written. Write order is nondeterministic by default; enabling custom sink ordering makes your data flow write to its sinks sequentially, in the order you choose. A rough sketch of what this can look like in the data flow script follows below.
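In the sketch below, each sink carries a numeric ordering property; the stream and sink names are hypothetical, and the exact property name (shown here as saveOrder) is an assumption that should be confirmed against the custom sink ordering documentation.

    MappedData sink(allowSchemaDrift: true,
        validateSchema: false,
        saveOrder: 1) ~> WriteToSQL
    AuditData sink(allowSchemaDrift: true,
        validateSchema: false,
        saveOrder: 2) ~> WriteToBlob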

Learn more about custom sink ordering.

Read the Tech Community post for more information.
