Data Factory adds schema import, connection tests and custom sink ordering to data flows
Updated: February 7, 2020
Import Schema from debug cluster
You can now use an active debug cluster to create a schema projection in your data flow source.
This capability is available in every source type. Importing the schema overrides the projection defined in the dataset, but the dataset object itself is not changed. All previously existing methods of creating and modifying schemas remain valid and compatible.
For more information on the projection tab, see the data flow source documentation.
Test connection on Spark Cluster
You can use an active debug cluster to verify that Data Factory can connect to your linked service when using Spark in data flows. This is a useful sanity check that your dataset and linked service are configured correctly for use in data flows.
Custom sink ordering
If you have multiple destinations in your data flow, you can now specify the order in which the sinks are written. By default, the write order is non-deterministic; enabling custom sink ordering writes your data flow sinks sequentially in the order you define.
Learn more about custom sink ordering.
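The difference between the default non-deterministic behavior and custom sink ordering can be sketched as follows. This is a conceptual illustration only: the `write_sinks` function and the sink names are hypothetical and are not Data Factory APIs.

```python
# Hypothetical sketch of custom sink ordering: by default the engine may
# write sinks in any order; with an order map, lower ranks write first,
# one sink at a time. None of these names are Data Factory APIs.

def write_sinks(sinks, order=None):
    """Return the sink names in the order they would be written."""
    if order is None:
        # Default: write order across sinks is non-deterministic.
        return list(sinks)
    # Custom sink ordering: sequential writes, lowest rank first.
    return sorted(sinks, key=lambda name: order[name])

sinks = {"staging": [...], "warehouse": [...], "audit": [...]}
# Populate the staging table first, then the warehouse, then the audit log.
print(write_sinks(sinks, order={"staging": 1, "warehouse": 2, "audit": 3}))
# → ['staging', 'warehouse', 'audit']
```

Sequential ordering like this matters when a downstream sink depends on an upstream one having completed, for example loading a staging table before the table that reads from it.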
Read the Tech Community post for more information.