Azure Data Factory added several new features to mapping data flows this week: import schema and test connection from the debug cluster, and custom sink ordering.
Azure Data Factory users can now build Mapping Data Flows that use Managed Identity (formerly MSI) authentication for Azure Data Lake Store Gen 2, Azure SQL Database, and Azure Synapse Analytics (formerly SQL DW).
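For illustration, a minimal sketch of what an ADLS Gen2 linked service looks like when it relies on the factory's managed identity; the account URL is a placeholder, and leaving out explicit credentials is what selects managed-identity authentication:

```python
# Sketch of an Azure Data Lake Storage Gen2 linked service definition that
# authenticates with the data factory's managed identity. No account key or
# service principal is supplied; the factory's identity is assumed to hold a
# suitable RBAC role (e.g. Storage Blob Data Contributor) on the account.
adls_gen2_linked_service = {
    "name": "AdlsGen2ViaManagedIdentity",
    "properties": {
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://<storage-account>.dfs.core.windows.net"  # placeholder
        },
    },
}
```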
Azure Synapse Analytics introduced a new COPY statement (preview), which offers a flexible, high-throughput way to ingest data from Azure Storage.
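As a sketch of the statement in practice (table, storage path, and connection details below are all placeholders), the T-SQL loads CSV files from blob storage straight into a table:

```python
import pyodbc

# Placeholder connection to a Synapse SQL pool; adjust driver, server,
# database, and authentication for your environment.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=<workspace>.sql.azuresynapse.net;Database=<db>;"
    "UID=<user>;PWD=<password>;"
)

# COPY INTO (preview) ingests files directly from storage into a table.
# FIRSTROW = 2 skips a header row; the managed-identity credential assumes
# the workspace identity can read the storage account.
copy_sql = """
COPY INTO dbo.StagingSales
FROM 'https://<account>.blob.core.windows.net/ingest/sales/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    CREDENTIAL = (IDENTITY = 'Managed Identity'),
    FIRSTROW = 2
)
"""
cursor = conn.cursor()
cursor.execute(copy_sql)
conn.commit()
cursor.close()
```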
Azure Data Factory copy activity now supports resuming from the last failed run when you copy files between file-based data stores, easing ingestion and migration of large data volumes.
Azure Data Factory now supports SFTP as a sink and as a source. Use the copy activity to copy data from any supported data store to your SFTP server, located on-premises or in the cloud.
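A minimal sketch of an SFTP linked service that a copy activity could write to; host and credentials are placeholders, and basic authentication is shown (SSH public key authentication is the other documented option):

```python
# Sketch of an SFTP linked service definition with basic authentication.
sftp_linked_service = {
    "name": "MySftpServer",
    "properties": {
        "type": "Sftp",
        "typeProperties": {
            "host": "sftp.example.com",  # placeholder
            "port": 22,
            "authenticationType": "Basic",
            "userName": "<user>",
            "password": {"type": "SecureString", "value": "<password>"},
        },
    },
}
```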
Azure Data Factory expands parameterization capabilities to make data flows even more reusable and scalable.
Azure Data Factory copy activity now supports preserving metadata during file copy among Amazon S3, Azure Blob, and Azure Data Lake Storage Gen2.
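A hedged sketch of how a copy activity might be configured to carry metadata along; the dataset names are placeholders, and the "preserve" property follows the documented setting for attribute preservation:

```python
# Sketch of a binary copy activity that preserves user-set file metadata
# between file-based stores. The referenced datasets are assumed to be
# defined elsewhere (e.g. a Blob source and an ADLS Gen2 sink).
copy_with_metadata = {
    "name": "CopyPreservingMetadata",
    "type": "Copy",
    "inputs": [{"referenceName": "SourceBlobDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "SinkAdlsDataset", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "BinarySource"},
        "sink": {"type": "BinarySink"},
        "preserve": ["Attributes"],  # keep file metadata across the copy
    },
}
```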
Azure Data Factory wrangling data flows are now in preview.
You can now run your Azure Machine Learning service pipelines as a step in your Azure Data Factory pipelines. This lets you run your machine learning models with data from multiple sources (more than 85 data connectors are supported in Data Factory).
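A sketch of the Machine Learning Execute Pipeline activity that makes this possible; the linked service name, pipeline ID, and parameter below are placeholders:

```python
# Sketch of a Data Factory activity that runs a published Azure ML pipeline.
ml_step = {
    "name": "RunScoringPipeline",
    "type": "AzureMLExecutePipeline",
    "linkedServiceName": {
        "referenceName": "AzureMLServiceLinkedService",  # placeholder
        "type": "LinkedServiceReference",
    },
    "typeProperties": {
        "mlPipelineId": "<published-aml-pipeline-id>",  # placeholder
        # Assumption: the published ML pipeline declares a matching parameter.
        "mlPipelineParameters": {"input_path": "curated/latest"},
    },
}
```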
The Mapping Data Flows feature is now generally available in Azure Data Factory.
Azure Data Factory now supports Azure Database for PostgreSQL as a sink. Use the copy activity feature to load data into Azure Database for PostgreSQL from any supported data source.
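A minimal sketch of such a copy activity; the dataset names are placeholders, and the sink type name follows the connector's documented naming:

```python
# Sketch of a copy activity that lands data in Azure Database for PostgreSQL.
copy_to_postgres = {
    "name": "LoadIntoPostgres",
    "type": "Copy",
    "inputs": [{"referenceName": "AnySupportedSourceDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "AzurePostgreSqlTableDataset", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "DelimitedTextSource"},
        "sink": {"type": "AzurePostgreSqlSink", "writeBatchSize": 10000},
    },
}
```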
Load data faster with new support from the Copy Activity feature in Azure Data Factory. Now, if you're copying data from any supported source into a SQL database or data warehouse and the destination table doesn't exist, Copy Activity will create it automatically.
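On the sink side this is a one-property switch; a sketch, with the property name taken from the documented table option:

```python
# Sketch of an Azure SQL sink that auto-creates the destination table when
# it does not already exist (the schema is inferred from the source data).
sql_sink = {
    "type": "AzureSqlSink",
    "tableOption": "autoCreate",
}
```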
Azure Data Factory now supports copying data into Azure Database for MySQL. Use the Copy Activity feature to load data into Azure Database for MySQL from any supported data source.
Azure Data Factory now provides built-in data partitioning to copy data from Netezza.
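A sketch of a partitioned Netezza source; "DataSlice" splits the read along Netezza's native data slices, and a dynamic range over an integer column is the other documented option:

```python
# Sketch of a copy activity source that reads from Netezza in parallel.
netezza_source = {
    "type": "NetezzaSource",
    "partitionOption": "DataSlice",
}
```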
Azure Data Factory has added the ability to execute custom SQL scripts from your SQL sink transformation in mapping data flows. Now you can easily perform operations such as disabling indexes, allowing identity inserts, and other DDL/DML operations from data flows.
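An illustrative sketch only: the property names below are assumptions standing in for the sink's pre/post script slots in the ADF UI, and the T-SQL statements are examples of the kind of DDL/DML you might run:

```python
# Hypothetical shape of a data flow SQL sink's pre- and post-processing
# scripts (property names are assumptions, not the exact data flow schema).
sql_sink_scripts = {
    "preSQLs": [
        "ALTER INDEX IX_Sales_Date ON dbo.Sales DISABLE",
        "SET IDENTITY_INSERT dbo.Sales ON",
    ],
    "postSQLs": [
        "SET IDENTITY_INSERT dbo.Sales OFF",
        "ALTER INDEX IX_Sales_Date ON dbo.Sales REBUILD",
    ],
}
```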
Gantt views are now available for monitoring data factory pipelines.
Create dependent pipelines in your Azure Data Factories by adding dependencies among tumbling window triggers in your pipelines.
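A sketch of a daily trigger that waits on an hourly trigger's windows; names, offset, and size are placeholders, and the pipeline reference is omitted for brevity:

```python
# Sketch of a tumbling window trigger that depends on another trigger.
downstream_trigger = {
    "name": "DailyAggregateTrigger",
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 24,
            "startTime": "2019-01-01T00:00:00Z",
            "dependsOn": [
                {
                    "type": "TumblingWindowTriggerDependencyReference",
                    "referenceTrigger": {
                        "referenceName": "HourlyIngestTrigger",  # placeholder
                        "type": "TriggerReference",
                    },
                    # Cover the full day of upstream hourly windows.
                    "offset": "-24:00:00",
                    "size": "24:00:00",
                },
            ],
        },
    },
}
```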
Azure Data Factory Mapping Data Flows provides a code-free design environment for building and operationalizing ETL data transformations at scale. Now, the ADF team has added parameter support for Data Flows, enabling flexible & reusable data flows that can be called dynamically from pipelines.
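A sketch of an Execute Data Flow activity passing a pipeline value into a data flow parameter; "windowStart" is assumed to be declared on the data flow, and the exact parameter-passing shape here is an assumption:

```python
# Sketch of calling a parameterized data flow from a pipeline.
run_data_flow = {
    "name": "RunTransform",
    "type": "ExecuteDataFlow",
    "typeProperties": {
        "dataFlow": {
            "referenceName": "TransformSales",  # placeholder
            "type": "DataFlowReference",
            "parameters": {
                "windowStart": {
                    "value": "@pipeline().parameters.windowStart",
                    "type": "Expression",
                }
            },
        },
    },
}
```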
Azure Data Factory upgraded the Teradata connector with new features and enhancements, including a built-in Teradata driver, out-of-box data partitioning to ingest data from Teradata in parallel, and more.
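The same partitionOption pattern as the Netezza source applies; a sketch with a placeholder partition column:

```python
# Sketch of a Teradata copy source using built-in hash partitioning.
teradata_source = {
    "type": "TeradataSource",
    "partitionOption": "Hash",
    "partitionSettings": {"partitionColumnName": "order_id"},  # placeholder
}
```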
A new logging mode in Diagnostic Settings for an Azure Logs target, starting with Azure Data Factory, will allow you to take advantage of improved ingestion latency, query performance, data discoverability, and more!
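A sketch of the diagnostic-setting payload that opts into the new mode; the workspace resource ID is a placeholder, and "Dedicated" is the documented value that routes records to resource-specific tables instead of the shared AzureDiagnostics table:

```python
# Sketch of a diagnostic setting sending Data Factory logs to Log Analytics
# using resource-specific tables.
diagnostic_setting = {
    "properties": {
        "workspaceId": (
            "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
            "Microsoft.OperationalInsights/workspaces/<workspace>"  # placeholder
        ),
        "logAnalyticsDestinationType": "Dedicated",
        "logs": [
            {"category": "PipelineRuns", "enabled": True},
            {"category": "TriggerRuns", "enabled": True},
            {"category": "ActivityRuns", "enabled": True},
        ],
    },
}
```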