Compose and manage data services at scale
- Create, schedule and manage data pipelines
- Visualise data lineage
- Connect to on-premises and cloud data sources
- Monitor data pipeline health
- Automate cloud resource management
Ingest and prepare
Use Azure Data Factory, a globally deployed data movement service in the cloud, to ingest data from multiple on-premises and cloud sources. Connect to on-premises sources with a data management gateway, and use Data Factory to move your data where it needs to go. Prepare and partition your data as you ingest it, or apply pre-processing steps.
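One common way to partition data as it is ingested is to write each time slice into a date-based folder hierarchy in blob storage. The sketch below illustrates that layout in plain Python; the function name and path format are illustrative assumptions, not Data Factory API.

```python
from datetime import datetime, timezone

def partition_path(base: str, slice_start: datetime) -> str:
    """Build a date-partitioned folder path for an ingested time slice,
    mirroring the year/month/day/hour layout often used when copying
    data into blob storage. Illustrative only, not a Data Factory API."""
    return "{}/{:04d}/{:02d}/{:02d}/{:02d}".format(
        base, slice_start.year, slice_start.month,
        slice_start.day, slice_start.hour)

# A slice that started at 14:00 UTC on 7 March 2016 lands in its own folder:
print(partition_path("raw/sales", datetime(2016, 3, 7, 14, tzinfo=timezone.utc)))
# prints raw/sales/2016/03/07/14
```

Partitioning by ingestion time this way keeps each slice independently reprocessable by downstream pipeline steps.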
Transform and analyse
Schedule and manage your data transformation and analysis process. Choose from a wide range of processing services and put them into managed data pipelines to use the best tool for the job. For example, add a Hadoop processing step for big or semi-structured data, a stored procedure invocation step for structured data, a machine-learning step for analytics, or insert your own custom code as a processing step in any pipeline.
Publish and consume
Use data pipelines to transform raw data into finished or shaped data that’s ready for consumption by BI tools or applications. Use Data Factory to get your valuable data where it needs to go for consumption by your on-premises or cloud applications and services.
Monitor and manage
Monitor and manage your network of data pipelines at a glance to identify issues and take action. Easily understand when data arrives, where it comes from and how, and when it’s ready for processing. Set up alerts to monitor your overall Data Factory service health. Data Factory saves you time and money by automating your data pipelines with on-demand cloud resource management.
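The kind of alerting described above can be sketched as a simple check over recent pipeline-run records. The record shape, function name, and SLA threshold below are hypothetical assumptions, not the Data Factory monitoring API:

```python
from datetime import datetime, timedelta, timezone

def overdue_or_failed(runs, now, sla=timedelta(hours=2)):
    """Hypothetical monitoring sketch: flag pipeline runs that have
    failed, or that have been running longer than the SLA allows."""
    alerts = []
    for run in runs:
        if run["status"] == "Failed":
            alerts.append(f"{run['pipeline']}: run failed")
        elif run["status"] == "InProgress" and now - run["start"] > sla:
            alerts.append(f"{run['pipeline']}: running past {sla} SLA")
    return alerts

now = datetime(2016, 1, 1, 12, tzinfo=timezone.utc)
runs = [
    {"pipeline": "IngestSales", "status": "Failed", "start": now},
    {"pipeline": "ScoreChurn", "status": "InProgress",
     "start": now - timedelta(hours=3)},
    {"pipeline": "LoadWarehouse", "status": "Succeeded", "start": now},
]
for alert in overdue_or_failed(runs, now):
    print(alert)
```

A real deployment would feed this from the service's monitoring data and route alerts to an operations channel rather than printing them.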