How to deploy Azure Data Factory, Data Pipelines & its entities using PowerShell

Rahul Vangari
Egen Engineering & Beyond
6 min read · Jan 27, 2020

In this blog, we will walk through an automation script for deploying Azure Data Factory (ADF), data pipelines & their entities (i.e. linked services, data sets) to copy data from an Azure virtual machine to an Azure Storage account.

Create Data Pipelines on Azure Data Factory using PowerShell

In this article, we will perform the following steps:

  1. Create Azure Data Factory
  2. Create Azure VM Linked service
  3. Create Azure Storage Linked service
  4. Create source File Share data set
  5. Create target Storage data set
  6. Create a pipeline with a copy activity to move the data from the file share to the storage account.

Prerequisites:

Azure PowerShell:

Azure PowerShell is a set of cmdlets (lightweight commands used in the Windows PowerShell environment) for managing Azure resources directly from the PowerShell command line. It simplifies and automates creating, testing, and deploying Azure cloud platform services using PowerShell.
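If the module is not yet installed, a minimal sketch of the install (the standard Az module from the PowerShell Gallery) looks like this:

# Install the Az module for the current user from the PowerShell Gallery
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force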

After installing the latest Azure PowerShell module, use the following command to sign in to your Azure account. It will prompt you to enter your username & password.

Connect-AzAccount

Create Azure Data Factory:

Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation.

The Data Factory service allows us to create pipelines that help move and transform data; a pipeline can have one or more activities that perform the move/transform work.

Let’s start creating the data factory.

  1. To create an Azure resource group, run the following commands
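A minimal sketch (the resource group name ‘DemoADF-RG’ and the ‘East US’ location are illustrative placeholders, not values from the original post; the variables are reused in the later steps):

# Names used throughout this walkthrough (illustrative placeholders)
$resourceGroupName = "DemoADF-RG"
$location = "East US"

# Create the resource group that will hold the data factory
New-AzResourceGroup -Name $resourceGroupName -Location $location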

2. To create the Azure Data Factory, run the Set-AzDataFactoryV2 cmdlet (the V2 cmdlet, matching the other V2 cmdlets used in the rest of this walkthrough)
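A minimal sketch, assuming the variables from the previous step and the factory name ‘DemoADF’ that is used later in the post (Set-AzDataFactoryV2 creates the factory if it does not already exist):

# Create (or update) the V2 data factory in the resource group
$dataFactoryName = "DemoADF"
Set-AzDataFactoryV2 -ResourceGroupName $resourceGroupName `
    -Name $dataFactoryName `
    -Location $location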

Finally, we can see the Data Factory created in the Azure portal.

In the next steps, we will create the data pipeline & its dependencies for copying data from the Azure virtual machine to Azure Table storage.

ADF pipeline

Before creating a pipeline, we need to create the data factory entities. To copy file data from the Azure VM directory file path to an Azure Storage table, we have to create two linked services: Azure Storage and Azure VM File system. Then we create two data sets: an Azure Storage output data set (which refers to the Azure Storage linked service) and an Azure VM file share source data set (which refers to the Azure VM directory path linked service). The Azure Storage and Azure VM File system linked services contain the connection strings that Data Factory uses at run time to connect to your Azure Storage account and Azure VM, respectively. In addition, to run copy activities between an on-premises (or VM-hosted) data store and a cloud data store, we have to install an integration runtime. The self-hosted integration runtime needs to be installed on an on-premises machine or an Azure virtual machine; I have installed it on the virtual machine. Please follow this Microsoft documentation to install the IR via the Azure Data Factory UI.

Linked Services:

Linked services are much like connection strings: they define the connection information Data Factory needs to connect to external resources. ADF manages activities across a variety of data sources such as Azure Storage, Azure SQL, Azure Data Lake, SSIS, on-premises systems, and more. To communicate with these sources, ADF requires a connection definition known as a linked service.

Create Azure VM Linked Service:

The Azure VM linked service links the VM directory file path to our data factory. It is used as the input (source) store. This linked service holds the Azure VM connection information that the Data Factory service uses at run time to connect to the VM.

  1. Define a JSON named ‘FileCopy_VM_LinkedService’ with the connection information properties
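A minimal sketch of what this definition might look like, written to a local JSON file with a PowerShell here-string; the host path, user, password, and the self-hosted integration runtime name are placeholders you must replace with your own values:

# Write the file-server linked service definition to FileCopy_VM_LinkedService.json
# (host, userId, password and the integration runtime name are placeholders)
@"
{
    "name": "FileCopy_VM_LinkedService",
    "properties": {
        "type": "FileServer",
        "typeProperties": {
            "host": "<folder-path-on-the-vm>",
            "userId": "<vm-user>",
            "password": {
                "type": "SecureString",
                "value": "<vm-password>"
            }
        },
        "connectVia": {
            "referenceName": "<self-hosted-integration-runtime-name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
"@ | Set-Content -Path ".\FileCopy_VM_LinkedService.json"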

2. Run the following Set-AzDataFactoryV2LinkedService cmdlet to create the linked service for the Azure VM
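A sketch, assuming the variables and the JSON file from the earlier steps:

# Register the file-server linked service with the data factory
Set-AzDataFactoryV2LinkedService -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -Name "FileCopy_VM_LinkedService" `
    -DefinitionFile ".\FileCopy_VM_LinkedService.json"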

Create Azure Storage linked service:

The Azure Storage linked service links the storage account to our data factory ‘DemoADF’. It is used as the output (sink) store. The linked service holds the storage account connection key information that the Data Factory service uses at run time to connect to it.

  1. Define a JSON named ‘StorageLinkedService’ with the connection string properties
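A minimal sketch of the definition, written to StorageLinkedService.json; the account name and key placeholders are the values the note below refers to:

# Write the storage linked service definition to StorageLinkedService.json
# (<accountName> and <accountKey> are placeholders for your storage account)
@"
{
    "name": "StorageLinkedService",
    "properties": {
        "type": "AzureTableStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>;EndpointSuffix=core.windows.net"
        }
    }
}
"@ | Set-Content -Path ".\StorageLinkedService.json"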

Replace the account name & account key with those of your Azure Storage account.

2. Run the Set-AzDataFactoryV2LinkedService cmdlet to create the linked service for the Azure Storage account
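A sketch, assuming the variables and the JSON file from the earlier steps:

# Register the storage linked service with the data factory
Set-AzDataFactoryV2LinkedService -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -Name "StorageLinkedService" `
    -DefinitionFile ".\StorageLinkedService.json"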

We can now see the two linked services created on the data factory.

Data set:

A data set is a named view of data that points to (references) the data we want to use in our activities as input and output. Data sets identify data within different data stores, such as tables, files, folders, and documents. In other words, a data set is not the actual data; it is just a set of JSON instructions that defines where the data lives and how it is stored.

In our scenario, we need to create two data sets: one pointing to the Azure Storage linked service (output) and another pointing to the Azure VM linked service (input).

Create Azure VM FileShare dataset:

  1. Define a source data set JSON named ‘Source_VM_Dataset’. This data set specifies the directory and .txt file path on the Azure VM from which the source data is pulled.
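A minimal sketch of the source data set definition, written to Source_VM_Dataset.json; the folder path, file name, and delimiter are placeholders for the .txt file on the VM:

# Write the source data set definition to Source_VM_Dataset.json
# (folderPath and fileName are placeholders for the .txt file on the VM)
@"
{
    "name": "Source_VM_Dataset",
    "properties": {
        "type": "FileShare",
        "linkedServiceName": {
            "referenceName": "FileCopy_VM_LinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "folderPath": "<folder-on-the-vm>",
            "fileName": "<input-file>.txt",
            "format": {
                "type": "TextFormat",
                "columnDelimiter": ","
            }
        }
    }
}
"@ | Set-Content -Path ".\Source_VM_Dataset.json"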

2. Run the following Set-AzDataFactoryV2Dataset cmdlet to create the source data set
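A sketch, assuming the variables and the JSON file from the earlier steps:

# Register the source data set with the data factory
Set-AzDataFactoryV2Dataset -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -Name "Source_VM_Dataset" `
    -DefinitionFile ".\Source_VM_Dataset.json"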

Create Azure Storage Table data set:

  1. Define an output data set JSON named ‘Output_Storage_DataSet’. This data set specifies the Azure Table storage table on the storage account to which the data is to be copied.
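A minimal sketch of the output data set definition, written to Output_Storage_DataSet.json; the table name is a placeholder for the target Azure Table:

# Write the output data set definition to Output_Storage_DataSet.json
# (<target-table> is a placeholder for the Azure Table the data is copied into)
@"
{
    "name": "Output_Storage_DataSet",
    "properties": {
        "type": "AzureTable",
        "linkedServiceName": {
            "referenceName": "StorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "tableName": "<target-table>"
        }
    }
}
"@ | Set-Content -Path ".\Output_Storage_DataSet.json"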

2. Run the following Set-AzDataFactoryV2Dataset cmdlet to create the output data set
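A sketch, assuming the variables and the JSON file from the earlier steps:

# Register the output data set with the data factory
Set-AzDataFactoryV2Dataset -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -Name "Output_Storage_DataSet" `
    -DefinitionFile ".\Output_Storage_DataSet.json"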

We can now see the source & target data sets created on the data factory.

Pipeline:

A pipeline is a logical grouping of activities that together perform a task. The pipeline allows you to manage the activities as a set instead of each one individually; it is the pipeline that operates on the data to transform it and run analysis on top of it.

  1. Define a JSON named ‘Demo_Copy_pipeline’ that consists of a copy activity using the source & target data sets to copy the data from the VM to the storage account.
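A minimal sketch of the pipeline definition, written to Demo_Copy_pipeline.json, with a single copy activity whose source and sink types match the data sets sketched above (the activity name ‘CopyFileToTable’ is illustrative):

# Write the pipeline definition to Demo_Copy_pipeline.json
# (one Copy activity from the file-share source to the table sink)
@"
{
    "name": "Demo_Copy_pipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyFileToTable",
                "type": "Copy",
                "inputs": [
                    { "referenceName": "Source_VM_Dataset", "type": "DatasetReference" }
                ],
                "outputs": [
                    { "referenceName": "Output_Storage_DataSet", "type": "DatasetReference" }
                ],
                "typeProperties": {
                    "source": { "type": "FileSystemSource" },
                    "sink": { "type": "AzureTableSink" }
                }
            }
        ]
    }
}
"@ | Set-Content -Path ".\Demo_Copy_pipeline.json"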

2. Run the following Set-AzDataFactoryV2Pipeline cmdlet to create the pipeline with the copy activity
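A sketch, assuming the variables and the JSON file from the earlier steps:

# Register the pipeline with the data factory
Set-AzDataFactoryV2Pipeline -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -Name "Demo_Copy_pipeline" `
    -DefinitionFile ".\Demo_Copy_pipeline.json"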

We can now see the pipeline created on the data factory.

To invoke/run the pipeline we could add schedule triggers or event triggers; to demonstrate it here, we will invoke the pipeline using PowerShell cmdlets.
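A sketch, assuming the variables from the earlier steps; Invoke-AzDataFactoryV2Pipeline returns the run ID for the new run:

# Trigger the pipeline on demand and capture the run ID
$runId = Invoke-AzDataFactoryV2Pipeline -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -PipelineName "Demo_Copy_pipeline"
$runId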

Invoking the pipeline generates a run ID for each specific run. We can see the successfully executed pipeline run from the ADF Monitor tab and the final target output in the Azure Storage account.
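The same run can also be checked from PowerShell (a sketch, using the run ID captured above):

# Query the status of the pipeline run by its run ID
Get-AzDataFactoryV2PipelineRun -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -PipelineRunId $runId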

Monitor tab

We can also see the data in the final destination target table in the Azure Table storage account:

Azure Table Storage

Summary:

In this blog, we have seen Azure Data Factory concepts and how to create data pipelines, linked services, and data sets and invoke them using PowerShell. If you want to read more about Azure PowerShell & Data Factory, refer to the documentation below:

https://docs.microsoft.com/en-us/powershell/azure/?view=azps-3.3.0

https://docs.microsoft.com/en-us/azure/data-factory/introduction

If you have any feedback or opinions, please leave a comment or add me on LinkedIn.
