Transfer data from Jira to SharePoint using Matillion ETL Tool

Keerthana
Published in BI3 Technologies
Apr 21, 2022

This blog will show you how to transfer data from Jira to Snowflake, unload it to an S3 bucket, and then transfer it to SharePoint using the Matillion ETL tool.

Job to transfer data from Jira to SharePoint.

The steps to achieve this process are:

Step 1: Create a new orchestration job

In Matillion, create a new orchestration job by right-clicking the “default” option.

Step 2: Get data from Jira and load it into a Snowflake table (using the “Jira Query” component):

Jira Query component: This component uses the Jira API to retrieve data from Jira and load it into a table. The data is staged, so the table is reloaded each time the job runs.

1. Select the “Jira Query” component in the Components tab, then drag and drop it onto the canvas, attaching it to the “Start” component.

2. In the Jira Query component’s Properties tab, enter the respective values.

3. Choose the mode as Advanced.

Basic: This mode provides only table-level access. It lets you use the settings from the Data Source, Data Selection, and Data Source Filter parameters.

Advanced: This mode allows you to write a SQL query, so you can get data from all schemas and tables.

4. For the Authentication Method, choose User/API Token.

5. Enter your Username, API Token, and URL.

6. In SQL Query, enter the SELECT query for your Jira table.

Example: SELECT * FROM <Table_Name>

7. Enter the Warehouse, Database, and Schema values for the target table in which the Jira data will be stored.

8. Enter the target table name; the component will create a table with that name in the given database.

9. Select Stage as Custom and Stage Platform as Existing Amazon S3 Location.

10. Enter the stage credentials.

Sample Jira Query Entry:

Step 3: Unload the data from the Snowflake table into an S3 bucket (using the “S3 Unload” component):

S3 Unload component: This component creates files in a specified S3 bucket and loads them with data from a table or view.

1. Select the S3 Unload component in the Components tab, then drag and drop it onto the canvas, attaching it to the Jira Query component.

2. In the S3 Unload component’s Properties tab, enter the respective values.

3. Enter Stage as Custom.

4. In S3 Object Prefix, enter the path to the S3 bucket location.

Example: s3://bucket_name/folder/subfolder/

5. The table data will be stored in the location given in the S3 Object Prefix.

6. In File Prefix, enter the prefix of the file name you want stored in the S3 bucket.

7. Enter the Warehouse, Database, Schema, and target table values to get the data from Snowflake.

8. In File Type, you can select a file format that already exists in Snowflake, or customize the file type by selecting the Custom option.

Sample S3 Unload Entry:
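Once this step has run, you can optionally confirm that the unload succeeded by listing the files under the prefix with boto3. This is a minimal sketch run outside the Matillion job, assuming your AWS credentials are already configured; the bucket and prefix below are placeholders:

import boto3

# List the files the S3 Unload component wrote under the given prefix
# (bucket and prefix are placeholders; replace them with your own values)
client = boto3.client('s3')
response = client.list_objects_v2(Bucket='bucket_name', Prefix='folder/subfolder/')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])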

Step 4: Transfer the file from the S3 bucket to SharePoint (using the “Python Script” component):

Python Script component: This component is used for setting variable values, running quick queries on the target database using database cursors, and connecting to other AWS services.
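For the first two of those uses, Matillion exposes a context object inside the component. A minimal sketch (the variable name and query below are hypothetical, and context is only available inside the Python Script component):

# Update a Matillion job variable from inside the Python Script component
# ('my_variable' is a hypothetical variable defined on the job)
context.updateVariable('my_variable', 'new_value')

# Run a quick query on the target database through a cursor
cursor = context.cursor()
cursor.execute('select current_date')
print(cursor.fetchone())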

1. Select the Python Script component in the Components tab, then drag and drop it onto the canvas, attaching it to the S3 Unload component.
2. Using Python in the Python Script component, we can transfer the file from the S3 bucket to SharePoint.
3. For this, you need to install the boto3 and shareplum packages; the commands for installation are:
pip install boto3
pip install shareplum

The code for transferring a file from the S3 bucket to SharePoint is given below:

from shareplum import Site, Office365
from shareplum.site import Version
import boto3

# Read the unloaded file from the S3 bucket
client = boto3.client('s3')
bucket = 'YOUR BUCKET NAME'
filepath = 'YOUR FILE PATH'
obj = client.get_object(Bucket=bucket, Key=filepath)
body = obj['Body'].read()

# Authenticate to SharePoint and upload the file
authcookie = Office365('https://YOUR SHAREPOINT URL', username='YOUR USER NAME', password='YOUR PASSWORD').GetCookies()
site = Site('https://YOUR SHAREPOINT URL/sites/sharedocs/', version=Version.v2016, authcookie=authcookie)
folder = site.Folder('Shared Documents/YOUR FOLDER NAME')
file_name = 'TABLE1.CSV'
folder.upload_file(body, file_name)

If your file path in the S3 bucket is Bucket_name_s3/folder_name/sub_folder_name/TABLE1.csv:

Then your bucket name should be: Bucket_name_s3

And your file path (the object key) should be: folder_name/sub_folder_name/TABLE1.csv
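If you would rather derive these two values from the full s3:// path instead of typing them separately, a small split does the job. A minimal sketch using the example path above:

# Split an s3:// path into the bucket name and the object key
uri = 's3://Bucket_name_s3/folder_name/sub_folder_name/TABLE1.csv'
bucket, key = uri.replace('s3://', '', 1).split('/', 1)
print(bucket)  # Bucket_name_s3
print(key)     # folder_name/sub_folder_name/TABLE1.csv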

Finally, run the job by right-clicking on the canvas and selecting “Run Job”.

We have successfully transferred the data from Jira to SharePoint. You will find the file with the Jira data in your SharePoint folder.

The job created in this blog enables users to load data from Jira to SharePoint using the Jira Query, S3 Unload, and Python Script components.

About us:

Bi3 has been recognized for being one of the fastest-growing companies in Australia. Our team has delivered substantial and complex projects for some of the largest organizations around the globe and we’re quickly building a brand that is well known for superior delivery.

Website : https://bi3technologies.com/

Follow us on,
LinkedIn : https://www.linkedin.com/company/bi3technologies
Instagram : https://www.instagram.com/bi3technologies/
Twitter : https://twitter.com/Bi3Technologies
