Integrating JIRA with your AWS Services (instead of building a UI)

Abhisek Roy
Credit Saison (India)
9 min read · May 25, 2022
Fig: JIRA

How many times have you worked on a backend project only to realise that some functionality required human interaction? This could be an action like uploading a file, selecting specific values from a drop-down menu, or entering data points (with certain constraints). You might be working on the project for in-house usage, to make the lives of your operations team easier, or to collect data from your partners (business partners, data partners, or merchants).

Building a full-fledged UI can push your product or feature's go-live out by a long way. Creating a UI is usually a long process, especially if you are doing it from scratch: it involves hosting services in the cloud, creating a website, buying a domain name, and a lot more. A full UI may not even be required as long as the end user is not a retail customer, and you may not need one in some B2B use cases either.

So what do you do in this case? Well, here’s where Jira comes in and solves the problem for you. I will not go in depth into Jira’s use cases, but if you are already using Jira to create and track tickets and are not aware of its API or cloud capabilities, then this is the article for you. We recently used Jira as a “user input tool” in some of our projects and found it really helpful in situations where a separate UI did not need to be built. The users in our case were internal teams, but you could have a different definition of “user” as well.

Since we use AWS as our cloud service provider, the examples and integrations shown in this article are specific to AWS services, but you could set up the same thing on any cloud platform that gives you compute, since all we are doing is calling APIs.

Now, there are two ways in which you can integrate Jira tickets with your cloud systems:

  1. You can hit Jira’s own URL and, through a query written in JQL (Jira Query Language), find the tickets that you want. You can add filters like tickets of a certain project and type, those that are in a particular stage, and so on. The filtering options are endless. On AWS, this is usually done by creating an AWS Lambda and having it run at regular intervals (using a cron schedule). Every time the Lambda executes, it fetches the tickets satisfying the Jira query that is set inside it and performs the required operations.
  2. In case you do not want to check tickets periodically and need a real-time system, you can also have Jira hit your API endpoint on certain events. For our use case, we set things up such that whenever a ticket of a certain kind is created, an API is hit. You could also set a similar trigger when a certain type of ticket goes through a specific stage change (for example, a file-upload ticket moving from the Open to the Uploaded state).

Let’s get into the code

The first way in which you can interact with Jira tickets is by querying Jira. In the example below, we fetch all the tickets:

  1. Belonging to the project “FILE_UPLOAD_PROJECT”
  2. Having at least one attachment and
  3. Which are in “Uploaded” status.

You will find the query defined right after the import statements. Queries in JQL are simple enough, and just a five-minute read should enable you to write even the most complex queries and get the tickets that you need.
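To give a feel for the syntax, here are a few illustrative JQL filters (a rough sketch; the project, issue type and assignee values below are placeholders rather than anything from our setup):

# Tickets of a project whose status changed in the last day
'project = FILE_UPLOAD_PROJECT AND status changed AFTER -1d'

# Open tickets of a given issue type, newest first
'project = FILE_UPLOAD_PROJECT AND issuetype = Task AND status = Open ORDER BY created DESC'

# Tickets assigned to a particular user that are not yet done
'assignee = "ops-user@domain.com" AND statusCategory != Done'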

Once you have the Jira tickets, get the name of the S3 bucket into which you will push the attachments, as well as the password for your Jira account (both of which are saved in environment variables). After this, you need to initialize the Jira client. The client comes from the jira library for Python and can be used for almost every way in which you might want to interact with Jira. Once this is done, we create an S3 client using boto3, and that’s all that is required in the initialization phase.

Next, we come to the lambda execution. This lambda will be triggered by a cron job, and hence the event variable is unused. You need not pass any JSON data through the event when triggering the lambda via a cron. We use the following setting in our CloudFormation templates:

Events:
  CheckJiraTickets:
    Type: Schedule
    Properties:
      Schedule: rate(30 minutes)
      Name: CheckJiraTickets
      Description: This event would trigger the lambda every 30 min
      Enabled: True

This makes the lambda run every 30 minutes with an empty event. As soon as the lambda is hit, it first fetches all the tickets for the given search query with a maximum limit of 1000. You may or may not use the max limit parameter. There are also other optional parameters for the search_issues() function which you can check in the documentation.
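As a rough illustration of those optional parameters (a sketch only; see the jira library’s documentation for the full list), you can restrict which fields are returned or let the library paginate through every matching issue for you:

# Fetch only the fields we care about, 100 issues per page
issues = jira_client.search_issues(SEARCH_QUERY, maxResults=100, fields="attachment,status")

# Or let the library paginate through all matching issues
all_issues = jira_client.search_issues(SEARCH_QUERY, maxResults=False)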

Once you get all the issues, fetch all the fields defined in Jira and create a dictionary mapping field names to ids. This is done because you cannot fetch data for custom parameters using their names (such as “File Type” or “Upload Date”). Instead, you will need the unique id assigned by Jira to your custom parameter. This dictionary will help in the next steps when you fetch the values assigned to the custom parameters.

Now that you have all the data that is needed from Jira, loop through the Jira tickets that were fetched. For each ticket, we will get the ticket id (which would look something like TICK-123) and fetch the values set for the custom parameters in the ticket. For us, these are the File Type and Upload Date fields.

import os
import time

import boto3
import jira

# Search query written in JQL
SEARCH_QUERY = 'project = FILE_UPLOAD_PROJECT AND attachments is not EMPTY AND status = Uploaded'

# Fetch params from environment variables
BUCKET_NAME = os.getenv("BUCKET_NAME")
JIRA_PASSWORD = os.getenv("JIRA_PASSWORD")

# Initialize the Jira client
jira_client = jira.JIRA(
    'https://company-name.atlassian.net',
    basic_auth=('company-email-id@domain.com', JIRA_PASSWORD)
)

# Initialize the S3 client
s3_client = boto3.client('s3')


def lambda_handler(event, context):
    # Get all tickets of project FILE_UPLOAD_PROJECT with attachments and where status is Uploaded
    issues_in_proj = jira_client.search_issues(SEARCH_QUERY, maxResults=1000)
    all_fields = jira_client.fields()

    # Get a map of all the fields in JIRA and their corresponding ids
    field_name_map = {field['name']: field['id'] for field in all_fields}

    for issue in issues_in_proj:
        # Get the Jira Issue object with the given issue id
        jira_issue = jira_client.issue(str(issue))

        # Get values of the custom parameters from the JIRA ticket
        file_type = getattr(jira_issue.fields, field_name_map["File Type"])
        file_upload_date = getattr(jira_issue.fields, field_name_map["Upload Date"])

        # Upload each attachment to the S3 bucket
        for attachment in jira_issue.fields.attachment:
            # Create the file key at which the uploaded file will be put in the bucket
            timestamp_in_millisec = round(time.time() * 1000)
            file_name = str(file_type) + "_" + file_upload_date + "_" + str(timestamp_in_millisec) + ".csv"
            s3_client.put_object(Body=attachment.get(), Bucket=BUCKET_NAME, Key=file_name)

        # Change the status of the ticket so that it is not picked up again
        transitions = jira_client.transitions(jira_issue)
        transitions_map = {transition['name']: transition['id'] for transition in transitions}
        jira_client.transition_issue(issue, transitions_map['Processing Complete'])
    return True

Once you have the values of the custom fields, loop over the attachments for each ticket and upload them to the bucket. Generate the key for each file using the custom parameters that were fetched earlier as well as the timestamp of that particular moment. Make sure no two files get the same file key, or they will end up overwriting one another in the S3 bucket.
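If there is any chance of a collision (say, two attachments with the same file type and upload date landing in the same millisecond), a random suffix is a simple safeguard. A minimal sketch, using Python’s uuid module:

import uuid

# Append a random suffix so that no two attachments can share a key
file_name = f"{file_type}_{file_upload_date}_{uuid.uuid4().hex}.csv"
s3_client.put_object(Body=attachment.get(), Bucket=BUCKET_NAME, Key=file_name)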

Once the work is done, we also need to change the status of the ticket to ensure that it is not picked up again the next time the cron runs. To do this, we get the transitions available for the Jira ticket and create a dictionary mapping transition names to ids. This is similar to what we did for the parameters, since everything in Jira is accessed by id and not by name. Once we have the map, we change the status of the ticket to “Processing Complete”.

Fig: Example of a custom JIRA field of type date picker

While setting this ticket type up in JIRA, we made sure that the upload date parameter only allows the user to pick a date, and that a ticket can’t be created without uploading at least one file. You can add such constraints to make the tickets easier to use and to keep people from making mistakes. You can also have parameters with drop-down options.

Next, we will show the only difference in the lambda code when you want Jira to hit your API directly. For this, you will need to create an API endpoint using AWS API Gateway.

Fig: Jira integration using API-Gateway and Lambda

You need to create an x-api-key for auth and add a usage plan for the API as well. Once this is done, you need to configure JIRA to hit the API (passing the x-api-key) on certain events. These can be the creation of a certain type of Jira ticket or a specific state transition of a particular type of Jira ticket.
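If you are defining the function in a SAM/CloudFormation template like the one shown earlier, the API event could look roughly like the sketch below (the path, method and event name are placeholders; the usage plan and the key itself are attached to the generated API Gateway stage separately):

Events:
  JiraWebhook:
    Type: Api
    Properties:
      Path: /jira-webhook
      Method: post
      Auth:
        ApiKeyRequired: true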

issue = event.get("issue").get("key")
jira_issue = jira_client.issue(str(issue))

As shown in the code block above, once your lambda is hit, you can fetch the issue id and then get the jira_issue in the above manner. Then you can repeat whatever was shown in the last example. The only difference is that in this case your lambda is triggered for a single Jira ticket at a time, so you do not have to loop over multiple tickets.
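Putting it together, a minimal webhook handler could look something like the sketch below. It assumes the same module-level jira_client, s3_client and BUCKET_NAME setup as in the earlier lambda; also note that with a Lambda proxy integration, API Gateway wraps the Jira payload as a JSON string under event["body"], so it needs to be parsed first.

import json
import time


def lambda_handler(event, context):
    # With a proxy integration the Jira payload is a JSON string in event["body"];
    # with a custom integration the event may already be the payload itself.
    payload = json.loads(event["body"]) if "body" in event else event

    # Fetch the single ticket that triggered the automation rule
    issue_key = payload.get("issue").get("key")
    jira_issue = jira_client.issue(str(issue_key))

    # Map field names to ids, then read the custom fields, as before
    field_name_map = {field['name']: field['id'] for field in jira_client.fields()}
    file_type = getattr(jira_issue.fields, field_name_map["File Type"])
    file_upload_date = getattr(jira_issue.fields, field_name_map["Upload Date"])

    # Upload each attachment to the S3 bucket, same as in the polling lambda
    for attachment in jira_issue.fields.attachment:
        timestamp_in_millisec = round(time.time() * 1000)
        file_name = str(file_type) + "_" + file_upload_date + "_" + str(timestamp_in_millisec) + ".csv"
        s3_client.put_object(Body=attachment.get(), Bucket=BUCKET_NAME, Key=file_name)

    return {"statusCode": 200}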

How to set things up in JIRA?

We showed you how the code works, but that is just half the work done. You will also need to set up automation in JIRA in order for certain actions to hit your API. If you are going to query JIRA instead, you don’t need to do that, but you will still need to get the id and password that you will use in your JIRA client.

Fig: Automation rule that will trigger a hit to a given URL whenever a ticket is created.

To create an automation rule as shown in the picture, you will need to go to the Automation page through the left-hand-side options panel. There, you will have to create a new rule. You can use any trigger, such as:

  • Field value changed
  • Issue assigned
  • Issue created
  • Issue deleted
  • Approval completed
  • Issue transitioned
  • Manual trigger
  • and more

Fig: Creating a new rule in Automation

Once you create the trigger for the rule, you can add actions like sending an email, sending a web request, and more in the next stage. Your action can be conditional as well, and you can create branching actions. For example, if a ticket is moved to stage A, an email is sent; if it is moved to stage B, a Slack notification is sent. You can also chain a set of actions one after another. There are a lot of customizations and integrations available for you to test out.

Fig: Automating API call on Ticket Creation

The image above shows how you can configure an automation rule to hit the API endpoint for the lambda that you set up. All you need to do is add the URL and the headers. You can set auth tokens (such as the x-api-key) in the headers as well.
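If you choose the custom data option for the web request body, Jira automation’s smart values let you shape the payload yourself; for example, a body like {"issue": {"key": "{{issue.key}}"}} would produce the minimal payload that the webhook lambda sketched above expects.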

Such JIRA-based solutions can not only be used for setting up internal workflows, but can also come in handy when you are building something within a short timeline. You can always set up an independent front end for the features later on if needed. I have used such solutions for dealing with problems like:

  • File upload
  • Maker-checker pages
  • Approval-based ticketing systems

So go ahead, go wild, and make development cycles shorter by using such handy integrations.
