Boosting Jira Cloud app development with Apache Ignite

Making development of distributed and scalable Jira Cloud app backends easier with Apache Ignite and Atlassian Connect Spring Boot



Written by Peter Gagarinov and Ilya Roublev, July 16, 2020

Suppose you need to implement a Jira Cloud app with a backend written in Java that should be capable of executing both long-running calculation-heavy issue processing jobs and lightweight requests from different Jira users in parallel. And suppose that this backend needs a data persistence layer with the following properties:

  • integrates with Java natively
  • highly available and horizontally scalable
  • fault-tolerant and distributed
  • supports distributed ACID transactions
  • provides data persistence on disk
  • supports SQL for distributed data
  • supports in-memory data storage
  • can be used for caching, preferably via JCache API
  • supports user-defined distributed jobs
  • provides automatic failover both for connections to database and for distributed jobs
  • provides Transparent Data Encryption for safety reasons
  • supports native configurations for deployment in Kubernetes
  • free and open-source
  • offers optional paid technical support

Sure enough, you have come across some of these requirements in your projects, but what if you need to fulfill all of them at the same time? In that case the number of suitable solutions becomes much more limited. However, a full survey of all possible solutions is outside the scope of this paper. Instead, we would like to show how the Apache Ignite distributed database combined with the Spring Boot framework can be used to build services with the outlined properties, including our own service Alliedium AIssistant Cloud app, built as a Jira Cloud app that leverages machine learning (ML) to make Jira issue processing easier. This choice of technologies has proven to be a practically viable solution that runs equally well on AWS EKS (our production environment) and on our local Kubernetes cluster deployed via Kubespray (our development environment). Moreover, we use Apache Ignite both for data persistence and for fast in-memory issue processing via distributed jobs:

One possible layout for a distributed Jira app backend based on Apache Ignite

Besides having all the properties listed above, Apache Ignite:

  • provides a key-value API (being a key-value database internally), while SQL is implemented as an extra layer on top of that
  • supports multiple partitioning and replication modes, including a “distributed replicated” mode that spreads data across the cluster nodes while keeping a few replicas of each data partition on other nodes. This way of making a distributed database fault-tolerant is widely used by other distributed storage technologies (GlusterFS, for instance)
  • supports failover for distributed transactions
  • natively supports classical distributed machine learning (ML) training algorithms and has a deep integration with TensorFlow

Making Atlassian Connect Spring Boot and Apache Ignite best friends

Since we develop our add-on in Java, it is only natural that we use Atlassian Connect Spring Boot. But it turns out that, even though Apache Ignite and Atlassian Connect Spring Boot are both implemented in Java, the integration between them is not as trivial as it may seem at first glance. Let us show how this can be done and, hopefully, save a lot of time for other developers who may want to use the same combination of technologies.

In fact, Atlassian Connect Spring Boot uses Liquibase under the hood to manage all the information on tenants (called Jira clients) using the Connect app via a special table named atlassian_host (its structure is shown below). Each time a Jira Cloud administrator installs or uninstalls the add-on or changes the license, this table is updated via SQL queries executed through JDBC. Apache Ignite supports JDBC, but with some restrictions. Besides, if we delegate the creation of atlassian_host to Atlassian Connect Spring Boot, we lose the possibility to configure this table with Ignite-specific parameters, for example, to set the number of additional replicas for each piece of data (named backups in Apache Ignite) or to turn encryption on. Thus, it is better to leave to Atlassian Connect only the right to manipulate atlassian_host tuples (inserting, updating, or deleting them) while retaining the creation of the table for ourselves. But the latter means that atlassian_host has to be created before the Spring Boot application is launched.

Using simple JDBC

All you need for using JDBC is described in Ignite JDBC Thin Driver manual. To start using the driver, just add ignite-core-{version}.jar to your application's classpath. If you use Apache Maven, then please add the following to pom.xml:
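A minimal dependency fragment for pom.xml might look like this (2.8.1 is the Ignite version mentioned later in this paper; substitute the version you actually use):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>2.8.1</version>
</dependency>
```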


To make it possible to configure some parameters of atlassian_host, you need to modify the table-creation query slightly. The example below shows how to configure the number of additional replicas for each piece of data (named backups in Ignite and given below by the nBackups variable).

Connection conn = DriverManager.getConnection(jdbcUrl);
Statement stmt = conn.createStatement();
stmt.executeUpdate(
    "CREATE TABLE IF NOT EXISTS atlassian_host (" +
    "client_key VARCHAR(255) PRIMARY KEY," +
    "public_key VARCHAR(512)," +
    "shared_secret VARCHAR(255)," +
    "base_url VARCHAR(255)," +
    "product_type VARCHAR(255)," +
    "description VARCHAR(255)," +
    "service_entitlement_number VARCHAR(255)," +
    "addon_installed BOOLEAN," +
    "created_date TIMESTAMP," +
    "last_modified_date TIMESTAMP," +
    "created_by VARCHAR(255)," +
    "last_modified_by VARCHAR(255)," +
    "oauth_client_id VARCHAR(255)) " +
    "WITH \"backups=" + String.valueOf(nBackups) + "\"");
stmt.executeUpdate(
    "CREATE INDEX IF NOT EXISTS atlassian_host_base_url_idx " +
    "ON atlassian_host (base_url)");

Unfortunately, it is not possible to turn encryption on this way (we will work around this in the next section), because only a very limited list of parameters may be set in the WITH clause of SQL queries like the one above (see here for details).

Further, we need to set some application properties used by Spring Boot to ensure the desired behavior. Below we set the necessary parameters in the file named application.yml.

First of all, we need to disable Liquibase at startup while nevertheless keeping a check that atlassian_host was created correctly. Besides, we need to comply with the restrictions of Apache Ignite for SQL. This is done by adding the following lines to application.yml (see here, here, and here for details):

spring.jpa.hibernate.ddl-auto: validate

spring.liquibase.enabled: false
spring.liquibase.default-schema-name: PUBLIC

These settings are for Spring Boot 2.0.0 or later (if you use an older version of Spring Boot, please remove the spring. prefix in the last two lines; see also here for details). The last line is important. If it is missing, Spring Boot makes an additional call to the database to determine the default schema name. However, since Ignite JDBC does not support call statements (see T321-4 here), we need to avoid such a call by telling Spring Boot the schema name explicitly.

At last, we need to configure the access to Apache Ignite via JDBC (see also here for details):

spring.datasource.url: jdbc:ignite:thin://ignitehost:10800/
spring.datasource.driver-class-name: org.apache.ignite.IgniteJdbcThinDriver
spring.jpa.database-platform: org.hibernate.dialect.H2Dialect

Please change ignitehost above to the DNS name of the Ignite service (for the case of Kubernetes, see below). If the Ignite cluster is deployed locally, ignitehost should simply be replaced with localhost.

More configuration options using QueryEntities

The approach above is quite simple, but it leaves no opportunity to perform a more subtle configuration of the Apache Ignite cache serving the atlassian_host table. Under the hood, Apache Ignite uses key-value storage containers called caches along with the H2 database engine, which makes those caches look like plain relational database tables and executes SQL queries over them. All these internal workings are concealed from application developers (see here). As we have already mentioned, encrypting all the data in our Ignite cluster is one of our requirements. Unfortunately, we cannot pass the data encryption parameters via JDBC. And here we run into the following seemingly difficult task: it is necessary not only to configure a table emulation for the cache according to our requirements but also to do it in a way that does not break the ability of Atlassian Connect Spring Boot to work with this table via the Ignite JDBC Thin driver. Luckily, this can be achieved via QueryEntity as follows (please see here and here concerning the additional parameters set on the CacheConfiguration object below):

CacheConfiguration<?, ?> atlassianHostCacheCfg =
    new CacheConfiguration<>("SQL_PUBLIC_ATLASSIAN_HOST");
atlassianHostCacheCfg.setSqlSchema("PUBLIC");
atlassianHostCacheCfg.setBackups(nBackups);
// turning Transparent Data Encryption on, which is not possible via JDBC:
atlassianHostCacheCfg.setEncryptionEnabled(true);
QueryEntity entity = new QueryEntity(
    String.class.getName(), "ATLASSIAN_HOST");
entity.setTableName("ATLASSIAN_HOST");
entity.setKeyFieldName("CLIENT_KEY");
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("CLIENT_KEY", String.class.getName());
fields.put("PUBLIC_KEY", String.class.getName());
fields.put("SHARED_SECRET", String.class.getName());
fields.put("BASE_URL", String.class.getName());
fields.put("PRODUCT_TYPE", String.class.getName());
fields.put("DESCRIPTION", String.class.getName());
fields.put("SERVICE_ENTITLEMENT_NUMBER", String.class.getName());
fields.put("ADDON_INSTALLED", Boolean.class.getName());
fields.put("CREATED_DATE", Timestamp.class.getName());
fields.put("LAST_MODIFIED_DATE", Timestamp.class.getName());
fields.put("CREATED_BY", String.class.getName());
fields.put("LAST_MODIFIED_BY", String.class.getName());
fields.put("OAUTH_CLIENT_ID", String.class.getName());
entity.setFields(fields);
List<QueryIndex> queryIndices = new ArrayList<>();
queryIndices.add(new QueryIndex("BASE_URL"));
entity.setIndexes(queryIndices);
atlassianHostCacheCfg.setQueryEntities(Collections.singletonList(entity));
ignite.getOrCreateCache(atlassianHostCacheCfg);

That is it, all the rest is the same as in the previous section.

Use Ignite JDBC Client Driver to improve performance

Apache Ignite supports two types of clients: thin clients and thick clients. The official Apache Ignite documentation explains the difference as “A thin client is a lightweight Ignite client that connects to the cluster via a standard socket connection. It does not start in JVM process (Java is not required at all), does not become a part of the cluster topology, never holds any data and is not used as a destination for compute grid calculations” (see the Apache Ignite Thin Clients manual for details). On the other hand, thick clients are basically regular Apache Ignite nodes configured in a special way to work in client mode. The article “Apache Ignite: Client Connectors Variety” explains these differences very well. In general, thick client nodes have better performance than thin clients, but they require more resources and are more tightly coupled with the rest of the cluster. Now let us follow the guidelines from JDBC Client Driver and make Atlassian Connect Spring Boot use the thick JDBC client by changing the application.yml file as follows:

spring.datasource.url: jdbc:ignite:cfg://transactionsAllowed=true@file:///path/to/ignite/config/client.xml
spring.datasource.driver-class-name: org.apache.ignite.IgniteJdbcDriver
spring.jpa.database-platform: org.hibernate.dialect.H2Dialect

where /path/to/ignite/config needs to be replaced with the real path to the Ignite client node configuration. The only tricky caveat to explain here is the presence of transactionsAllowed=true in the URL string. The problem is that Apache Ignite supports ACID transactions, but so far only at the key-value API level (this restriction is planned to be dropped in the future). This means that the JDBC driver will throw a “Transactions are not supported” exception unless transactionsAllowed=true is given.

Caveats for using Apache Ignite with Spring Boot

Firstly, if you use Spring Boot 2.x.x (used by Atlassian Connect Spring Boot 2.x.x), it needs Spring Framework 5.x.x, while Apache Ignite Spring depends on Spring Framework 4.x.x (for example, Apache Ignite 2.8.1 uses Spring Framework 4.3.26). So if you want to use the ignite-spring dependency (to load Ignite client configurations from XML files, for instance), it is necessary to pay close attention to the order of dependencies in pom.xml when Apache Maven is used: in such cases ignite-spring should be placed strictly after all the dependencies related to Spring Boot, Spring Framework, and Atlassian Connect Spring Boot.
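For reference, a minimal pom.xml fragment for the ignite-spring dependency might look like this (the version shown is an example and should match your ignite-core version):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>2.8.1</version>
</dependency>
```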

Secondly, Apache Ignite Spring provides several useful features, namely Spring Caching, Spring Data Repository Support, and Spring Boot Extensions. And if you use Spring Boot Caching (or just have the necessary dependencies in pom.xml), listing Apache Ignite Spring as another Maven dependency may lead to potentially unwanted and unexpected behavior: a separate Ignite node may be launched by Spring Boot just to provide caching via Ignite. If you do not want this to happen, you can simply add the following line to the application.yml file (see here for a detailed explanation):

spring.cache.type: simple

Alternatively, you could configure Spring Boot Caching in application.yml explicitly, but that is outside the scope of this paper. In addition to serving as a Spring Cache provider, Apache Ignite can also be used very efficiently outside of Spring Boot for flexible caching functionality (thanks to its in-memory architecture and key-value API).

Specifics of deploying the Cloud app in Kubernetes

An Apache Ignite cluster can be easily deployed and managed by Kubernetes, a popular container orchestration framework designed for automated deployment, scaling, and management of containerized applications. The deployment can be either stateless or stateful; the latter assumes data persistence for the Apache Ignite nodes running within Kubernetes pods. Let us skip the detailed explanation of how Apache Ignite should be deployed and configured in a cloud environment and instead share our experience by outlining the basic steps that are key to a successful deployment.

Firstly, each Ignite cluster should be activated before usage, with its so-called baseline topology fixed. This means that all the server nodes used for data persistence have to be launched before activation. We found it convenient to run a special job in a separate Kubernetes pod that waits for all persistence-enabled Ignite server nodes to be launched and ready, and then activates the cluster once the nodes are online. It also makes sense to create the atlassian_host table right after this activation in the same job. The main reason is that both the activation and the creation of the atlassian_host table should be done only once (when some Ignite nodes are restarted after possible failures, the cluster is reactivated automatically as soon as the nodes from the fixed topology are back online). The second reason is that the backend using Atlassian Connect Spring Boot must have the atlassian_host table already created by the moment it starts.
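As an illustration, the activation job described above could be sketched as a Kubernetes Job along the following lines (the image tag, paths, and service name are assumptions specific to this sketch; control.sh ships with the Apache Ignite distribution, and here the waiting logic is reduced to retrying the activation attempt via the Job's restart policy):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ignite-activate
spec:
  template:
    spec:
      containers:
        - name: activate
          image: apacheignite/ignite:2.8.1   # assumed image tag
          command: ["/bin/sh", "-c"]
          args:
            - >
              /opt/ignite/apache-ignite/bin/control.sh
              --host ignite.default.svc.cluster.local
              --activate --yes
      restartPolicy: OnFailure   # retried until the server nodes are up
```

The atlassian_host table creation step from the previous sections can then be appended to the same shell command once activation succeeds.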

Secondly, since Apache Ignite is deployed as a Kubernetes service, we need to change the connection string used to locate the Apache Ignite cluster in the Atlassian Connect Spring Boot configuration. In the case of the JDBC thin driver we need to use jdbc:ignite:thin://ignite.default.svc.cluster.local:10800/ as the value of the spring.datasource.url parameter (here we assume the application is deployed in the default Kubernetes namespace). Another assumption is that the Ignite service name is ignite, exactly as suggested here.

However, if you decide to use the JDBC Client Driver (and, in fact, there is no reason not to), then the connectivity settings provided by the discoverySpi part of the client.xml configuration file mentioned above need to be changed as follows (please use this client.xml file as a template):

<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <!-- Enables the Kubernetes IP finder and sets a custom namespace name. -->
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
        <property name="namespace" value="default"/>
      </bean>
    </property>
  </bean>
</property>

where default needs to be replaced with the name of the Kubernetes namespace used for the deployment of your Jira Cloud app. Also, we recommend placing such a configuration in a ConfigMap, as it gives you more flexibility in configuring the Apache Ignite cluster in Kubernetes.
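For example, a hypothetical ConfigMap wrapping client.xml could look like this (the ConfigMap name is an assumption); it can then be mounted into the backend pod as a volume so that the file:// URL in spring.datasource.url points at the mounted file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ignite-client-config
  namespace: default
data:
  client.xml: |
    <!-- contents of the client.xml discussed above -->
```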


In this paper we have demonstrated how Apache Ignite and Atlassian Connect Spring Boot can be integrated to serve as a base for a scalable distributed backend. However, we have deliberately left out many interesting challenges of building distributed services, and the ways to overcome them, as they are not directly related to Jira and Jira apps. Thus, we still have a lot to share in subsequent papers.