GitLab CI/CD for a React Native App
A comprehensive guide on how to self-host an Expo server and set up a GitLab CI/CD pipeline to deploy a React Native app.
Recently I managed to set up a GitLab CI/CD deployment pipeline for a React Native app. In this article I would like to share my insights and learnings, along with a working .gitlab-ci.yml configuration that has proven itself in my latest studies project.
Goal
The goal is to deploy a React Native app to your own server in a GitLab CI/CD pipeline. As a result, new code is deployed immediately after it is pushed to GitLab. Users will be able to start the app on iOS or Android by scanning a QR code or opening a hyperlink, provided they have installed the Expo App (more on this in the Conceptual Overview). Whenever they open the app, Expo will fetch the most recently deployed code.
The benefit of this approach is being able to publish app changes very frequently, without going through the validation process required by the App Store / Play Store. This is especially useful for early-stage projects, prototypes, MVPs or closed user group tests, all of which are popular techniques in agile software development.
With the described approach there are effectively no formal obstacles that could delay the publication of your app (and thereby break an automated pipeline); all obstacles are of a technical nature, and overcoming them is the subject of this article.
As pointed out in this response, the method described in this article is self-hosted and works for Android and iOS, as opposed to deploying to the Expo services (see limitations).
Conceptual Overview
Expo is a powerful development toolkit for React Native app development and will play a key role in the realisation of the deployment. Without going deep into the details of how Expo or React Native work (which should be perfectly covered in their documentation), the basic idea of Expo is to serve the compiled JavaScript and assets of your app from a webserver. When you run expo start from your development environment, Expo spawns an Expo Development Server and a React Native Packager Server (see How Expo Works). If you then scan the QR code generated by Expo with your phone, the Expo App will connect to these servers, download the compiled JavaScript/assets and run your app within the Expo App, given your phone is in the same network as the development machine. The idea is to spawn the two Expo servers on a public IP, thus making the app publicly available (i.e. available for all phones with internet access).
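To make this more concrete, here is a minimal local sketch of that idea, assuming the classic expo-cli and an existing Expo project; the project directory name is a placeholder, and the ports are the defaults referred to throughout this article.

# Minimal local sketch (assumes the classic expo-cli and an existing Expo project)
npm install -g expo-cli   # install the Expo CLI
cd my-expo-app            # hypothetical project directory
expo start                # spawns the packager and the Expo servers
# Defaults used in this article: Expo Development Server on port 19000,
# React Native Packager on 19001, Expo Developer Tools on 127.0.0.1:19002.
# A phone on the same network opens exp://<your_lan_ip>:19000 in the Expo App.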
Let's walk through the whole process that will take place:
- You’ll push code to your repository, triggering the GitLab CI/CD Pipeline.
- The build job of the pipeline will pull the code from the repository and build a Docker Image. This image will contain the whole React Native code as well as the expo-cli required to run expo start. As a result, every system with Docker installed can serve your app; switching servers becomes a breeze, and other developers can easily run your app on their own machine.
- Finally, the build job pushes the Docker Image to the Container Registry.
- The deploy job will connect to your server via SSH…
- …where the previously created Image will be pulled from the Registry…
- …and a new container will be created. Inside the container, expo start will be executed, which spawns the React Native Packager, the Expo Development Server and the Expo Developer Tools (a web interface for inspecting server usage/logs). Each of these three services will bind to a port on the host. The Expo Development Server will bind to port 19000, which determines the URL users will open on their smartphone: exp://server_ip:19000 (or, if you have a domain, exp://server_domain:19000). More on Expo Deep Links. A quick way to check this port from your computer is sketched after this list.
- Smartphone users will be able to run the latest app by opening the previously mentioned link (Expo App required).
- This is an optional but useful step to access the Expo Developer Tools from your local machine. Accessing them on a remote server is not trivial and will be discussed in the chapter Expo Developer Tools.
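As referenced in the list above, here is a quick, hedged way to check from your computer that the deployed Expo Development Server is reachable; server_ip is a placeholder, and the curl check assumes the classic Expo setup, where the manifest is served over plain HTTP on port 19000.

# Check that port 19000 is reachable (nc ships with most netcat packages)
nc -zv server_ip 19000
# Optionally fetch what the Expo App would request; with classic Expo this
# should return the app manifest as JSON (assumption, depends on the Expo SDK)
curl -s http://server_ip:19000 | head -c 300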
Prerequisites
What you’ll need:
- A server with public IP and SSH access (servers are available at $5/month nowadays, one less 🍺 a month and you’re in).
- Docker installed on your server.
- A GitLab Repository with an available GitLab Runner (the Runner needs to have network access to your server).
- You will also need a Container Registry to push Docker Images to. GitLab has a built-in Container Registry, but depending on your GitLab instance, the feature might be unavailable. Of course you can also use other Container Registries, such as Docker Hub or your private registry. A few quick sanity checks for these prerequisites are sketched below.
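A few hedged sanity checks for these prerequisites, run from your local machine; the user deploy and the IP 203.0.113.10 are placeholders for your own server credentials.

# SSH access and Docker on the server
ssh deploy@203.0.113.10 "docker --version"
# The server can reach a registry (pulls a tiny public test image)
ssh deploy@203.0.113.10 "docker pull hello-world"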
Realisation
I don't want to waste any more words; let's jump into action.
.gitlab-ci.yml
This is the CI/CD configuration I'm using in my studies project. Each section will be elaborated and explained below. Nevertheless, I won't discuss basic GitLab CI/CD concepts such as stages or docker:dind; I have given a detailed overview of these concepts here.
image: alpine:latest

stages:
  - build
  - deploy

variables:
  REGISTRY_IMAGE_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG/frontend:$CI_COMMIT_SHORT_SHA
  REGISTRY_IMAGE_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG/frontend:latest
  DOCKER_TLS_CERTDIR: ""

build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t $REGISTRY_IMAGE_LATEST -t $REGISTRY_IMAGE_COMMIT .
    - docker push $REGISTRY_IMAGE_LATEST
    - docker push $REGISTRY_IMAGE_COMMIT

deploy:
  image: alpine:latest
  stage: deploy
  script:
    - chmod og= $ID_RSA
    - apk update
    - apk add openssh-client
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $REGISTRY_IMAGE_LATEST"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container stop my_app && docker container rm my_app || true"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 19000:19000 -p 19001:19001 -p 19002:19002 --privileged --network my_network --name my_app $REGISTRY_IMAGE_LATEST"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker exec my_app bash -c 'apt-get update && apt-get install --yes iptables && sysctl -w net.ipv4.conf.all.route_localnet=1 && iptables -t nat -I PREROUTING -p tcp --dport 19002 -j DNAT --to 127.0.0.1:19002'"
  only:
    - develop
variables: Predefined CI/CD variables are used to build the registry URLs and store them in the new variables REGISTRY_IMAGE_COMMIT and REGISTRY_IMAGE_LATEST. They will have the structure registry_url/branch_name:commit_sha and registry_url/branch_name:latest respectively. These variables will be used in the build job when the images are built and pushed. It's just a demonstration of tagging a Docker image with multiple tags: latest will override the last image, while commit_sha will create a unique image for each commit. DOCKER_TLS_CERTDIR is required, see this issue.
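To illustrate, this is roughly how the two variables expand for a hypothetical project gitlab.com/acme/mobile on the branch develop at commit 1a2b3c4d (the project path and commit are made up):

# REGISTRY_IMAGE_COMMIT -> registry.gitlab.com/acme/mobile/develop/frontend:1a2b3c4d
# REGISTRY_IMAGE_LATEST -> registry.gitlab.com/acme/mobile/develop/frontend:latest
echo "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG/frontend:$CI_COMMIT_SHORT_SHA"
echo "$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG/frontend:latest"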
build: In this job we log in to the GitLab registry using the predefined $CI_BUILD_TOKEN variable, which is a token valid for GitLab registry login for the duration of the pipeline execution. If you work with a registry outside GitLab, you can simply store your login credentials as custom GitLab variables and use those variables for authentication (a sketch follows below). We build the Docker image based on the Dockerfile and push it using the two tags stored in our variables (the Dockerfile will be the subject of the next chapter).
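For completeness, a hedged sketch of the same login against Docker Hub instead of the GitLab registry; DOCKERHUB_USER and DOCKERHUB_TOKEN are assumed custom GitLab variables, not predefined ones.

# Log in to Docker Hub with credentials stored as custom CI/CD variables
docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_TOKEN"
docker build -t "$DOCKERHUB_USER/frontend:latest" .
docker push "$DOCKERHUB_USER/frontend:latest"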
deploy: In this job we execute several commands on our server via SSH. Notice that the pipeline is executed on the GitLab Runner, so we need some way to connect to our server and execute the commands necessary for deployment, which in this case is SSH. $ID_RSA is my custom GitLab variable that contains a valid private key for authentication on my server. As you can see from the ssh documentation, the -i option stands for identity_file and requires you to specify the path to the private key file. So I created a variable of type File with the contents of my private key file; GitLab will then create a file with the specified variable content and store its path in a variable called $ID_RSA (a sketch of the key setup follows after the step-by-step explanation below). Leave me a comment if you are completely lost with SSH public key authentication; we'll surely figure out a solution. In my other article I have also covered a way of SSH authentication using a password (which is considered less secure). chmod og= $ID_RSA is a requirement for SSH to work: SSH demands that the permissions of the private key file for others (o) and group (g) are set to no permission, i.e. only the owner shall have any permission. apk update and apk add openssh-client simply install the SSH client. Successive commands follow, each having the structure ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "some command". They all execute some command in a remote shell session on our server. StrictHostKeyChecking=no prevents us from being asked whether we trust the remote server. $SERVER_USER and $SERVER_IP are my custom GitLab variables that contain the server's login username (related to my $ID_RSA private key) and the server's IP address. The following commands will be effectively executed on the remote server:
1. docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
2. docker pull $REGISTRY_IMAGE_LATEST
3. docker container stop my_app && docker container rm my_app || true
4. docker run -d -p 19000:19000 -p 19001:19001 -p 19002:19002 --privileged --network my_network --name my_app $REGISTRY_IMAGE_LATEST
5. docker exec my_app bash -c 'apt-get update && apt-get install --yes iptables && sysctl -w net.ipv4.conf.all.route_localnet=1 && iptables -t nat -I PREROUTING -p tcp --dport 19002 -j DNAT --to 127.0.0.1:19002'
- Log in to the registry, known stuff.
- Pull the latest image to the server (remember, we previously pushed it to the registry in the build job).
- Stop and remove the old container (if it exists).
- Start a new container using the newly pulled image. 19000, 19001 and 19002 are the ports we want to bind from the container to the host (remember, the three servers expo start spawns). --privileged is required to execute step 5. The container is named my_app.
- This step is only required if you want to access the Expo Developer Tools from your local machine. The explanation will be given in the chapter Expo Developer Tools.
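As promised above, a hedged sketch of the SSH key setup behind $ID_RSA, $SERVER_USER and $SERVER_IP; the key file name, the user deploy and the IP address are placeholders.

# Run locally: create a key pair and authorise it on the server
ssh-keygen -t ed25519 -f deploy_key -N ""
ssh-copy-id -i deploy_key.pub deploy@203.0.113.10
# Then paste the contents of deploy_key (the private key) into a GitLab CI/CD
# variable of type "File" named ID_RSA, and set SERVER_USER / SERVER_IP as
# regular variables.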
That was basically the whole deployment magic. The GitLab pipeline now re-spawns a container on your server, serving the latest React Native code.
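If you want to convince yourself after a pipeline run, a quick hedged check via SSH (user and IP are placeholders):

# Is the container running and what does Expo log?
ssh deploy@203.0.113.10 "docker ps --filter name=my_app"
ssh deploy@203.0.113.10 "docker logs --tail 50 my_app"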
Dockerfile
An essential part of deploying the container as described in the chapter .gitlab-ci.yml is the Dockerfile, which is the recipe to build the underlying Docker Image. Here is the Dockerfile of my studies project:
FROM node:latest
WORKDIR /usr/src/app
COPY . .
RUN npm -g config set user root
RUN npm install -g expo-cli
RUN npm install
EXPOSE 19000
EXPOSE 19001
EXPOSE 19002
CMD ["expo", "start", "--no-dev", "--minify", "--offline", "--non-interactive", "--tunnel"]
COPY . . will copy the host's working directory into the container's working directory. The host working directory will by default be your repository's root directory. Since we don't change the working directory in our build job, COPY will eventually copy the repository content into the container. Adjust the path if the React Native code lives in a subdirectory of your repository. The only requirement is that the React Native root directory is copied into the container's working directory, because that will be the place where expo start is executed.
RUN npm -g config set user root resolves access denied errors when installing the expo-cli JavaScript library inside a container, see this Stack Overflow answer.
RUN npm install -g expo-cli installs the expo-cli, which is required to run expo start. Please note that it is quite wasteful to install the expo-cli in every pipeline execution, potentially slowing down the pipeline. A simple solution for this problem is to pre-build an image with the expo-cli installed, push it to the registry and inherit from that image instead of node:latest (a sketch follows below).
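A hedged sketch of that pre-built base image, written as shell commands; the registry path is hypothetical, and the generated Dockerfile simply mirrors the relevant lines of the Dockerfile above.

# Build and push a base image that already contains the expo-cli
cat > Dockerfile.expo-base <<'EOF'
FROM node:latest
RUN npm -g config set user root && npm install -g expo-cli
EOF
docker build -f Dockerfile.expo-base -t registry.gitlab.com/your_group/your_project/expo-base:latest .
docker push registry.gitlab.com/your_group/your_project/expo-base:latest
# In the app's Dockerfile, replace "FROM node:latest" with this image and drop
# the two RUN lines that configure npm and install the expo-cli.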
RUN npm install installs the JavaScript libraries required by your app.
CMD is the command executed after the container has been created. This is the famous expo start we talked so much about 🥳.
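Before wiring this into the pipeline, you can test the image on your own machine; a hedged sketch (the image tag is arbitrary, and the Developer Tools on port 19002 will not be reachable without the iptables trick from the deploy job):

# Build and run the image locally
docker build -t my_app_local .
docker run --rm -p 19000:19000 -p 19001:19001 -p 19002:19002 my_app_local
# The Expo link is then exp://<your_machine_ip>:19000 (Expo App required).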
Expo Developer Tools
The Expo Developer Tools spawn on 127.0.0.1:19002 when running expo start. They are a web interface for viewing the Expo logs (instead of going to the console). The Developer Tools are convenient because you only need to open a URL. The only downside is that the Developer Tools explicitly listen on 127.0.0.1:19002 and not 0.0.0.0:19002. As a consequence, the Developer Tools only work when they are run and accessed from the same machine, and not in a client/server setup like we have with the public server. In order to access the Developer Tools on a remote machine, we need to start an SSH tunnel on the local machine, which tunnels all requests from 127.0.0.1:19002 (on your host) to 127.0.0.1:19002 (on the server). With said tunnel, one can access the Developer Tools on the local machine by opening 127.0.0.1:19002 in the browser.
On Unix-based systems, such a tunnel can be created with the ssh client. An example of starting/stopping such a tunnel:
- Start the tunnel:
ssh -L 19002:localhost:19002 -N -f -l john ip_address
- Find open tunnel processes:
ps aux | grep ssh
- Kill a tunnel (use the process ID from step 2):
kill <id>
The command in the first step starts a tunnel from localhost:19002 on your machine to localhost:19002 on the server ip_address. john is the user used for SSH authentication. In order for this to work, you need to have a valid private key on your host machine (valid means it is authorised by the john user, i.e. its corresponding public key is deposited in the ~/.ssh/authorized_keys file of the john user).
Of course you can also use other tools to create a tunnel (e.g. PuTTY for Windows).
Under the hood: Expo runs inside a Docker container, so a special network configuration had to be arranged in order to redirect network traffic from the server host to the container. In the 5th step of the CI/CD deploy job an iptables entry was added inside the container, which redirects requests arriving on 0.0.0.0:19002 to 127.0.0.1:19002. Then port 19002 of the container was bound to port 19002 of the server host. In order to realise this, the container had to be executed in --privileged mode (sources for the implementation: stackexchange.com and superuser.com).
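If the Developer Tools still don't show up, a hedged check on the server that the DNAT rule from step 5 is actually in place:

# List the NAT PREROUTING rules inside the container
docker exec my_app iptables -t nat -L PREROUTING -n --line-numbers
# Expected: a DNAT rule for tcp dpt:19002 with destination 127.0.0.1:19002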
Summary
Let’s summarise the procedure:
- A push to the Repository triggers the GitLab pipeline.
- The build job will build and push a Docker Image to the container registry.
- The deploy job will connect to our server via SSH and start a new container using the latest image from the registry.
- Our app can now be started on any Android/iOS smartphone with internet access and the Expo App installed, by opening the Expo link: exp://server_ip:19000 (or, if you have a domain: exp://domain_name:19000). More about Expo Deep Links.
A convenient addition would be a simple landing page with instructions on how to install your React Native app, a hyperlink to your Expo link and a QR code representation of the link (generate the QR code with qr-code-generator.com, or see the CLI sketch below). The hyperlink is useful for users who open the landing page on their smartphone; they can simply tap it. The QR code is useful for users who open the landing page on their computer; they can scan it from the screen with their smartphone. Here's an example of such a landing page (it's in German but you get the idea): http://besmarth.ch. The landing page will simplify user on-boarding, and the pipeline will feed users with the most up-to-date code, altogether leading to faster improvement cycles.
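If you prefer generating the QR code from the command line instead of the linked website, a hedged sketch using the qrencode tool (assumed to be installed via your package manager; replace server_ip with your server or domain):

# Render the Expo link as a QR code image for the landing page
qrencode -o expo-link.png "exp://server_ip:19000"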