Configure Jitsi — Open source web conferencing solution on AWS with Terraform

Prashant Bhatasana
Published in AppGambit
5 min read · Jul 23, 2020


Terraform is very popular nowadays. It enables you to create and manage infrastructure with code, and that code can be stored in version control.

In this article, we will talk about how we can set up our own version of this application in any AWS region.

You can check out the project on the Jitsi GitHub repository.

Jitsi Meet is a free, fully encrypted, open-source video conferencing solution that provides high-quality video and audio without a subscription or the need to create an account.

Jitsi is a set of open-source projects that allow you to build a secure video conference system for your team. The core components of the Jitsi project are Jitsi Videobridge and Jitsi Meet. There are free and premium services based on Jitsi projects, such as HipChat, Stride, Highfive, and Comcast.

Jitsi Meet is the heart of the Jitsi family. It's an open-source JavaScript WebRTC application that allows you to build and deploy scalable video conferences.

The tool provides features like:

  • Sharing of desktops, presentations, and more
  • Inviting users to a conference via a simple, custom URL
  • Editing documents together using Etherpad
  • Trading messages and emojis while video conferencing, with integrated chat.

Let’s start our exercise!

Pre-Requisites To Creating Infrastructure on AWS Using Terraform

  • We require AWS IAM API keys (an access key and a secret key) with permissions to create and delete all the required AWS resources; they can be provided to Terraform as environment variables, as shown below.
  • Terraform should be installed on the machine. If Terraform is not installed yet, you can download and install it from here.
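For example, the AWS provider picks up credentials from environment variables. A minimal sketch, with placeholder values:

export AWS_ACCESS_KEY_ID="<your-access-key>"        # placeholder
export AWS_SECRET_ACCESS_KEY="<your-secret-key>"    # placeholder
export AWS_DEFAULT_REGION="us-east-1"               # pick any AWS region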

Amazon Resources Created Using Terraform

Networking Module:

  1. AWS VPC with a 10.0.0.0/16 CIDR.
  2. Multiple AWS VPC public subnets that are reachable from the internet, which means traffic from the internet can hit a machine in a public subnet.
  3. Multiple AWS VPC private subnets that are not reachable from the internet directly without a NAT Gateway.
  4. AWS VPC Internet Gateway, attached to the VPC.
  5. Public and private AWS VPC route tables.
  6. AWS VPC NAT Gateway.
  7. Associations of the AWS VPC subnets with the route tables.

Server Module:

  1. Auto-scaling group for the ECS cluster with a launch configuration.
  2. ECR container registry.
  3. ECS cluster with task and service definitions.
  4. ECS container with EC2 as a container instance that runs docker-compose and pulls Docker images from Docker Hub.
  5. A load balancer distributing traffic between the containers.

Let’s talk about the Terraform deployment of the VPC.

Please follow this article for more detail on the Networking module that creates our VPC, subnets, and other networking assets.
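As a rough sketch of what the Networking module provisions (resource names and variables here are illustrative, not the repository's actual code), the public side looks like this; the private subnets, NAT Gateway, and private route table follow the same pattern:

resource "aws_vpc" "this" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "this" {
  vpc_id = "${aws_vpc.this.id}"
}

# One public subnet per availability zone; map_public_ip_on_launch
# makes machines in these subnets reachable from the internet.
resource "aws_subnet" "public" {
  count                   = "${length(var.availability_zones)}"
  vpc_id                  = "${aws_vpc.this.id}"
  cidr_block              = "${cidrsubnet(aws_vpc.this.cidr_block, 8, count.index)}"
  availability_zone       = "${element(var.availability_zones, count.index)}"
  map_public_ip_on_launch = true
}

# The public route table sends internet-bound traffic to the Internet Gateway.
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.this.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.this.id}"
  }
}

resource "aws_route_table_association" "public" {
  count          = "${length(var.availability_zones)}"
  subnet_id      = "${element(aws_subnet.public.*.id, count.index)}"
  route_table_id = "${aws_route_table.public.id}"
}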

Let’s talk about the Terraform deployment of ECS.

Autoscaling Group

The autoscaling group is a collection of EC2 instances; the number of those instances is determined by scaling policies. We will create an autoscaling group using a launch configuration.

Before we launch container instances and register them into a cluster, we have to create an IAM role for those instances to use when they are launched:
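A sketch of that role (the resource names are illustrative; the policy ARN is the standard AWS-managed policy for ECS container instances):

resource "aws_iam_role" "ecs_instance" {
  name = "${var.environment}_ecs_instance_role"

  # Allow EC2 instances to assume this role.
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

# AWS-managed policy that lets the ECS agent register the instance
# into a cluster and pull images from ECR.
resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = "${aws_iam_role.ecs_instance.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance" {
  name = "${var.environment}_ecs_instance_profile"
  role = "${aws_iam_role.ecs_instance.name}"
}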

I used a special kind of filter on the AMI that finds an ECS-optimized image with preinstalled Docker. EC2 m4.xlarge instances will be launched.
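The filter could look like this (a sketch; the name pattern matches Amazon's ECS-optimized Amazon Linux 2 images):

data "aws_ami" "ecs_optimized" {
  most_recent = true
  owners      = ["amazon"]

  # ECS-optimized Amazon Linux 2 AMIs ship with Docker and the ECS agent.
  filter {
    name   = "name"
    values = ["amzn2-ami-ecs-hvm-*-x86_64-ebs"]
  }
}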

If we want our instances to join the ECS cluster we created by name, we have to put that information into user_data; otherwise our instances will be launched into the default cluster.
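A sketch of the launch configuration, with the ECS agent pointed at our cluster through /etc/ecs/ecs.config (the cluster name matches the one defined below):

resource "aws_launch_configuration" "this" {
  name_prefix          = "${var.environment}_"
  image_id             = "${data.aws_ami.ecs_optimized.id}"
  instance_type        = "m4.xlarge"
  iam_instance_profile = "${aws_iam_instance_profile.ecs_instance.name}"

  # Register the instance into our named cluster instead of "default".
  user_data = <<EOF
#!/bin/bash
echo "ECS_CLUSTER=${var.environment}_cluster" >> /etc/ecs/ecs.config
EOF

  lifecycle {
    create_before_destroy = true
  }
}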

Basic scaling information is described by aws_autoscaling_group parameters.
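For instance (a sketch; var.private_subnet_ids is assumed to be a list of subnet IDs exposed by the Networking module):

resource "aws_autoscaling_group" "this" {
  name                 = "${var.environment}_asg"
  launch_configuration = "${aws_launch_configuration.this.name}"
  vpc_zone_identifier  = "${var.private_subnet_ids}"
  min_size             = 1
  max_size             = 2
  desired_capacity     = 1
}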

With the autoscaling group set up, we are ready to launch our container instances.

Elastic Container Service

ECS is a scalable container orchestration service that allows us to run and scale dockerized applications on AWS.

resource "aws_ecs_cluster" "this" {
name = "${var.environment}_cluster"
}

The cluster name is important here, as we used it previously in the launch configuration's user_data. This is where newly created EC2 instances will live.

To launch a dockerized application we need to create a task — a set of simple instructions understood by the ECS cluster. The task is a JSON definition that can be kept in a separate file:
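A minimal sketch of such a file (the image, memory, and ports are illustrative, not the repository's actual definition; the container name "web" matches the one referenced by the service below):

[
  {
    "name": "web",
    "image": "jitsi/web:latest",
    "memory": 1024,
    "essential": true,
    "portMappings": [
      { "containerPort": 443, "hostPort": 443 }
    ]
  }
]

The task definition resource then points at that file (the path is assumed):

resource "aws_ecs_task_definition" "this" {
  family                = "${var.environment}"
  container_definitions = "${file("task-definitions/jitsi.json")}"
}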

The family parameter is required, and it represents the unique name of our task definition.

The last thing that will bind the cluster to the task is an ECS service. The service guarantees that the desired number of tasks is running at all times:

resource "aws_ecs_service" "this" {
name = "${var.environment}"
task_definition = "${aws_ecs_task_definition.this.id}"
cluster = "${aws_ecs_cluster.this.arn}"
load_balancer {
target_group_arn = "${aws_lb_target_group.this.0.arn}"
container_name = "web"
container_port = "${var.container_port}"
}
launch_type = "EC2"
desired_count = 1
deployment_maximum_percent = 200
deployment_minimum_healthy_percent = 100
}

Now we have discussed all the resources that Terraform creates.

It’s time to run the Terraform script.

Clone this Terraform repository.

  • Now go to the directory and run:
cp sample.terraform.tfvars terraform.tfvars
  • Update the variable values in the terraform.tfvars file.

Note: we need to run the following command because Jitsi recently updated its security policy, so we need to pass strong passwords.

  • After that, run the password generation script:
./gen-passwords.sh

This script updates the values of the following variables.

JICOFO_COMPONENT_SECRET
JICOFO_AUTH_PASSWORD
JVB_AUTH_PASSWORD
JIGASI_XMPP_PASSWORD
JIBRI_RECORDER_PASSWORD
JIBRI_XMPP_PASSWORD

These variables are used in the task definition file.
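With the variables in place, the standard Terraform workflow applies (run from the repository directory):

terraform init    # download the AWS provider and modules
terraform plan    # preview the resources that will be created
terraform apply   # create the infrastructure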


Thank you for reading! If you have anything to add, please send a response or add a note.

