API Management & Security Series — 3Scale — Part 3 — Multitenant installation for large deployments

Tommer Amber
8 min readJul 15, 2020


Multitenant Architecture at a High Level

About multitenancy

“Multitenancy” refers to a software architecture in which a single instance of software runs on a server and serves multiple tenants. (Wikipedia)

In the case of 3Scale, a multitenant deployment is the right way to, on the one hand, give developers the freedom to manage their API gateways without tight coupling to the 3Scale management team, while simultaneously keeping centralized management of the tool as a whole in the hands of the 3Scale administrators.

I’m a big believer in learning by example, so here’s a good one:

Imagine an organization with two hundred separate departments, each developing a different API. That organization also runs 3Scale, which is supposed to give the organization as a whole a way to expose its APIs to the public in a safe manner.

In the beginning, the company had only two APIs, and the 3Scale administrators had plenty of free time after the first exposure of those APIs; after a while, we got to where we are now: 200 APIs and not much free time left (quite the opposite, actually).

The wrong way to approach this situation is to keep things as they are: let the 3Scale administrators do all the work in 3Scale, and make every department wait while they expose some other, unrelated API.

The right way to do it is quite simple:

Provision each application department a separate 3Scale tenant, all manageable from a single master, so each department can apply whatever configuration it desires, for and by itself (the self-service approach, while still following the network security compliance rules), and let the 3Scale administrators do their job: updates, upgrades, provisioning new features, etc.

That is exactly what I'm going to show you.

Let’s do some installation

You’ll be surprised by how easy this kind of deployment is; all we need is some basic knowledge of Ansible playbooks and roles and we’re all set. If you don’t have that knowledge yet, I strongly recommend learning it right away.

Now that you’re all familiar with Ansible, let’s begin.

When I first tried this kind of deployment, I found that the GitHub project on the matter and the ansible-galaxy role were not up to date for OCP 4.x, and that some extra explanation would have helped me a lot. So follow along; I hope to do a better job explaining everything you may need and to show the fixes that worked for me. If you still need help, you can always open an issue on either of those pages.

Tip 1: First and foremost, the ansible-galaxy page says you need a minimum of Ansible 2.4. I did the whole installation on Ansible 2.8 and it worked just fine, so make sure you meet that minimum.
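As a quick pre-flight check, a small shell sketch like the following (my own helper, not part of the role) can compare the installed Ansible version against the 2.4 minimum:

```shell
# Hypothetical pre-flight helper: succeeds when version $1 >= minimum $2.
# `sort -V` orders version strings numerically, so the minimum must sort first.
version_ok() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Pull the version number out of the first line of `ansible --version`.
installed=$(ansible --version 2>/dev/null | head -n1 | grep -oE '[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n1)

if version_ok "$installed" "2.4"; then
  echo "Ansible $installed meets the 2.4 minimum"
else
  echo "Ansible ${installed:-(not found)} is older than 2.4" >&2
fi
```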

First, let’s clone the Ansible role using the ansible-galaxy command:

$ ansible-galaxy install gpe_mw_ansible.3scale_multitenant --force -p $HOME/.ansible/roles

That will make a copy of the role files under $HOME/.ansible/roles.

The role needs some Python packages, so let’s install them as well:

# dnf install python3-lxml
# dnf install python3-openshift
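Before moving on, a quick sanity check (my own helper, not part of the role) can confirm that the Python modules the role depends on are actually importable:

```shell
# Return success when every module in the comma-separated list imports cleanly.
py_mods_ok() {
  # $1 is a comma-separated import list, e.g. "lxml, openshift"
  python3 -c "import $1" 2>/dev/null
}

if py_mods_ok "lxml, openshift"; then
  echo "Python dependencies OK"
else
  echo "Missing python3-lxml and/or python3-openshift" >&2
fi
```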

Tip 2: The role requires you to be logged in to the OCP cluster with admin permissions.
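To verify that tip before the playbook fails on it, a rough pre-flight sketch (assumes the `oc` CLI is installed) can confirm you are logged in and that the current user can create namespaces cluster-wide:

```shell
# Who are we logged in as? Empty output means no active session.
current_user=$(oc whoami 2>/dev/null)

# `oc auth can-i` answers whether the current user may perform an action.
if [ -n "$current_user" ] && oc auth can-i create namespaces --all-namespaces >/dev/null 2>&1; then
  echo "Logged in as $current_user with sufficient permissions"
else
  echo "Log in with admin permissions first, e.g.: oc login -u <admin-user> <api-url>" >&2
fi
```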

Finally, let’s create the playbook:

$ echo "
- hosts: all
  become: false
  gather_facts: false
  roles:
    - gpe_mw_ansible.3scale_multitenant
" > /tmp/3scale_multitenant.yml

3Scale Multitenant Ansible Role — Explainer

I think a professional doesn’t just run whatever command he or she sees in some guide; you will have a much better long-term experience with any product you work with if you understand how it works in the background. Even basic knowledge makes a huge difference.

So, with those two cents given, let’s examine the file tree of the Ansible role we just installed (it should be at ~/.ansible/roles/gpe_mw_ansible.3scale_multitenant):

$ tree

├── defaults
│ └── main.yml
├── meta
│ └── main.yml
├── README.adoc
├── tasks
│ ├── main.yml
│ ├── pre_workload.yml
│ ├── remove_workload.yml
│ ├── tenant_loop.yml
│ ├── tenant_mgmt.yml
│ ├── wait_for_deploy.yml
│ └── workload.yml
└── templates
└── limitrange.yaml
  • defaults/main.yml — Default variable values used by the other tasks; they can be overridden in various ways, for example on the ansible-playbook command line, in template files, etc.
  • tasks/main.yml — The main role task, which runs all the others.

Use whichever IDE or text editor you like to examine the tasks/main.yml file and see the order of the tasks. First, tasks/pre_workload.yml performs some verifications before the other tasks start, such as making sure the local machine is logged in to the OCP cluster and that the logged-in user has admin access; it also creates the 3Scale multitenant project. After that, tasks/workload.yml performs the AMP installation. Last but not least, tasks/tenant_loop.yml creates the sub-tenants under the “master” tenant, which only the 3Scale admin should have access to.

To make sure everything runs as expected, we should pass the desired variable values as parameters for the Ansible role to work with.

# See the explainer after the note
$ export OCP_AMP_ADMIN_ID=api0
$ export API_MANAGER_NS=3scale-api0
$ export RESUME_CONTROL_PLANE_GWS=false
$ export SUBDOMAIN_BASE=<change me>
$ export use_rwo_for_cms=false
$ export rht_service_token_user=<change me>
$ export rht_service_token_password=<change me>
# Please read the following note on the SMTP variables
$ export smtp_host=<change me> # change to your SMTP provider. example: smtp.sendgrid.net
$ export smtp_port=587
$ export smtp_authentication=login
$ export smtp_userid=<change me>
$ export smtp_passwd=<change me>
$ export smtp_domain=<change me> # example: gmail.com
# Make sure they are both valid
$ export adminEmailUser=<change me> # example: tom
$ export adminEmailDomain=<change me> # example: gmail.com
# Relevant for the tenant provisioning
$ export CREATE_GWS_WITH_EACH_TENANT=true
$ export ocp_user_name_base=user
$ export tenant_admin_user_name_base=api
$ export use_padded_tenant_numbers=false

Note!! The SMTP environment variables are only relevant if you intend to use the user-registration capability in the 3Scale Developer portal. If that is not the case, just initialize them with “” (an empty string). If you do need that feature enabled, a provider like SendGrid (used in the example above) has a great free plan and a really clear user guide.
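A single unset variable can make the playbook fail halfway through, so a small shell helper (my own, not part of the role; bash-specific) can verify that the exports above actually took effect before we continue:

```shell
# Report which of the listed environment variables are unset or empty.
# (bash indirect expansion: ${!v} is the value of the variable named by $v)
missing_vars() {
  local missing=""
  for v in $1; do
    [ -z "${!v}" ] && missing="$missing $v"
  done
  echo "$missing"
}

required="OCP_AMP_ADMIN_ID API_MANAGER_NS SUBDOMAIN_BASE rht_service_token_user rht_service_token_password adminEmailUser adminEmailDomain"

missing=$(missing_vars "$required")
if [ -n "$missing" ]; then
  echo "Still unset:$missing" >&2
else
  echo "All required variables are set"
fi
```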

Explainer

  • OCP_AMP_ADMIN_ID — The OCP user who manages the 3scale-multitenant AMP namespace.
  • API_MANAGER_NS — The namespace the AMP will be installed in. It’s very important to make sure it does not already exist before running the ansible-playbook.
  • RESUME_CONTROL_PLANE_GWS — Specifies to the AMP that the default staging and production APIcasts are not intended for data-plane connections.
  • SUBDOMAIN_BASE — The OCP cluster sub-domain URL.
  • use_rwo_for_cms — For RWO-only (persistent-volume access mode) environments, such as AWS labs that offer only the gp2 StorageClass, it is possible to deploy the AMP on RWO PVs only. (The Ansible task edits the 3Scale template to enable that option.)
  • rht_service_token_user — RHT registry service-account name, as per https://access.redhat.com/terms-based-registry (requires a Red Hat registry user).
  • rht_service_token_password — RHT registry service-account password, from the same page.

Note!! You can avoid the need for the last two options by editing the role to pull the 3Scale template from the community registry instead. I won’t demonstrate that in this guide.

  • CREATE_GWS_WITH_EACH_TENANT — If we want the tenants to be independent, we should create a separate GW for each of them.
  • ocp_user_name_base — The playbook generates OCP users with permissions to manage the tenant-dedicated GWs; they are generated in a sequence. In our case: user1, user2, etc.
  • tenant_admin_user_name_base — This one is badly named; it actually configures the users that will be able to log in to each tenant-specific 3Scale UI. They are generated in a sequence; in our case: api1, api2, etc. We will get the passwords in the output file after the playbook finishes.
  • use_padded_tenant_numbers — In some lab cases there’s a need to add padding to ocp_user_name_base (user01, user02, etc.). We won’t use it.
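Tying back to the API_MANAGER_NS point above, a quick guard (my own sketch; assumes `oc` is on the PATH and you have an active login) can confirm the target namespace does not already exist before the playbook runs:

```shell
# Succeeds when the project does NOT exist yet, which is what we want here.
ns_is_free() {
  ! oc get project "$1" >/dev/null 2>&1
}

if ns_is_free "$API_MANAGER_NS"; then
  echo "$API_MANAGER_NS is free; safe to run the playbook"
else
  echo "$API_MANAGER_NS already exists; delete it or pick another name" >&2
fi
```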

Tasks fixes

I found some role tasks that require editing to work properly, and I advise you to apply the following fixes to avoid unnecessary bugs.

Fix #1

Edit the following file

$ vim ~/.ansible/roles/gpe_mw_ansible.3scale_multitenant/tasks/workload.yml

Search for the word “Download” in the file (case sensitive) and comment out the “when” condition in that section. Save the file when you finish.


Fix #2

Edit the following file

$ vim ~/.ansible/roles/gpe_mw_ansible.3scale_multitenant/tasks/tenant_loop.yml

Search for “Create threescale-portal-endpoint” (case sensitive) and replace the existing shell command with the following:

oc create secret generic apicast-configuration-url-secret \
--from-literal=password={{ THREESCALE_PORTAL_ENDPOINT }} \
-n {{ gw_project_name }}

Note!! The original command is deprecated in OCP 4.x.
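To double-check the replacement behaves, you can read the secret back once a tenant gateway project exists. A sketch (assumes `oc`, an active login, and your real gateway project name in the hypothetical `gw_project` shell variable):

```shell
# Decode the portal endpoint stored in the secret the fixed task creates.
# jsonpath pulls the base64-encoded value; `base64 -d` decodes it.
show_portal_endpoint() {
  oc get secret apicast-configuration-url-secret -n "$1" \
    -o jsonpath='{.data.password}' | base64 -d
}

show_portal_endpoint "$gw_project"
```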

Run the playbook

Just run the following command:

$ ansible-playbook -i localhost, -c local /tmp/3scale_multitenant.yml \
-e"ACTION=apimanager" \
-e"subdomain_base=$SUBDOMAIN_BASE" \
-e"OCP_AMP_ADMIN_ID=$OCP_AMP_ADMIN_ID" \
-e"API_MANAGER_NS=$API_MANAGER_NS" \
-e"smtp_port=$smtp_port" \
-e"smtp_authentication=$smtp_authentication" \
-e"smtp_host=$smtp_host" \
-e"smtp_userid=$smtp_userid" \
-e"smtp_passwd=$smtp_passwd" \
-e"is_shared_cluster=false" \
-e"rht_service_token_user=$rht_service_token_user" \
-e"rht_service_token_password=$rht_service_token_password" \
-e"use_rwo_for_cms=$use_rwo_for_cms"

Note!! The “is_shared_cluster” parameter limits the resources the AMP will use by defining cluster quotas: if it is “true”, the CPU limit is 30 cores and the RAM limit is 30 Gi; if it is “false”, the CPU limit is 6 cores and the RAM limit is 12 Gi.

Pro-Tip: Open a new terminal and run “oc get pods -w” while the playbook runs, to make sure everything deploys correctly. If some pod crashes for some reason, try to roll it out again before the Ansible task times out, to avoid rerunning the whole playbook. For example: `oc rollout cancel dc/zync-que` and then `oc rollout retry dc/zync-que`.

When the playbook finishes successfully, we can deploy the tenants for our users. Run the following command to do so:

$ ansible-playbook -i localhost, -c local /tmp/3scale_multitenant.yml \
-e"ACTION=tenant_mgmt" \
-e"subdomain_base=$SUBDOMAIN_BASE" \
-e"API_MANAGER_NS=$API_MANAGER_NS" \
-e"start_tenant=1" \
-e"end_tenant=20" \
-e"adminEmailUser=$adminEmailUser" \
-e"adminEmailDomain=$adminEmailDomain" \
-e"create_gws_with_each_tenant=$CREATE_GWS_WITH_EACH_TENANT" \
-e"ocp_user_name_base=$ocp_user_name_base" \
-e"tenant_admin_user_name_base=$tenant_admin_user_name_base" \
-e"use_padded_tenant_numbers=$use_padded_tenant_numbers" \
-e"rht_service_token_user=$rht_service_token_user" \
-e"rht_service_token_password=$rht_service_token_password"

Note!! The “start_tenant” and “end_tenant” parameters define the range of tenants that the playbook will generate. They don’t have to start at 1 if you already have some tenants deployed, and they are not limited to 20 tenants; that’s just an example.

After the tenant provisioning completes, you will see messages similar to the following at the end of the ansible standard out:

ok: [localhost] => {
    "msg": [
        "tenant_output_dir: <dir path>",
        "tenant_provisioning_log_file = <log file location>",
        "tenant_provisioning_results_file = <results file location>",
        "start and end tenants = 1 20",
        "create API Gateways for each tenant = true"
    ]
}

In the results file, you’ll have all the credentials to access each tenant.

Note!! If you’d like, at this point you can create your own custom tenant users and OCP users to manage the different tenants; you’re not committed to using the users the playbook created.
