Pama Team: Under the hood

Alexander Ozerov
13 min read · Sep 27, 2023

--

Part 3 of 3: Architecture, technologies, and implementation

I strongly believe that flawless, elegant architecture is not born all at once: it evolves with the product, following close on its heels. If it evolves in any other way, that can only mean one thing: it no longer meets the challenges and needs of the project.

In the case of Pama Team, it worked out well: I was able to scale the original configuration quite smoothly, because at the very beginning of the project I had relied on containerization with Docker.

If you zoom out and look at the diagram, you can identify eight elements of the architecture (fig. 15):

fig. 15. Solution architecture

1. Teams space horizontal scaling system

Built on a Docker server; under the hood are containers running the web and backend applications.

2. Container orchestration system and its management console, Pama Console

Manages the team lifecycle: starting, stopping, and configuring the groups of containers that belong to a team. It is also responsible for application health checks, verifying that applications are not merely running but operating in their normal mode.
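
To illustrate the health-check side, here is a minimal sketch of what such an endpoint can look like on an ASP.NET Core backend; the endpoint path and check names are my own assumptions, not Pama's actual code.

```csharp
// Minimal sketch, assuming ASP.NET Core's built-in health checks; the
// "/healthz" path and the "self" check name are illustrative.
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks()
    // A real check would verify DB connectivity, Minio reachability, etc.
    .AddCheck("self", () => HealthCheckResult.Healthy());

var app = builder.Build();

// An orchestrator (here, Pama Console) can poll this endpoint to tell
// "container is running" apart from "application is serving normally".
app.MapHealthChecks("/healthz");

app.Run();
```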

3. DevOps subsystem

A set of bash scripts for managing the lifecycle of containers and other infrastructure elements.

4. Data storage subsystem

When designing the system, I relied on the flattest possible data structure and on NoSQL, so queries are very fast: much faster than we are used to in Jira. This is achieved by avoiding complex, expensive joins (see the sketch below).
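
The article does not name the specific NoSQL engine, so purely as an illustration (assuming a document store such as MongoDB), here is what a flat, join-free model can look like:

```csharp
// Illustration only: the engine (MongoDB) and all identifiers here are
// assumptions. The point is the flat document: everything a backlog view
// needs lives in one record, so a read is a single indexed filter, no joins.
using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public class BacklogItem
{
    public ObjectId Id { get; set; }
    public string TeamId { get; set; } = "";
    public string Title { get; set; } = "";
    public int StoryPoints { get; set; }
    public string AssigneeName { get; set; } = ""; // denormalized, not joined
    public List<string> Tags { get; set; } = new();
}

public static class BacklogQueries
{
    public static Task<List<BacklogItem>> ForTeamAsync(
        IMongoCollection<BacklogItem> items, string teamId) =>
        items.Find(i => i.TeamId == teamId).ToListAsync();
}
```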

For static file storage I use Minio, a system that is very easy to configure and offers a wide range of customization options. I had trouble enabling minio_is_secure = true at the Minio level, but I solved the problem by encrypting traffic at the Nginx level instead.
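
In code, that workaround looks roughly like this: a sketch with the Minio .NET SDK, where the endpoint, credentials, and bucket are placeholders, and the args namespace varies slightly between SDK versions.

```csharp
using Minio;                  // Minio .NET SDK; in newer versions the *Args
using Minio.DataModel.Args;   // types live in Minio.DataModel.Args

// The client speaks plain HTTP to the local Minio instance (no .WithSSL()),
// while Nginx in front of it terminates the TLS traffic.
var minio = new MinioClient()
    .WithEndpoint("127.0.0.1", 9000)
    .WithCredentials("ACCESS_KEY", "SECRET_KEY")
    .Build();

await minio.PutObjectAsync(new PutObjectArgs()
    .WithBucket("team-files")
    .WithObject("avatar.png")
    .WithFileName("/tmp/avatar.png"));
```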

5. Routing and proxying

Built on Nginx. It works fast enough, but it has one disadvantage: the service has to be restarted whenever a new team's route path is added (a hypothetical sketch of this step follows).
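
To show what adding a route involves, here is a hypothetical sketch; the paths, the config template, and the assumption that the main server block includes these files are all mine. In practice a reload, which keeps existing connections alive, is enough rather than a full restart.

```csharp
// Hypothetical sketch: writes a per-team location block into a file that the
// main server block is assumed to include, then asks Nginx to reload.
using System.Diagnostics;
using System.IO;

static void AddTeamRoute(string team, int backendPort)
{
    var conf = $$"""
        location /{{team}}/ {
            proxy_pass http://127.0.0.1:{{backendPort}}/;
        }
        """;
    File.WriteAllText($"/etc/nginx/teams/team-{team}.conf", conf);

    // "nginx -s reload" re-reads configuration without dropping connections.
    Process.Start("nginx", "-s reload")?.WaitForExit();
}
```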

6. Onboarding system for new teams

A wizard for setting up and creating a new team space. It is implemented with Microsoft Blazor Server, uses the Pama Console API, and relies on OTP codes, Captcha, and email libraries.

7. Main website

A static website with information about the platform, implemented with JS and Bootstrap 4. To create it I used Blocs, a very handy tool when you want results quickly.

8. AI-based assistant

Large language models and their infrastructure stand apart because they have little overlap with the platform infrastructure: they run on a separate server, with their own stack (Python + C), and they use the Hugging Face AI Hub.

As you may have noticed, because a set of Docker containers is run for each team, the applications are not multitenant and do not scale horizontally. There are pros and cons to this. The pros: a) a team always has application instances reserved for it, which is slightly faster than horizontally scaling a single shared application, and b) reliability, since the applications cannot negatively affect each other in any way.

The disadvantages: a) resources must be constantly reserved for all teams, even inactive ones; b) updating applications is more complex, since they cannot all be updated at once and must be rolled out one by one; c) building federations, or SSO, requires a more complex mechanism.

Going forward, I plan to migrate to Kubernetes with an in-house developed orchestration system, native support for multitenancy, and SSO using the Pama ID end-to-end authentication system.

Platform

At the core of Pama Team is an API platform comprising 21 REST controllers and 8 hubs that implement a real-time interaction model based on SignalR; in total, it exposes more than 170 public methods.
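
As an illustration of the real-time side, here is a minimal hub sketch; the hub, method, and event names are mine, not the platform's actual API.

```csharp
// Minimal SignalR hub sketch; all names are illustrative. SignalR groups
// map naturally onto team spaces: events reach only the team that joined.
using Microsoft.AspNetCore.SignalR;

public class CeremonyHub : Hub
{
    public Task JoinTeam(string teamId) =>
        Groups.AddToGroupAsync(Context.ConnectionId, teamId);

    // E.g. a planning vote during PBR is pushed to the whole team instantly.
    public Task CastVote(string teamId, string itemId, int points) =>
        Clients.Group(teamId).SendAsync("VoteReceived", itemId, points);
}
```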

The platform implements 48 object models, of which 11 represent business entities (fig. 16):

fig. 16. Hierarchy of business objects of the platform

To be able to build deep analytics with machine-learning algorithms, I added triggers to the system that log user actions on business objects. A significant advantage of this approach is that the front-end applications generate no extra traffic to the analytics systems; all analysis happens on the backend.
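
One way such a trigger can be realized on the ASP.NET Core side is an action filter; this is a hedged sketch of the idea, not the platform's actual implementation.

```csharp
// Sketch: a filter that records which user touched which endpoint after the
// action completes. A real implementation would write to the analytics store
// rather than standard output.
using Microsoft.AspNetCore.Mvc.Filters;

public class AuditTrigger : IAsyncActionFilter
{
    public async Task OnActionExecutionAsync(
        ActionExecutingContext context, ActionExecutionDelegate next)
    {
        await next(); // run the controller action first

        Console.WriteLine(
            $"{context.HttpContext.User.Identity?.Name} -> " +
            $"{context.ActionDescriptor.DisplayName}");
    }
}

// Registered once, it covers every controller without any client-side code:
// builder.Services.AddControllers(o => o.Filters.Add<AuditTrigger>());
```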

The Pama Team platform is built on Microsoft's cross-platform, open-source framework ASP.NET Core 7.0, which runs efficiently on any popular operating system. Because the framework is compiled for the target operating system, RAM consumption in a container is small (no more than 100 MB with all dependencies), and container initialization takes no more than 70 milliseconds. By comparison, the Java Spring Framework runs on the JVM, because of which initializing a container with the application takes more than 10 seconds, and RAM usage can reach 500 MB.

Mobile apps

If you want to develop a modern mobile app for iOS, you need to be familiar with Swift, Apple's programming language rooted in the C and Objective-C world. Beyond the language itself, you need to know the main frameworks used in the app, as well as the mobile platform's API.

To reach a wide user audience, an application for iOS alone is not enough; you also need an Android application, and that means a completely different programming language, different frameworks, and different APIs.

But even if you manage to learn two programming languages, every new feature has to be implemented twice in two different ways, and that means t2m/2: your effective pace halves. In reality, t2m suffers even more because of platform peculiarities and differences.

This situation pushed enterprising engineers to create cross-platform frameworks that let you develop a mobile application in one language with one set of frameworks and then run it on both mobile platforms, iOS and Android, at the same time.

The cross-platform approach eliminates the need to know the programming languages, frameworks, and APIs of each mobile platform, which is very convenient.

Currently, there are 3 main cross-platform frameworks:

a. Xamarin (MAUI) from Microsoft

b. Flutter

c. React Native

Xamarin was chosen for several reasons:

1. Microsoft is a big player, so the technology will be on the market for a long time: there will be version updates and support

2. Since Xamarin has existed since 2011, there is a huge number of third-party libraries and frameworks for every conceivable use case

3. A large number of large projects have been implemented on Xamarin, and everything works great

4. Since the Pama Team API platform is implemented on .NET, I can unify part of the code: from models to various helpers, Microsoft Identity, and data-encryption libraries

5. I was already very familiar with Microsoft WPF and the XAML markup language used in Xamarin, which greatly lowered the barrier to entry into this technology

Xamarin comes in two flavors: Xamarin Native and Xamarin Forms. Xamarin Native focuses on deeper interaction with the mobile platform APIs and is great for creating native apps with highly complex UIs. If you want to create a native C# mobile app for Android that is in no way inferior to Kotlin or Java development, use Xamarin Android.

Xamarin Forms suited me better (fig. 17, see *12), as my main goal was to maximize reuse of business functionality with UI elements of medium complexity. The application does have several complex UI components; for them I wrote custom Renderers, which interact with the iOS and Android platform APIs at a low level, as sketched after fig. 17.

fig. 17. Xamarin Forms architecture
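
To make the Renderer idea concrete, here is a minimal iOS-side sketch; the control and the blur effect are invented purely for illustration.

```csharp
// Sketch of a Xamarin.Forms custom renderer (iOS side). The control and
// renderer names are illustrative; the pattern is swapping in a native
// UIKit view that Forms has no built-in equivalent for.
using UIKit;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

[assembly: ExportRenderer(typeof(FrostedPanel), typeof(FrostedPanelRenderer))]

public class FrostedPanel : ContentView { }

public class FrostedPanelRenderer : ViewRenderer
{
    protected override void OnElementChanged(ElementChangedEventArgs<View> e)
    {
        base.OnElementChanged(e);
        if (Control == null && e.NewElement != null)
        {
            // Native UIKit blur, configured directly against the platform API.
            var blur = UIBlurEffect.FromStyle(UIBlurEffectStyle.Light);
            SetNativeControl(new UIVisualEffectView(blur));
        }
    }
}
```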

With the Shared Application Code layer, the developer gets maximum reuse of code across the mobile applications.

As for the mobile application's architecture, it is based on the MVVM pattern, which allowed me to reuse a lot of code not only in the mobile application but also in the other applications: the API platform, the management console, and the Pama Team web application.

fig. 18. Reusable components

Fig. 18 shows a model for reusing the source code and components of mobile applications, web applications, and the API platform.

Because all the solutions use .NET (Core and Standard) and the MVVM pattern, several layers can be unified across them:

1. Common libraries

Encryption, authentication, and handling of various formats (XML, JSON). The reuse rate is more than 50%.

2. Object models

More than 80% of all data objects are unified; the remaining 20% reflect platform specifics and differences in business functionality.

3. Viewmodels

ViewModels created for the main product, the mobile application, are partially reused in the web application as well.

The reuse rate between the iOS and Android apps reaches 95%, and between the mobile and web apps it exceeds 30%.
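
A minimal sketch of the pattern behind these numbers: a ViewModel living in a shared .NET Standard library that both the Xamarin and web front ends can bind to. The names here are illustrative.

```csharp
// Shared .NET Standard ViewModel sketch; names are illustrative. Implemented
// once, it is bound by Xamarin.Forms on mobile and reused on the web.
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class BacklogViewModel : INotifyPropertyChanged
{
    private string _filter = "";

    public string Filter
    {
        get => _filter;
        set { _filter = value; OnPropertyChanged(); } // every bound UI reacts
    }

    public event PropertyChangedEventHandler? PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string? name = null) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```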

In my estimation, based on the time spent developing the MVP functionality and the reuse pattern that followed from the right choice of technology stack, I saved at least 50% of development time by building my solutions on .NET Core/Standard and a single MVVM pattern.

Web application technologies

I want to linger a bit on web application development technologies; Pama Team has several web applications:

1. Pama Team’s main client web application

2. The wizard application for creating new teams

3. Platform management console

All of these applications are built on Microsoft's modern and very convenient Blazor framework: the main web application uses Blazor WASM (fig. 19b), the others are built on Blazor Server (fig. 19a).

While Blazor Server and Blazor WASM differ only minimally at the application source-code level, the technologies themselves are very different:

a. Blazor Server: the application renders on the server side, and UI changes are replayed on the client via the SignalR library. This technology is great for simple interfaces and supports even older versions of IE. Its ideal use is administrative panels. Disadvantage: high server load when the application has many users

b. Blazor WASM: a classic web application that uses WebAssembly. Its drawbacks are that the browser must support WASM and that the Blazor JS library is heavy, which hurts initial page-load speed. The pros clearly outweigh the cons: you can reuse a lot of C# code and use convenient markup and components while writing almost no JS (a sketch of both entry points follows)
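
The difference is easiest to see in the entry points; this sketch follows the default .NET 7 project templates rather than Pama's actual code.

```csharp
// Blazor Server (template-style): components render on the server and UI
// diffs travel to the browser over a SignalR connection.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();

var app = builder.Build();
app.MapBlazorHub();               // the SignalR channel mentioned above
app.MapFallbackToPage("/_Host");
app.Run();

// Blazor WASM: the same components compile to WebAssembly and run in the
// browser (requires Microsoft.AspNetCore.Components.WebAssembly.Hosting):
// var wasm = WebAssemblyHostBuilder.CreateDefault(args);
// wasm.RootComponents.Add<App>("#app");
// await wasm.Build().RunAsync();
```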

As for Blazor in general: despite the apparent youth of the technology, it has a very large community, and I found all the components I needed, as well as answers to the questions that arose during development.

I would even note that the number of modern open-source components, libraries, and frameworks is much larger than for Xamarin. I assume this difference is due to Xamarin's focus on building and maintaining enterprise-grade apps.

Blazor and Xamarin are excellent technologies for full-stack development, suitable not only for startups but also for large enterprises, because they have excellent support and a wide range of additional frameworks and libraries, including advanced monitoring, security, tracing, and, of course, DevOps.

Control console and monitoring elements

fig. 20. Pama Console App

As I mentioned earlier, it is built on the Blazor Server framework and, under the hood, uses the Pama Team platform's DevOps API to get container status and run CD/CDP pipelines. Besides managing the lifecycle of a team's environment, the console supports mass operations, such as updating all containers to the latest version, which is very convenient: I haven't done anything manually on the server in a long time.
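
The console talks to Pama's own DevOps API, but as a rough illustration of what a mass operation amounts to, here is the equivalent with the Docker.DotNet client; the label name and socket path are my assumptions.

```csharp
// Rough illustration with Docker.DotNet, not the console's actual code:
// find every container labeled as belonging to a team and restart it so it
// picks up the updated image.
using Docker.DotNet;
using Docker.DotNet.Models;

using var docker = new DockerClientConfiguration(
    new Uri("unix:///var/run/docker.sock")).CreateClient();

var containers = await docker.Containers.ListContainersAsync(
    new ContainersListParameters { All = true });

foreach (var c in containers.Where(c => c.Labels.ContainsKey("pama.team")))
{
    await docker.Containers.RestartContainerAsync(
        c.ID, new ContainerRestartParameters());
}
```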

At the moment, the drawbacks include a long container-initialization process, caused by extra wait intervals I had to add to avoid one of the services in the initialization and configuration chain being unavailable. In the future, I plan to cut this time from a few seconds to hundreds of milliseconds.

Hardware and capacity estimation

The Pama Team MVP's production environment runs on an HP ProLiant bare-metal server with 48 Xeon cores and 256 GB RAM, under Ubuntu 20.04.

The server has a non-standard disk configuration based on a 2 TB ioDrive2 Duo, which delivers the highest database performance.

For the ioDrive2 to work correctly on Ubuntu 20, the drivers had to be compiled for the required version of the operating system.

An important task for anyone operating a platform is to predict server capacity, i.e. how many application instances the current server configuration can support when scaled horizontally.

Making this estimate by simply dividing total memory by one application instance's memory consumption gives only an upper bound; in real conditions, the accuracy differs many times over. So I created 50 test team spaces and ran the most load-intensive scenario on all of them simultaneously: the PBR ceremony, which involves both the REST controllers and the real-time SignalR hubs.

The test produced the following results: peak CPU utilization reached 25%, and RAM utilization reached 12%.

Based on this data, we can make a more accurate prediction: the current server configuration can support about 200 teams running PBR before the CPU saturates (50 teams / 25% = 200). For the passive mode of working with the backlog, where memory is the resource to evaluate, the estimate is more optimistic: about 400 teams (50 teams / 12% ≈ 415, keeping some headroom in reserve).

Challenges and constraints

Because my mode of operation was subject to significant time and resource constraints, I deliberately reduced the scope of automation and infrastructure work. As a result, I worked within the following constraints:

1. I only have dev and production environments, no integration environments

"How so?" you may ask.

Since I was developing all the applications in a single environment, my Mac was the perfect integration environment for me: I could run the backend and mobile applications in debug mode and test end-to-end user scenarios. I think any developer dreams of such a setup.

2. Lack of autotest coverage

According to the statistics, my applications are covered by unit and automated tests at 12%, which is not much.

Manual testing helped me compensate for the missing tests: I tried to walk through user scenarios whenever I had free time, and thanks to that I achieved acceptable product quality.

Predictably, most of the defects were concentrated in the PamaSync-based ceremony modules, and I spent dozens of hours testing and fixing defects in those areas.

3. Due to limitations in the mobile app's interaction with the SignalR hub over HTTPS, I had to add another layer of encryption on top of the JSON

I was unable to solve the problem of integrating the mobile app with the SignalR hub over HTTPS, so the API platform supports working over both HTTP and HTTPS. To keep the transport secure, I implemented additional encryption of the transmitted data at the application layer, as sketched below.
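
A minimal sketch of what "encryption above the transport" can look like; AES here is my assumption, and key handling and names are illustrative.

```csharp
// Sketch: the JSON payload is AES-encrypted before it enters the hub, so
// even a plain-HTTP connection never carries readable data. Key exchange
// and integrity protection are out of scope for this illustration.
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static byte[] EncryptPayload(string json, byte[] key)
{
    using var aes = Aes.Create();
    aes.Key = key;            // e.g. 32 bytes for AES-256
    aes.GenerateIV();

    using var enc = aes.CreateEncryptor();
    var plain = Encoding.UTF8.GetBytes(json);
    var cipher = enc.TransformFinalBlock(plain, 0, plain.Length);

    // Prepend the IV so the receiving side can decrypt.
    return aes.IV.Concat(cipher).ToArray();
}
```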

4. Lack of multitenancy

Due to the architectural features of the MVP, a dedicated set of applications is created for each team. As I said earlier, this is not a bad thing in itself, but it has a number of drawbacks in terms of efficient horizontal scaling, ease of management, and unification of common functions.

For Pama Team, moving to multitenancy means a single database, an efficient horizontal-scaling scheme for a universal backend application, and a common end-to-end authentication service, Pama ID, which will make it possible to share objects not only within a team but also between different teams.

Conclusion

I admit that this is not my first, nor even my fifth, completed project, but it is a landmark for another reason: it is the first one created in the product-design paradigm (design thinking + Lean UX + Agile), where the formation and subsequent validation of hypotheses is logical and elegant and, importantly, rests on a scientific, empirical approach that not only questions new assumptions but also offers a methodology for verifying them with measurable indicators.

In addition to testing the product hypotheses, throughout the development of this product I have been testing another one.

There were moments when I felt as if I were both the driver and the navigator of a rally car entering a new corner at high speed, and I can say it would be incredibly interesting to take another lap at an even higher speed.

Having walked this path myself, I can confidently say that, given the right choice of implementation tools and deep immersion in the business specifics, a good, purposeful engineer (fig. 21) is able to repeat it after me. The work done on the product is rarely tied to some unique gift; it is a consequence of technological savvy: we know which architecture patterns are better, how to make the UX better, what the UI of the best applications looks like, and we can predict how users will behave in a given situation.

fig. 21. Contribution 20–23
