INA Digital Edu

Building technologies to create irreversible transformation in improving Indonesia’s education system.

Building a Scalable Testing Infrastructure: GovTech Edu’s Use of Cloud Emulator for Macrobenchmark Test

18 min read · Dec 23, 2024


Writers: Hamnah Suhaeri, Anjas M Bangun, Wahid Nur Rohman

TLDR

In our Android super app, an issue was identified where the application sometimes freezes and requires a restart to function normally, a problem not detected by our existing functional testing. Performance degradation, such as sudden frozen frames, became evident only after production deployment. To address this, further research was necessary to find a more effective tool for measuring and monitoring app metrics and preventing crashes and freezes.

The goal is to ensure thorough testing before the app is released to real users. Given budget constraints and the inability to establish a device farm, we utilize existing resources and focus on observing trends rather than exact numbers. The tool currently runs in staging and targets the production environment, and both device emulators and real devices are acceptable for comparing metrics before and after implementation.

Prologue

As a large organization supporting the government in developing its digital products, GovTech Edu provides solutions to simplify access to education through its digital platforms. One of the digital products developed by GovTech Edu is the “Merdeka Mengajar” Android super app.

It is essential that any application released to the public functions optimally in accordance with the agreements outlined in the Business Requirements Document (BRD) and Technical Requirements Document (TRD). The development team is responsible for testing the application to meet these expectations. Various tests are conducted to ensure reliability and proper functionality, including functional tests, regression tests, API tests, unit tests, and more. However, another crucial aspect of testing is conducting a benchmark test for the app.

The purpose of conducting an app benchmark test is to ensure the application performs well and to obtain an initial report before it is released to the public, allowing for any necessary improvements. The results from the benchmark test can be observed to gain initial analysis and identify early errors, such as app crashes or freezes. These observations can serve as a foundational reference for application improvements and inform management’s decision on whether to proceed with the release, ensuring that users do not receive a defective product and maximizing user satisfaction.

The raw report produced by the Macrobenchmark test execution is processed, and data extraction is performed to generate a summary report in .json format. This summary report is stored as historical data in Google Sheets, where it is ready to be consumed by Looker Studio for data visualization. The use of Google Sheets and Looker Studio aims to maximize the resources already available to GovTech while still delivering high-quality results: Google Sheets provides ease of access and collaboration, while Looker Studio offers advanced and user-friendly visualization capabilities.
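
To illustrate this extraction step, below is a minimal sketch (in Python, which is already part of our VM toolchain) that flattens the *-benchmarkData.json files produced by the Jetpack Macrobenchmark library into summary rows with minimum, maximum, median, and a p95 computed from the individual runs. The directory layout and the final summary schema shown here are simplified assumptions, not our exact report format.

# summarize_benchmark.py
# Minimal sketch: flatten Jetpack Macrobenchmark *-benchmarkData.json output
# into a summary report. Paths and the output schema are illustrative assumptions.
import glob
import json


def p95(values):
    """Return an approximate 95th percentile of the individual run samples."""
    ordered = sorted(values)
    index = max(0, round(0.95 * len(ordered)) - 1)
    return ordered[index]


def summarize(report_dir="macrobenchmark_report/build_report"):
    rows = []
    for path in glob.glob(f"{report_dir}/**/*-benchmarkData.json", recursive=True):
        with open(path) as f:
            data = json.load(f)
        for benchmark in data.get("benchmarks", []):
            for metric_name, metric in benchmark.get("metrics", {}).items():
                runs = metric.get("runs", [])
                rows.append({
                    "test": benchmark.get("name"),
                    "metric": metric_name,
                    "minimum": metric.get("minimum"),
                    "maximum": metric.get("maximum"),
                    "median": metric.get("median"),
                    "p95": p95(runs) if runs else None,
                })
    return rows


if __name__ == "__main__":
    with open("macrobenchmark_summary.json", "w") as f:
        json.dump(summarize(), f, indent=2)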

By executing Macrobenchmark Testing in our customized cloud emulator environment, we can efficiently and thoroughly identify application performance metrics before the application is deployed to users. Additionally, we can pinpoint specific screens prone to frequent issues. When performance issues arise, the exact occurrence time and historical data can be tracked, facilitating debugging and determining the most effective resolution. This practice enhances application development quality and delivers a better user experience.

What is Macrobenchmark Testing?

Benchmark testing is a method used to evaluate the performance of an application. Macrobenchmark testing, as a specialized type of benchmark testing, focuses on assessing the performance of an app during real-world usage scenarios.

Macrobenchmark tests are designed to evaluate the app’s performance in terms of user interaction and overall experience, rather than focusing solely on individual functions or components. The key differences between Macrobenchmark testing and standard benchmark testing are summarized below:

Macrobenchmark tests simulate real user actions to assess how the app performs under typical use cases, including:

  • Startup Time: Measuring how long it takes for the app to start and become usable.
  • UI Performance: Evaluating the responsiveness and smoothness of user interactions.
  • Resource Usage: Observing the app’s impact on system resources like memory and CPU during common tasks.

The goal of Macrobenchmark testing is to ensure that the app performs well in realistic scenarios and provides a satisfactory user experience. This type of testing helps identify performance issues that may not be apparent in isolated functional tests, allowing developers to optimize their applications for better efficiency and responsiveness.

What is a Cloud Device Emulator?

The Cloud Device Emulator is a tool used to simulate and test Android applications within a virtual environment hosted in the cloud. It enables developers to run their apps on virtual devices without requiring physical hardware, which is particularly useful for testing various scenarios and configurations.

Developers can create their own Cloud Device Emulator using cloud services such as Google Cloud Platform (GCP) on their own virtual machines. This approach eliminates the need for subscriptions to specific providers. Essentially, developers have the flexibility to build and manage their own emulation environments, facilitating effective application testing and development. Additionally, by building an Android device emulator on a custom virtual machine, we can reduce costs by avoiding the need to keep the VM running continuously. We can schedule the VM or device to power on only as needed, thereby minimizing operational expenses.

At GovTech Edu, we conduct the Macrobenchmark testing by setting up a custom cloud emulator on our own.

Why do we set up our own cloud device emulator for Macrobenchmark testing?

Conducting app benchmarks comes with numerous challenges, including the lack of a permanent storage location for physical devices and the need for benchmark test behaviour to accurately replicate real-world user conditions. These obstacles can hinder effective testing and limit the ability to ensure optimal app performance.

There are several solutions for performing Macrobenchmark testing, including the use of a third-party tool. After extensive trials and exploration, we chose to implement Macrobenchmark testing in combination with a custom-built Cloud Device Emulator. This approach offers several benefits, such as:

  • Customizability and Accuracy: Tailor the environment to suit specific testing requirements, closely mimicking real-world scenarios that we envision.
  • Cost Efficiency: Eliminates additional subscription costs and allows us to control operational expenses on-demand.
  • Scalability and Flexibility: Can be easily adapted to accommodate varying app testing scenarios, configurations, and workloads, providing the team with the flexibility to test across multiple use cases.

Cloud Android Emulator Architecture Design

The web-based Android emulator is built from source code in the official Google repository, with support for the Kernel-based Virtual Machine (KVM) and an Android Virtual Device (AVD). The use of a web-based Android client aims to simplify the debugging process for developers when issues or challenges arise during the execution of macrobenchmark tests. Below is the architecture diagram of the Cloud AVD.

Source: https://source.android.com/

The cloud server hosting the Android emulator contains an AVD pre-installed within the emulator. The `android-emulator-webrtc` application, developed using React.js, serves as the interface that displays the emulator’s UI as a web service. It also captures user interactions, which are sent to the server as commands. Communication between the Android emulator and `android-emulator-webrtc` is facilitated by the Linux application `Goldfish-webrtc-bridge`. Before a user can interact with the Android emulator on the server via a client browser, data traffic is managed by Envoy, which redirects HTTP traffic from port 80 to HTTPS on port 443, provides a gRPC proxy, verifies access tokens, handles the self-signed HTTPS certificate, and forwards requests to Nginx, which hosts the React application. Instead of relying on a JWT service and a TURN server for security, access is proxied internally through SSH.

In this web-service-based Android emulator, QEMU, a type 2 hypervisor, is utilized on top of KVM, the type 1 hypervisor that has been included in the Linux kernel since version 2.6.20. QEMU exposes KVM’s direct hardware access while adding a user interface (non-headless). QEMU operates similarly to typical virtual machines, with the guest OS (Android, which in this case also runs within a Docker Compose container) running on top of it. In addition to the hypervisor, nested virtualization must be enabled on Google Compute Engine so that a virtual machine can run inside another virtual machine, which is why the custom boot image is built with VMX enabled.

Macrobenchmark Test Execution Flow Diagram

The macrobenchmark testing system using the cloud Android emulator begins by building all the necessary images using `docker-emu-builder`. These required images include: Envoy, Nginx, Nvidia OpenGL, and Android Emulator. All these images are stored in a VM, ready to be deployed as Docker containers. The image build process is executed only once when the VM is initially created.

The pipeline scheduler in GitLab triggers the pipeline, which executes several jobs. The first job in the pipeline is `start-android-vm`. In this job, a container on Kubernetes executes the `gcloud` command to start the VM via a service account. Once this job is completed, the next stage is the `adjust-emu-resource` job. This job sequentially executes several critical commands to launch the containers using the images stored in the VM. These essential commands include: deleting all old containers and ADB keys (to avoid unexpected errors), stopping the ADB server, generating new ADB keys, setting environment variables, launching new containers via Docker Compose, connecting the emulator to a specific port, and ensuring that the Android emulator container is ready for use.

Once the Android emulator container is ready, the next job executes the preprocessing commands to prepare for the macrobenchmark test. After all the initial requirements in the preprocessing phase are fulfilled, the macrobenchmark test is run. Upon completion of the macrobenchmark execution, the `stop-android-vm` job runs in parallel with the job that processes the raw output report into a summary report that can be used by subsequent reporting jobs. The final output of the macrobenchmark test report, in its simplified form, is sent to Slack for developers to review the results. A more detailed summary report is sent to Google Sheets for historical data storage. This historical data in Google Sheets is auto-synced for data visualization as graphs in Looker Studio.
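
As an example of the simplified Slack report step, a short script along the lines of the sketch below could post the summary through a Slack incoming webhook. The SLACK_WEBHOOK_URL environment variable, the summary file name, and the message layout are illustrative assumptions rather than our exact implementation.

# send_slack_report.py
# Minimal sketch: post a simplified macrobenchmark summary to Slack via an
# incoming webhook. The webhook URL, file name, and message layout are
# illustrative assumptions.
import json
import os
import urllib.request


def send_summary(rows):
    lines = [
        f"*{row['test']}* {row['metric']}: median={row['median']} p95={row['p95']}"
        for row in rows
    ]
    payload = {"text": "Macrobenchmark result summary:\n" + "\n".join(lines)}
    request = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],  # webhook bound to the report channel
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status


if __name__ == "__main__":
    with open("macrobenchmark_summary.json") as f:
        send_summary(json.load(f))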

How GovTech Utilizes Cloud Device Emulator for Macrobenchmark Testing

The entire macrobenchmark test execution leverages cloud infrastructure for Android emulators using Google Cloud Platform (GCP), while the CI/CD process is managed through GitLab. Terraform is employed to facilitate the setup and configuration of the infrastructure. Ansible is used to streamline the VM preconditions related to software setup, configuration management, and the deployment of necessary applications. Docker is utilized to package and run the Android emulator application within an isolated environment on the VM. Below is the code snippet for building the bootable disk with the custom image.

resource "google_compute_disk" "ubuntu_2004_bootable_disk" {
name = "ubuntu-2004-bootable-disk"
project = module.project.project_id
type = "pd-balanced"
zone = "${YOUR_ZONE}"
image = "ubuntu-os-cloud/ubuntu-2004-lts"
size = 100
description = "Disk for custom images"
labels = {
environment = "staging"
}
physical_block_size_bytes = 4096
}

resource "google_compute_image" "ubuntu_2004_vmx_enabled" {
name = "ubuntu-2004-vmx-enabled"
project = module.project.project_id
family = "qa-core-staging"
source_disk = google_compute_disk.ubuntu_2004_bootable_disk.self_link
storage_locations = ["${YOUR_ZONE}"]
}

After creating the custom image, the next step is to create an instance or virtual machine for the Android emulator. The VM must enable HTTP and HTTPS traffic and utilize the custom image on the boot disk. Below is the Terraform code snippet:

module "android_vm" {
source = "git::${YOUR_PROJECT_MODULE}"

project = module.project.project_id
environment = "staging"

name = "android-vm"
machine_type = "n2-standard-16"
zone = "${YOUR_ZONE}"

deletion_protection = true

disk_image = "ubuntu-2004-vmx-enabled"
disk_size = 256
disk_type = "pd-balanced"

metadata = {
enable-oslogin = "true"
}

service_account = google_service_account.qa-core-staging.email
scopes = ["cloud-platform"]

allow_stopping_for_update = true
depends_on = [
google_compute_resource_policy.runner_start_stop_daily,
module.iam_sa_default_account
]
resource_policies = [google_compute_resource_policy.runner_start_stop_daily.self_link]

labels = {
env = "staging"
purpose = "android-vm"
goog-ec-src = "vm_add-tf"
}

tags = ["android-vm", "http-server", "https-server"]
}

We also added a VM shutdown schedule to optimize resource usage, ensuring that the VM is powered off if it remains on outside of working hours.

resource "google_compute_resource_policy" "runner_start_stop_daily" {
name = "runner-start-stop-daily"
region = "${YOUR_REGION}"
project = module.project.project_id
description = "Force stop instance in qa-core-staging project"
instance_schedule_policy {
vm_stop_schedule {
# Sunday - Friday, 17.00 WIB
schedule = "0 17 * * 1-5"
}
time_zone = "${YOUR_TIME_ZONE}"
}

lifecycle {
create_before_destroy = true
}
}

Apply the Terraform configuration to create the VM according to the specified setup.

Once the VM is created, we can install the dependencies and configure all the necessary components on the VM using Ansible. The required dependencies include curl, software-properties-common, android-sdk, Android command line tools, qemu-kvm, libvirt-daemon-system, libvirt-clients, bridge-utils, cpu-checker, unzip, wget, Node.js, npm, Java 17, Python 3.12, Docker, and GitLab Runner.

In addition to the dependencies, we need to set up the docker-emu builder using the following Ansible code snippet:

### Using docker-emu:
- name: Cloning android emulator container script
  git:
    repo: https://github.com/google/android-emulator-container-scripts.git
    dest: /home/gitlab-runner/android-emulator-container-scripts

- name: Make android container script RWX mode for all users
  file:
    path: /home/gitlab-runner/android-emulator-container-scripts
    mode: 0777
    state: directory
    recurse: yes

- name: Downloading android system image
  get_url:
    url: https://dl.google.com/android/repository/sys-img/{{ android_system_image_version }}.zip
    dest: /home/gitlab-runner/android-emulator-container-scripts
    mode: '0755'

- name: Downloading android emulator
  get_url:
    url: https://dl.google.com/android/repository/{{ android_emulator_version }}.zip
    dest: /home/gitlab-runner/android-emulator-container-scripts
    mode: '0755'

- name: Installing SDK packages
  shell: |
    yes | {{ android_sdkmanager }}/sdkmanager --licenses \
      && yes | {{ android_sdkmanager }}/sdkmanager "build-tools;{{ android_sdk_version }}" \
      && yes | {{ android_sdkmanager }}/sdkmanager "platforms;android-{{ android_platform_version }}" \
      && rm -rf /usr/lib/android-sdk/build-tools/27*
  args:
    executable: /bin/bash

- name: Building emulator images with emu-docker
  shell: |
    cd /home/gitlab-runner/android-emulator-container-scripts/ \
      && source ./configure.sh >> /dev/null 2>&1 \
      && emu-docker create {{ android_emulator_version }}.zip `echo {{ android_system_image_version }} | sed 's@.*/@@'`.zip \
      && ./create_web_container.sh
  args:
    executable: /bin/bash

Ansible execution is only performed once during the initial VM creation stage. After all VM requirements are set up, the next step is to create a pipeline configuration that will be executed through the pipeline scheduler in GitLab. These jobs include starting the Android emulator VM, adjusting the emulator’s resources, executing the macrobenchmark test, performing the reporting process, and shutting down the VM to optimize resource usage. The pipeline scheme that will be executed via GitLab can be seen in the following image:

The start-android-vm job in GitLab sends a command to GCP to start a specific VM using the gcloud CLI and a service account. Below is the code snippet related to this job:

start-android-vm:
  image: "gcr.io/google.com/cloudsdktool/google-cloud-cli:459.0.0-alpine"
  extends:
    - .macrobenchmark-rules
  stage: before-ci
  script:
    - export GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS_WARTEST}
    - gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS
    - gcloud compute instances start android-vm --project=${YOUR_PROJECT_NAME} --zone=${YOUR_VM_ZONE}

The adjust-emu-resource job is responsible for reconfiguring the Android emulator container that will be run. This configuration can include the emulator’s RAM size, CPU size, ADB key generator (for emulator authorization), and emulator profile. This job also includes a script designed to wait for and ensure that the Android emulator is fully started and ready for use. Below is the code snippet related to this job:

adjust-emu-resource:
  tags:
    - android-vm
  stage: before-test
  extends:
    - .macrobenchmark-rules
  before_script:
    - cat /etc/bash.bashrc | grep 'export' | awk '/=/ {print $0}' > ~/systemd_profile && source ~/systemd_profile
    - docker stop $(docker ps -aq) && docker rm $(docker ps -aq)
    - rm -rf ~/.android
    - adb kill-server
    - mkdir -p ~/.android
    - adb keygen adbkey
    - mv adbkey* ~/.android
    - adb start-server
  script:
    - cd /home/gitlab-runner/android-emulator-container-scripts
    - echo "Setup emulator with CPU Size=$CPU_SIZE"
    - echo "Setup emulator with EMULATOR_PARAMS=$EMULATOR_PARAMS"
    - docker-compose -f js/docker/docker-compose-build.yaml -f js/docker/development.yaml up -d
    - docker ps -a
    - adb connect localhost:5555
    - |
      timeout=300
      start_time=$(date +%s)
      while [ "$(adb shell getprop sys.boot_completed | tr -d '\r')" != "1" ]; do
        current_time=$(date +%s)
        elapsed_time=$((current_time - start_time))
        if [ "$elapsed_time" -ge 150 ]; then
          echo "Timeout reached 150 seconds. Restarting adb server..."
          adb kill-server
          adb start-server
        fi
        if [ "$elapsed_time" -ge "$timeout" ]; then
          echo "Exceed timeout. Exiting..."
          adb logcat
          exit 1
        fi
        echo "Still waiting for boot.."
        sleep 5
      done

The execute-macrobenchmark job is responsible for running the macrobenchmark test itself. This job also includes a macrobenchmark preprocessor that initializes the test requirements, such as file signing and token generation for the user test, as we use a credential injection approach. Below is the code snippet related to this job:

execute-macrobenchmark:
  image: $IMAGE_URL
  timeout: 2h
  tags:
    - android-vm
  stage: test
  extends:
    - .caching
    - .macrobenchmark-rules
  before_script:
    - ${OUR_PRECONDITION}
  script:
    - mkdir macrobenchmark_report
    - export ANDROID_HOME=/usr/lib/android-sdk
    - adb devices
    - ./gradlew :macrobenchmark:connectedCheck -Pbc.GURU_TOKEN=$MACROBENCHMARK_GURU_TOKEN -PsigningTypeServerEnvironment=production -Pandroid.testInstrumentationRunnerArguments.androidx.benchmark.enabledRules=Macrobenchmark |& tee -a macrobenchmark_report/macrobenchmark.log
  after_script:
    - mv modules/test/macrobenchmark/build/reports/androidTests/connected/benchmark macrobenchmark_report/html_report
    - mv "$(find . -path '*/modules/test/macrobenchmark/build/outputs/connected_android_test_additional_output/benchmark/connected/*' -type d | tail -n 1)" macrobenchmark_report/build_report
  allow_failure: true
  artifacts:
    paths:
      - macrobenchmark_report
    expire_in: 3 days
    when: always

The send-macrobenchmark-slack-report job is designed to process the raw report output from the macrobenchmark test execution into a summary format. The send-macrobenchmark-metric-report job processes the raw report output and automatically saves it to Google Sheets as historical data. This data in Google Sheets is then used by Looker Studio for auto-synced data visualization. The stop-android-vm job is responsible for shutting down the VM via a GCP command. Below is the code snippet related to this job:

stop-android-vm:
  image: "gcr.io/google.com/cloudsdktool/google-cloud-cli:459.0.0-alpine"
  extends:
    - .macrobenchmark-rules
  stage: after-qa-test
  when: always
  tags:
    - himem
  script:
    - export GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS_WARTEST}
    - gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS
    - gcloud compute instances stop android-vm --project=${YOUR_PROJECT_NAME} --zone=${YOUR_ZONE}
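
For the send-macrobenchmark-metric-report job, appending the summary rows to Google Sheets can be done with a short script along the lines of the sketch below. The gspread library, the service-account file path, the spreadsheet key, and the worksheet name are assumptions used for illustration, not our exact implementation.

# append_metrics_to_sheets.py
# Minimal sketch: append macrobenchmark summary rows to a Google Sheet as
# historical data. Credentials path, sheet key, and worksheet name are
# illustrative assumptions.
import json
from datetime import datetime, timezone

import gspread


def append_history(summary_path, sheet_key, worksheet_name="macrobenchmark-history"):
    gc = gspread.service_account(filename="service-account.json")
    worksheet = gc.open_by_key(sheet_key).worksheet(worksheet_name)

    with open(summary_path) as f:
        rows = json.load(f)

    timestamp = datetime.now(timezone.utc).isoformat()
    for row in rows:
        worksheet.append_row([
            timestamp,
            row["test"],
            row["metric"],
            row["minimum"],
            row["maximum"],
            row["median"],
            row["p95"],
        ])


if __name__ == "__main__":
    append_history("macrobenchmark_summary.json", sheet_key="YOUR_SHEET_KEY")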

Put simply, the entire VM setup and test execution process can be illustrated in the brief workflow below:

How Our Team Works with this Macrobenchmark Testing System

Test Report

There are three types of reports used to present the results of the Macrobenchmark test execution:

A. Slack Report

The Slack report is sent upon the completion of the Macrobenchmark test execution process. The test results will be dispatched to the #android-automation-report channel. There are two types of Slack reports generated from the test execution, as follows:

  • An error report for the execute-macrobenchmark job, caused by issues with executing script commands (technical constraints outside of the macrobenchmark test), such as the unavailability of an online device from the emulator, which prevents the test from being executed.
  • A report indicating that all script commands for the execute-macrobenchmark job were successfully executed, showing either that the macrobenchmark tests achieved a 100% success rate or that one or more tests failed.

B. HTML Report

The HTML report generated is the output from the macrobenchmark test build, which is provided directly and attached to the Slack report. The HTML report presents information including the success rate of the macrobenchmark test itself and detailed explanations of the macrobenchmark test error logs.

C. Looker Studio

The Looker dashboard is used to display data and graphs from historical macrobenchmark test results over time in detail. It is divided into two sections for presenting the data: Overall Summary and Each Test Case Summary.

Overall Summary

The Overall Summary aims to provide a comprehensive overview of the data over time, including metrics such as RAM and CPU usage, test duration, and the timing of the testing.

Each Test Case Summary

The Each Test Case Summary presents macrobenchmark test results for each test feature, with a primary focus on metrics such as the p95, minimum, maximum, and median values. It includes data visualizations with filters for test type, test name, and the minimum, maximum, median, and p95 values.

Scripting Macrobenchmark Test

To start with, we chose to create a macrobenchmark that measures startup metrics and frame metrics during scrolling. This approach allows us to capture crucial performance indicators that directly impact user experience. By focusing on startup metrics, we can evaluate how quickly our app launches and becomes interactive, which is often a user’s first impression of the app. Frame metrics, measured during scrolling, provide insights into the smoothness and responsiveness of the user interface. We implemented this test by simulating a user launching the app and performing a series of scroll actions on the home page. By analyzing these metrics, we can identify potential bottlenecks, optimize resource loading, and improve overall app performance. This macrobenchmark serves as a baseline for future comparisons, enabling us to track performance improvements or regressions as we develop and update our app.

Startup test

@LargeTest
@RunWith(Parameterized::class)
class StartupModeBenchmark(
    private val startupMode: StartupMode
) {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun startup() = benchmarkRule.measureRepeated(
        packageName = TARGET_PACKAGE,
        metrics = listOf(StartupTimingMetric()),
        startupMode = startupMode,
        iterations = 5,
        setupBlock = {
            if (startupMode == StartupMode.COLD) killProcess()
            pressHome()
        }
    ) {
        startActivityAndAllowNotifications()
        device.wait(Until.gone(By.res("shimmer")), 10_000)
    }

    companion object {
        @Parameterized.Parameters(name = "mode={0}")
        @JvmStatic
        fun parameters(): List<Array<Any>> = listOf(
            StartupMode.COLD,
            StartupMode.WARM,
            StartupMode.HOT
        ).map { arrayOf(it) }
    }
}

The code above evaluates app startup performance across three different modes: COLD, WARM, and HOT. The test measures the startup time with the `StartupTimingMetric()`, simulating real-world scenarios by killing the app process for cold starts and pressing the home button. It waits for the app’s shimmer effect to disappear before completing each iteration, providing a comprehensive look at how the app performs during startup in different states until the user can interact with the application.

Home scrolling test

@RunWith(AndroidJUnit4::class)
class HomeScrollingBenchmark { // Should we name it "fling" instead?
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun testHomeScrolling() = benchmarkRule.measureRepeated(
        packageName = TARGET_PACKAGE,
        metrics = listOf(FrameTimingMetric()),
        iterations = DEFAULT_ITERATIONS,
        setupBlock = {
            killProcess()
            startWithClearTaskAndWait("https://guru.kemdikbud.go.id/")
            skipOnboarding()
            logInWithAlreadyOnDeviceAccount()
            Thread.sleep(IDLE_FAST) // Seems 'waitForIdle' doesn't work for Compose
        },
    ) {
        device.wait(Until.findObject(By.text("Beranda")), ACTION_TIMEOUT)!!
        repeat(3) {
            device.swipe(
                device.displayWidth / 2, device.displayHeight * 3 / 4,
                device.displayWidth / 2, device.displayHeight * 1 / 4,
                2,
            )
            Thread.sleep(FLING_DURATION)
            device.swipe(
                device.displayWidth / 2, device.displayHeight * 1 / 4,
                device.displayWidth / 2, device.displayHeight * 3 / 4,
                2,
            )
            Thread.sleep(FLING_DURATION)
        }
    }
}

The code above is designed to evaluate the smoothness and performance of scrolling interactions on the home screen, using `FrameTimingMetric()` to measure rendering times. The test simulates a real user experience by killing the app process, starting the app, skipping onboarding, and logging in. It then performs a series of vertical scroll gestures on the Home screen, measuring frame rendering metrics during these interactions.

Identifying Issues from the Metrics Report in the Looker Dashboard

The macrobenchmarking process was developed to gather early feedback before releasing our apps into production. We follow a two-week release cycle and always conduct regression testing, which includes both end-to-end (E2E) automation tests and macrobenchmark performance tests. Passing both tests is essential for the release to be considered eligible.

With sufficient data in our test reports, we can determine whether the new candidate release has stable, improved, or degraded performance. For example, if we notice a slight decline in startup time, we investigate by comparing the code changes between the release candidate and the previous release. This involves analyzing the startup code locally to identify any new processes that may have been introduced. Similarly, if we observe slower home scrolling performance, we compare the home code to see if any changes were made.
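
As a simplified illustration of this comparison, the sketch below flags metrics whose p95 regressed beyond a chosen threshold between the previous release and the release candidate. The 10% threshold and the summary-report format are assumptions for illustration, reusing the summary structure sketched earlier.

# compare_releases.py
# Minimal sketch: flag metrics whose p95 regressed beyond a threshold between
# the previous release and the release candidate. The threshold and the
# summary-report format are illustrative assumptions.
import json


def load_summary(path):
    with open(path) as f:
        return {(row["test"], row["metric"]): row for row in json.load(f)}


def find_regressions(previous_path, candidate_path, threshold=0.10):
    previous = load_summary(previous_path)
    candidate = load_summary(candidate_path)
    regressions = []
    for key, row in candidate.items():
        baseline = previous.get(key)
        if not baseline or not baseline["p95"]:
            continue
        change = (row["p95"] - baseline["p95"]) / baseline["p95"]
        if change > threshold:
            regressions.append((key, baseline["p95"], row["p95"], change))
    return regressions


if __name__ == "__main__":
    for (test, metric), old, new, change in find_regressions(
        "summary_previous_release.json", "summary_release_candidate.json"
    ):
        print(f"REGRESSION {test} {metric}: p95 {old} -> {new} (+{change:.0%})")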

By following this procedure, we can proactively address potential performance issues and prevent them from affecting the production release.

Summary

Macrobenchmark tests are an invaluable tool for monitoring and investigating performance issues in critical areas like app startup and UI scrolling. By regularly running these benchmarks, which are integrated into our pipeline, we can identify performance regressions in startup time or frame rendering and maintain optimal app performance.

Having historical graphs that track changes in these metrics helps quickly pinpoint when a significant slowdown occurred, enabling developers to easily trace it back to the specific code merged at that time for efficient debugging and resolution. These benchmarks act as an early warning system, ensuring that performance remains a priority in every update.

Incorporating Macrobenchmark testing into our Software Development Life Cycle reinforces our commitment to consistently delivering a seamless and user-focused app experience, serving millions of teachers and hundreds of thousands of school headmasters and administrators nationwide.

About The Writers

Hamnah Suhaeri

Hamnah Suhaeri works in engineering as a software engineer in test. With over 5 years of experience in quality engineering, she has developed a keen interest in moving into the managerial side of the Quality Engineering platform (Core QA) at GovTech Edu.

Anjas M Bangun

He is a QA Core professional at GovTech Edu. He graduated with cum laude honors from Institut Teknologi Sepuluh Nopember (ITS) in 2019 and won several robotics competitions. He began his career in Information Technology after graduating from university and has gained more than five years of experience with various multinational companies. Anjas has contributed significantly to GovTech Edu by developing numerous automation tools alongside his team. Additionally, he has served as a speaker at ISQA (the biggest QA organization in Indonesia).

Wahid Nur Rohman

Wahid is an Android Engineer at Govtech Edu, currently part of the core apps team. His work focuses on maintaining vital applications, ensuring their performance and stability, and developing app architecture. Wahid has had the opportunity to build the macrobenchmark test for PMM, which helps evaluate the application’s performance and identify areas for improvement.
