Talking Data for Decision Making with Our Full Stack Engineer

Illustration by Alfian Maulana Latief/Pulse Lab Jakarta

Whilst most users tend to interact with the sleek, front-end interface of a data innovation, Muhammad Rheza “Boge”, our full stack engineer, knows first-hand that there’s more than meets the eye. Boge is in his sixth year at the Lab and is the whiz behind many of our data visualisation tools. But a day at the Lab for him involves much more, such as data cleaning, data analysis and working with end users to come up with tools that are not just flashy, but actually meet their needs. We recently spoke with him to get his take on the emerging role of data science experts like himself in promoting the use of data for better decision making.

Q. For persons who might be unfamiliar with the role of a full stack engineer, how would you describe your work at the Lab?

Whenever I introduce myself as a full stack engineer, I often adjust the explanation depending on the audience. People in the data science field immediately understand the technicalities of working in the world of web, application and software development; for others, terms such as data visualisation and dashboards are usually more relatable and ring a bell.

My skill set spans front-end and back-end development, along with an understanding of how the two connect. I also develop artificial intelligence (AI) models that are deployed across the development and humanitarian sector. In 2015, the National Citizen Feedback Dashboard for Enhanced Local Government Decision-Making was my first project with the Lab. Working with data from LAPOR! (Indonesia’s national complaints handling system) and Twitter, I was tasked with cleaning the datasets to ensure correctness and usability, as well as with text mining analysis on topics such as education, infrastructure and health.
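
To give a flavour of what that cleaning and topic tagging can look like, here is a minimal Python sketch. The keyword lists, cleaning rules and function names are illustrative placeholders only, not the pipeline actually used on the LAPOR! or Twitter data:

```python
import re

# Illustrative topic keywords (hypothetical; the real taxonomy differs)
TOPIC_KEYWORDS = {
    "education": {"school", "teacher", "scholarship"},
    "infrastructure": {"road", "bridge", "electricity"},
    "health": {"hospital", "clinic", "vaccine"},
}

def clean_text(text: str) -> str:
    """Basic normalisation: lowercase, strip URLs, handles and punctuation."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = re.sub(r"[@#]\w+", " ", text)        # drop Twitter handles/hashtags
    text = re.sub(r"[^a-z\s]", " ", text)       # keep letters only
    return re.sub(r"\s+", " ", text).strip()

def tag_topics(text: str) -> list[str]:
    """Assign every topic whose keywords appear in the cleaned text."""
    tokens = set(clean_text(text).split())
    return [topic for topic, kws in TOPIC_KEYWORDS.items() if tokens & kws]

if __name__ == "__main__":
    report = "The road to our school has been damaged for months! @gov_office"
    print(tag_topics(report))  # ['education', 'infrastructure']
```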

More recently, I was involved in a project with the Indonesian Ministry of Foreign Affairs, which used artificial intelligence and natural language processing to analyse digital information received from a number of the Ministry’s global outposts. My role there involved image processing (to extract text and information from images), as well as data preprocessing and text mining (to filter out frequently used words and support classification). This led to the design of a data visualisation tool that identifies important issues and trends to be prioritised for diplomatic engagement.
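
As a rough illustration of that image-to-text step, the sketch below pairs off-the-shelf OCR with a simple frequent-word filter. The libraries (pytesseract, Pillow), the stop-word list and the input file name are assumptions for illustration; the interview does not specify the Ministry project’s actual tooling:

```python
from collections import Counter

from PIL import Image
import pytesseract  # assumes the Tesseract OCR binary is installed

# Tiny illustrative stop-word list; a real pipeline would use a fuller
# Indonesian/English list (e.g. from NLTK or Sastrawi).
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "for", "on"}

def extract_text(image_path: str) -> str:
    """Pull raw text out of a scanned document or image."""
    return pytesseract.image_to_string(Image.open(image_path))

def keyword_counts(text: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Tokenise, drop stop words, and surface the most frequent terms."""
    tokens = [t.lower().strip(".,;:!?") for t in text.split()]
    tokens = [t for t in tokens if t and t not in STOP_WORDS]
    return Counter(tokens).most_common(top_n)

if __name__ == "__main__":
    text = extract_text("dispatch_scan.png")  # hypothetical input file
    print(keyword_counts(text))
```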

Q. Both of the projects you mentioned indicate the growing demand for data science in the public sector. How has this demand evolved especially throughout the pandemic?

A lot has changed, and the pandemic has definitely amplified the importance of timely and accurate data for decision making on public policies. I remember at one point, much of the emphasis was on simply building an array of tools and dashboards, with little thought going into what it takes to access the data, process it and maintain the tools. This led to a surge in the number of tools and dashboards that outpaced institutional capacity, that is, people with the expertise needed to interpret and ground-truth the data insights.

An effective approach we’ve adopted at the Lab focuses on strengthening the institutional capacity of government entities to undertake these tasks. For example, with the analysis we did of the diplomatic communications, we worked side by side with staff within the Ministry and relied on their domain expertise and knowledge at every stage of the project. Through this learning-by-doing approach, institutions can better understand existing needs, as well as the resources (staff, tools, etc.) needed to ensure the sustainability of the tools.

It’s true that the role of a full stack engineer was not always seen as a traditional public sector occupation, but with the growing demand and heightened importance of data for decision making, it’s worthwhile for the public sector to reimagine its workforce based on emerging and future needs.

Q. Data visualisation tools have been useful in the pandemic, both for helping the public to make informed decisions and for informing policymaking. What are some of the challenges in designing these tools?

On one hand it’s great to have all these tools, but their effectiveness comes under scrutiny whenever they stop functioning because of a design choice or because the underlying data becomes outdated. At the Lab, we’ve been looking at the availability of publicly reported data on COVID-19 cases in Indonesia. What we found interesting was that in May 2020, only 20 of the country’s 34 provinces had reporting sites where data was reliably accessible, and of the 523 districts, only 290 had up-to-date websites reporting on COVID-19.

As a full stack engineer trying to make use of the data at the time, I found several inconsistencies in the format in which the data was presented: some reports were published as images, others as PDFs, and so on. For me, this means the demand for data science in the public sector also needs to be matched with appropriate standards, procedures and agreements. Data interoperability, for instance, is important for enhancing the sharing of data between interacting electronic systems across districts, and data standardisation matters for classification, size, units, amongst others.
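
To make the format problem concrete, here is a minimal sketch of the per-format routing that such inconsistency forces on a developer. The parsers shown (Tesseract OCR via pytesseract for images, pdfminer.six for PDFs) are assumed stand-ins, not the tools the Lab actually used:

```python
from pathlib import Path

from PIL import Image
import pytesseract                             # OCR for image-based reports
from pdfminer.high_level import extract_text   # text layer of PDF reports

def load_report(path: Path) -> str:
    """Route a district or provincial report to a format-specific parser."""
    suffix = path.suffix.lower()
    if suffix in {".png", ".jpg", ".jpeg"}:
        return pytesseract.image_to_string(Image.open(path))
    if suffix == ".pdf":
        return extract_text(str(path))
    if suffix in {".csv", ".txt"}:
        return path.read_text(encoding="utf-8")
    raise ValueError(f"No parser registered for '{suffix}' files")
```

Every new format a district invents requires another branch here, which is exactly the maintenance burden that shared publication standards would remove.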

Q. How might emerging trends in the data science field influence the work in the development and humanitarian sector in the years ahead?

In the development and humanitarian sector, there have definitely been more conversations about harnessing big data and real-time analytics these days. However, with this growing interest and exploration, we also have a duty to ensure we are employing these resources safely and ethically. This includes improving our capacities, keeping up with the latest developments, and actively engaging in conversation with diverse experts from government, academia, the private sector and civil society to identify potential risks and harms and create mitigating pathways.

Artificial intelligence has been trending for some time now, but the emergence of Industry 4.0 has placed emphasis on automating traditional practices, for example leveraging artificial intelligence to fast-track emergency response where traditional means tend to be time-consuming. In addition, the Internet of Things (IoT) continues to expand our ability to monitor and evaluate various aspects of our everyday lives in near real time to improve our quality of life.

The adoption of both AI and IoT may be a positive indication that organisations are transitioning into Industry 4.0. However, the biggest challenge rests with the data and the developers: good data may produce good output, and flawed data will produce flawed output. With AI though, we have a huge responsibility beyond our technical skills to consider issues of ethics, inclusiveness and equality when developing data innovations.

Boge was interviewed by Dwayne Carruthers, the Lab’s Communication Manager.

Pulse Lab Jakarta is grateful for the generous support from the Government of Australia


UN Global Pulse Asia Pacific is a regional hub that aims to drive data innovation and sustainable development to ensure that no one is left behind.