Gathering data on Singapore Government Apps

Dave Quah
Government Digital Services, Singapore
7 min read · Mar 1, 2019

In the Singapore Government, every year or on an ad-hoc basis, department X sends out an Excel spreadsheet poll to agencies to gather information about their mobile apps. This “Whole of Government” information is then used to implement policies.

This process is ancient and manual, and sometimes the information being asked for is strange. Let me elaborate.

The problem

On one end, we have department X (the poller), which has to draft Excel spreadsheets for every agency. Once they are ready, emails are blasted out to all agencies, followed by constant chasing and reminders, and finally the Excel compilation nightmare once the replies start flowing in.

On the other end, we have agency employees (the pollees), who are busy with their daily work and now have to find time to reply to these polls. They may not have the information off the top of their heads, so they email, chase and remind others within their agency.

And just to test the system out, I requested the most updated list from department X and did a little investigation. It turns out there are only 70+ records (we definitely have way more than that), there are duplicate entries, some apps no longer exist, and weird questions are being asked. Why the heck are we asking whether the BlackBerry platform is supported?

To summarise:

  • Time-consuming work for both poller and pollee
  • Is data accurate and timely enough?
  • Are we asking for the right metrics?

So with all this in mind, I decided to build a quick prototype in my free time, over about a week, to see if I could address these issues.

Metric definition

I started by reading up on what information could be used to determine an app’s performance, and looked at the information that pollers ask for. There is definitely information that I cannot provide, such as target audience or app costing, but for a start I went ahead with the following:

  • App name
  • Agency name (determine which agency owns the app)
  • App description (understand the purpose of the app)
  • App ratings (customer satisfaction score)
  • Number of downloads (kind of determines usage)

Query testing

Once I had determined the metrics, I had to test what I could retrieve from both the App Store and the Play Store.

For the App Store, I found an iTunes API that I could call. It took a while to get the query right, but the real problem was that it was limited to approximately 20 calls per minute. I would have to upgrade to Apple’s Enterprise plan to bypass the limit, but being the cheapskate that I am, I worked around it by throttling my calls at intervals of about 4–5 seconds :)
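
To give a rough idea of what that throttling looks like, here is a minimal sketch against the public iTunes Search API. The keyword list and the exact sleep interval below are just examples, not what the prototype actually uses.

```python
import time
import requests

ITUNES_SEARCH_URL = "https://itunes.apple.com/search"

def search_app_store(term, country="SG", limit=200):
    """Search the App Store for software matching a keyword."""
    resp = requests.get(
        ITUNES_SEARCH_URL,
        params={"term": term, "country": country, "entity": "software", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

keywords = ["Singapore government", "gov.sg"]  # example keywords only
results = []
for keyword in keywords:
    results.extend(search_app_store(keyword))
    time.sleep(5)  # throttle to stay roughly within the ~20 calls per minute limit
```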

As for the Play Store, I could not find any publicly available API, but since the Google Play Store is on the web, I wrote a simple web crawler to extract the information I need.
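
As a very rough sketch of such a crawler, assuming requests and BeautifulSoup, and noting that the selectors and the package name below are guesses for illustration that will break whenever Google changes the page markup:

```python
import requests
from bs4 import BeautifulSoup

def fetch_play_store_details(package_name):
    """Fetch a Play Store app page and pull out a few basic fields.

    The meta tags used here are illustrative assumptions; Play Store markup
    changes often, so a real crawler needs to be re-checked regularly.
    """
    resp = requests.get(
        "https://play.google.com/store/apps/details",
        params={"id": package_name, "hl": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    title_tag = soup.find("meta", attrs={"property": "og:title"})
    desc_tag = soup.find("meta", attrs={"property": "og:description"})
    return {
        "package": package_name,
        "name": title_tag["content"] if title_tag else None,
        "description": desc_tag["content"] if desc_tag else None,
    }

# Example usage with a hypothetical package name
print(fetch_play_store_details("sg.gov.example.app"))
```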

Prototype strategy

This is a simplistic strategy and probably not the best one, but it seemed alright as a starting point.

  1. Generate keywords for query
  2. Query for developer account ids and filter
  3. Query for mobile apps information from whitelist
  4. Visualisation
  5. Automation

So prototype that even my images are prototypes

1. Generate keywords for query

First, I created a list of 100+ keywords: words like the full names of agencies, “gov.sg”, “Singapore government” and so on. These would be my search terms for both stores to retrieve a list of developer account ids.

2. Query for developer account ids and filter

Next, based on the list of developer account ids, I filtered through them, putting Singapore government accounts into a whitelist and unrelated accounts into a blacklist. The purpose of the blacklist is to identify new developer account ids when I run the query again (new accounts are simply accounts that do not exist in either the whitelist or the blacklist). There are of course some exceptions to this approach. For instance, some apps, such as ActiveSG which belongs to Sport Singapore, actually sit under the developer account iAPPS PTE LTD, but these exceptions are not hard to deal with: just add them to the whitelist as an app id rather than a developer account id.
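
A minimal sketch of that filtering logic, assuming the whitelist and blacklist are simply kept as sets of developer account ids (the ids below are made up):

```python
def find_new_accounts(queried_ids, whitelist, blacklist):
    """Return developer account ids seen in query results but not yet classified.

    Anything in neither the whitelist nor the blacklist is a "new" account
    that needs a manual decision.
    """
    return set(queried_ids) - set(whitelist) - set(blacklist)

whitelist = {"123456789"}   # known government developer accounts
blacklist = {"987654321"}   # unrelated accounts already rejected
queried = ["123456789", "555555555", "987654321"]

print(find_new_accounts(queried, whitelist, blacklist))  # {'555555555'}
```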

3. Query for mobile apps information from whitelist

Once I had my whitelists ready for both stores, I used the iTunes API and the web crawler on the Play Store to get the information I need. After a few rounds of refining my queries and whitelists, I ended up with something like this.

Whitelist query results for Singapore Government app store apps
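
For the App Store side of this step, the per-developer lookup could look roughly like the sketch below, assuming the public iTunes lookup endpoint and made-up artist ids:

```python
import time
import requests

def apps_for_developer(artist_id, country="SG"):
    """List the apps published under one App Store developer (artist) id."""
    resp = requests.get(
        "https://itunes.apple.com/lookup",
        params={"id": artist_id, "entity": "software", "country": country, "limit": 200},
        timeout=30,
    )
    resp.raise_for_status()
    # The first result is the developer record itself; keep only the apps.
    return [r for r in resp.json().get("results", []) if r.get("wrapperType") == "software"]

whitelisted_developers = ["123456789"]  # example artist ids only
apps = []
for artist_id in whitelisted_developers:
    apps.extend(apps_for_developer(artist_id))
    time.sleep(5)  # same throttling as before
```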

4. Visualisation

I dumped the results into an AWS S3 bucket, hooked it up to a table component with search and sort features, and displayed some basic data. Now I can answer basic questions like “What is the best performing app in the government?” or “How many apps does GovTech own?”.

Table displaying merged data collected

A point to note: the data here is merged on app name, which is probably not ideal. If an app is named differently on the two stores, it is treated as two separate apps.
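
As a rough sketch of that merge-by-name approach and the S3 dump, assuming boto3 and a hypothetical bucket name (the records are example data only):

```python
import json
import boto3

# Example records only; in practice these come from the iTunes API and the Play Store crawler.
app_store_apps = [{"name": "Example Gov App", "rating": 4.5}]
play_store_apps = [{"name": "Example Gov App", "rating": 4.3}]

def merge_by_name(app_store_apps, play_store_apps):
    """Merge the two result sets, keyed on a normalised app name."""
    merged = {}
    for app in app_store_apps:
        merged[app["name"].strip().lower()] = {"app_store": app}
    for app in play_store_apps:
        merged.setdefault(app["name"].strip().lower(), {})["play_store"] = app
    return merged

merged = merge_by_name(app_store_apps, play_store_apps)

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-gov-apps-bucket",   # hypothetical bucket name
    Key="merged/apps.json",
    Body=json.dumps(merged).encode("utf-8"),
    ContentType="application/json",
)
```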

5. Automation

To automate this, I can use an AWS Lambda function, schedule the whitelist querying as a weekly job, and pipe the information into data storage.
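
A minimal sketch of what such a Lambda handler could look like, with collect_app_data standing in for the querying and merging steps sketched above and a hypothetical bucket name:

```python
import json
import boto3

s3 = boto3.client("s3")

def collect_app_data():
    """Placeholder for the store querying and merging steps sketched earlier."""
    return {"example gov app": {"app_store": {"rating": 4.5}, "play_store": {"rating": 4.3}}}

def handler(event, context):
    """Weekly job: query both stores for whitelisted apps and store the merged result."""
    merged = collect_app_data()
    s3.put_object(
        Bucket="example-gov-apps-bucket",   # hypothetical bucket name
        Key="merged/apps.json",
        Body=json.dumps(merged).encode("utf-8"),
        ContentType="application/json",
    )
    return {"apps_collected": len(merged)}
```

The weekly schedule itself would then just be a CloudWatch Events/EventBridge rule with a rate(7 days) expression targeting this function.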

Does this solve the problem?

In my opinion, this only solves part of the problem. The prototype may have addressed the time-consuming part of things, and maybe a little on data accuracy and timeliness, and perhaps getting some of the right metrics, but more work needs to be done (see improvements).

It can provide potentially useful information such as app ratings by current version (for iOS), or the minimum version supported, but it lacks other information, such as target audience or officer-in-charge, which department X may require.

What I think is interesting, though, are the other potential opportunities it opens up. Assuming we have an accurate “Whole of Government” apps dataset that is freely open for anyone to view, this gives a lot more visibility to agencies and our leaders.

Agencies can compare how their mobile app ratings fare against others of a similar nature, and discover similar apps from other agencies, which could lead to collaboration and learning from one another, or maybe even consolidation into a single app.

Leaders are able to make better informed decisions on our mobile app offerings and implement better policies, perhaps eliminating apps that have little to no usage, or apps that are just not useful at all.

Ultimately, with proper data, we will be able to improve our current mobile app offerings, reduce citizen app fatigue, and free up resources that can be allocated to other areas of higher importance.

Improvements & Limitations

There is definitely a lot more that can be done.

1. Strategy is still rather manual

Searching by keywords and filtering developer account ids works pretty well for the App Store, as there are typically not many new developer account ids discovered, but the Play Store returns thousands of false positives. It would be ridiculous if someone had to manually filter through these every week or month. This might not matter much, since agencies do not change their developer account ids and new apps typically fall under the same account, but it would be great if the filtering could be automated as well.

2. Some apps might have been missed

Technically, by increasing the search keywords, flagging and filtering, the list will get better over time. But because the filtering process is still manual, a better alternative might just be to get feedback from agencies when they cannot find their app.

3. Near real time data is not possible

Since I am getting this data from the App Store and Play Store web pages, the update frequency is tied to how often Apple or Google refresh the data there. If real time data is needed, then this method will not work.

4. Currently there is no past data analysis

Assuming both stores update their data daily, this is doable as long as the AWS Lambda queries the stores frequently enough. It would be interesting to have a “week on week” or “month on month” comparison to see how an app’s ratings have changed over time. Perhaps this could be a measurement criterion for the success of an app release?
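
Purely as an illustration of what a week on week comparison could look like, assuming weekly rating snapshots are kept in a made-up format like the one below:

```python
def week_on_week_change(snapshots):
    """Compute the rating change between the two most recent weekly snapshots.

    `snapshots` is a list of {"week": ..., "ratings": {app_name: rating}} dicts,
    ordered oldest to newest. This format is an assumption for illustration.
    """
    previous, latest = snapshots[-2]["ratings"], snapshots[-1]["ratings"]
    return {
        app: round(latest[app] - previous.get(app, latest[app]), 2)
        for app in latest
    }

snapshots = [
    {"week": "2019-W08", "ratings": {"Example Gov App": 4.2}},
    {"week": "2019-W09", "ratings": {"Example Gov App": 4.5}},
]
print(week_on_week_change(snapshots))  # {'Example Gov App': 0.3}
```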

5. Currently user comments are not pulled

This is also doable, but to make it even better, why not apply topic and sentiment analysis to user comments? It would be interesting if user comments could be grouped into topics like “Speed” (based on sentences like “speed is slow”, “fast loading app”, etc.) to show how much each topic has affected app ratings.

End

Well, that is it from me. This is my first post on Medium, and I would greatly appreciate it if you could give me some feedback, whether it is about the writing or the usefulness of the prototype. Thanks!

Source code at:
