Test #4: Miami 211

Iterating on finding human services with Open Referral

Ernie Hsiung
CUTGroup Miami
6 min read · Feb 12, 2017

--

Proctor Keisha with one of our 22 CUTGroup Miami testers for the evening.

For our fourth Civic User Testing Group (CUTGroup) test, we tested Miami211 (source code), an open-source directory and search engine for health and human services providers in Miami-Dade County that is currently in early development.

This was the second of two planned CUTGroup Miami tests in partnership with Open Referral, iterating on the research done in CUTGroup test #3’s Ohana Web Search project. [Editor’s note: we’ve been a little out of order and plan on publishing the results of the third CUTGroup Miami test soon.]

Methodology

  • On January 24th, 2017, an email was sent to 592 people asking for availability.
  • 51 people responded, and we sent them a follow-up email on January 31st to finalize their availability.
  • The test was conducted Thursday, February 2, 2017, in person at CIC Miami coinciding with their weekly Venture Cafe Miami event.
  • 22 testers were paired with proctors, who followed a proctor script assigning the testers specific tasks to complete. Proctors observed the testers’ interactions with Miami211, noted any choke points, and recorded their feedback.
  • These testers used their own devices, a combination of laptops, mobile phones and tablets.
  • Members of the prototype development team were present, acting as both observers and proctors. Incidentally, this was also the first CUTGroup Miami session where the developer could watch residents give real-time feedback on an application they had built.

Our goals for the test

  • The primary aim of this test was to identify usability issues — that is, areas where Miami211’s interface was broken, unintuitive, or inefficient, such that users were not able to easily find the services they needed.
  • The secondary goal was to determine whether the taxonomical terms used in the Ohana API were too technical for ordinary users. Ohana features a detailed four-tier taxonomy with over 10,000 classifications for service providers (illustrated in the sketch after this list). We wanted to know whether testers who were not familiar with the health and human services sector could use those designations to help find the services they needed.
  • The third goal was to identify issues with the data itself.
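
As a rough illustration of why taxonomy terms can feel technical, here is a hypothetical four-tier entry. The code and tier names below are invented for illustration and are not actual Ohana taxonomy data.

```typescript
// Hypothetical four-tier taxonomy entry, for illustration only.
// The code and tier names below are invented, not actual Ohana data.
interface TaxonomyTerm {
  code: string;   // classification code, e.g. "10-18-2000" (made up)
  tier1: string;  // broad domain
  tier2: string;  // sub-domain
  tier3: string;  // service family
  tier4: string;  // most specific label a user would have to recognize
}

const exampleTerm: TaxonomyTerm = {
  code: "10-18-2000",
  tier1: "Basic Needs",
  tier2: "Food",
  tier3: "Emergency Food",
  tier4: "Food Pantries",
};
```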

Note: Because the project is early in development, and was originally designed for an organization no longer directly affiliated with the Open Referral project, Miami211’s visual design is deliberately generic. Thus, this test was not designed to measure testers’ responses to visual design elements.

A screenshot of the Miami 211 landing page.

“What do you think the purpose of the site is?”

  • When we asked our testers what they thought the purpose of the site was, fourteen correctly surmised that it was to help people find health and human services in Miami-Dade. However, only one of those testers noticed that it helped “find services near you.”
  • Two testers thought it was a general search engine for Miami-Dade.
  • Two testers thought its purpose was to help people find services offered by the County.
  • Two testers thought the site provided “assistance” or “help with something.”
  • One tester thought it was limited to “health information.”
  • One tester was not sure.

“What actions do you think you could perform on this page?”

  • Nineteen testers responded that they could enter a search.
  • Four thought they could click a “Browse by Category” link.
  • Three thought they could search by location, though one of them felt this feature was unnecessary.

“Can you think of a service you need that this site might be able to help you find?”

This question was designed to allow testers to demonstrate how they would use the site (versus giving them a predetermined task to complete). Nineteen of the 22 testers answered “yes,” and for those testers we continued with this section.

“Show me how you would use this site to look for that service.”

On the “Home” page:

  • 12 of 19 began by typing in search terms.
  • Several searched using multi-word strings or natural language (e.g., “doctor for teens,” “volunteer meals on wheels”), which the Ohana API is not designed to process; these searches returned zero results (see the sketch after this list).
  • One user found the search field’s autosuggest/typeahead feature helpful.
  • One user, noting the “search by location” and “use my current location” options, commented that she would not allow the site to find her current location because “it creeps me out.”
  • 7 of 19 selected a link from the “Browse by Category” menu.
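
To make the zero-results behavior concrete, here is a minimal sketch of a keyword search request. The `/search?keyword=` endpoint shape is an assumption based on how Ohana-style APIs typically expose search; the comments describe what our testers experienced rather than a formal specification.

```typescript
// Minimal sketch of a keyword search against an Ohana-style API.
// The endpoint shape (/search?keyword=...) is an assumption for illustration.
async function searchServices(apiBase: string, query: string): Promise<unknown[]> {
  const url = `${apiBase}/search?keyword=${encodeURIComponent(query)}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Search request failed with status ${response.status}`);
  }
  return response.json();
}

// searchServices(API_BASE, "doctor for teens") -> often zero results,
//   because the whole phrase is matched as a single keyword.
// searchServices(API_BASE, "doctor")           -> returns matching providers.
```
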
A screenshot of the Miami 211 Search results page.

On the “Search results” page:

  • One tester felt that the search returned “too many [results] to sift through.” (Notably, this tester was on a smartphone, and a filter feature that allows users to refine their search results was disabled on mobile.)
  • Two testers who had used the “Browse by Category” approach felt that the results were not what they expected given the name of the category. Both went back to the home page and began a new search instead.

“Did you find what you were looking for?”

Only 10 of 19 testers felt they had found what they were looking for. Despite this, when asked to rate the ease of the task from 1 (very difficult) to 5 (very easy), the average rating across all 19 testers was 4 (easy).

“How would you improve this process?”

  • Four testers suggested that the “Browse by Category” links should be a drill-down menu.
  • Two testers suggested that the keyword search should allow partial matches (see the sketch after this list).
  • Two testers noted incomplete data in the database or back end.
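
One reading of the partial-match suggestion is to match prefixes or substrings of service names and keywords instead of requiring whole-word hits. The sketch below is a hypothetical client-side filter, not how Miami211 or the Ohana API behave today; a real fix would more likely live in the API’s query layer.

```typescript
// Hypothetical partial-match filter over services already fetched from the API.
interface Service {
  name: string;
  keywords: string[];
}

function partialMatch(services: Service[], term: string): Service[] {
  const needle = term.trim().toLowerCase();
  if (!needle) return [];
  return services.filter(
    (service) =>
      service.name.toLowerCase().includes(needle) ||
      service.keywords.some((keyword) => keyword.toLowerCase().startsWith(needle))
  );
}

// partialMatch(services, "vol") could surface "Volunteer Services"
// even though "vol" is not a complete keyword.
```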

Scenarios

We presented our testers with a variety of scenarios. The full report is available if you’d like to read the results of these scenarios in more detail.

Prototype developer David talks with a CUTGroup Miami tester.

Usability problems to investigate

  • The most commonly reported issue by far was testers searching with complex strings or natural language instead of single keywords; these searches return zero results and contribute to a poor user experience.
  • Geolocation appeared to map to the IP address of a neighborhood-level network, often several miles from the tester’s actual location, so any distances in the search results were inaccurate (see the sketch after this list).
  • jQuery-powered features did not work on iPads or mobile phones.
  • The Home page gave the user too many choices by default. Many users were drawn to the category menu and overlooked the search form when they first viewed the Home page. Many who did notice the search form did not notice the “search by address” feature, which might better serve as an advanced option.
  • A few testers struggled to understand the purpose or scope of the site.
  • Several found the “Refine by service(s)” checkbox behavior confusing, or felt that the list was too long to be helpful.
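
On the geolocation point, a lookup based on the visitor’s IP address resolves to wherever the network is registered, not to the device. Here is a minimal sketch of asking the browser’s Geolocation API instead, which prompts for permission (keeping location sharing opt-in, per the “creeps me out” feedback) and returns device-level coordinates when granted.

```typescript
// Minimal sketch: request device-level coordinates from the browser's
// Geolocation API, with an explicit failure path so the UI can fall back
// to a manually entered address when permission is denied or unavailable.
function getUserCoordinates(): Promise<{ lat: number; lng: number }> {
  return new Promise((resolve, reject) => {
    if (!("geolocation" in navigator)) {
      reject(new Error("Geolocation unsupported; fall back to address entry"));
      return;
    }
    navigator.geolocation.getCurrentPosition(
      (position) =>
        resolve({
          lat: position.coords.latitude,
          lng: position.coords.longitude,
        }),
      (error) => reject(error), // user declined or the lookup failed
      { enableHighAccuracy: false, timeout: 10000 }
    );
  });
}
```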

Full report

Next Steps

Once the CUTGroup test was completed, we filed issues in the GitHub repository for the top challenges our CUTGroup testers faced.

Because this project was meant to be a proof of concept for Open Referral, illustrating how a resource-based API can be used to quickly build prototypes, further work will shift to Code for Miami volunteers during the Monday Open Hack events.

David James Knight helped out with the publication of this report.

