Having conducted the last of our thirteen public user tests in November 2017, we wanted to summarize our thoughts and offer some suggestions on how other municipal and civic organizations can run user testing programs of their own.
A Civic or Community “User Testing Group” will not solve the “quality versus quantity” testing debate.
Let’s get that out of the way first: there seems to be a debate about whether it is more effective to interview a few people in depth versus our method of interviewing large numbers of people. A member of one municipality expressed concern that some individuals might use CUTGroups as an opportunity to unfairly influence the design of their website. Or they could simply be sharing their opinions, depending on whether we take feedback at face value.
I don’t think we solved the quality-versus-quantity question, but the program did show that a variety of approaches can work, not only for the partners interacting with their constituents but also for residents, since it offered a chance to humanize city staff.
You can do a surprising amount with “zero-stack development” versus building proprietary software.
When we started our project, we learned that Smart Chicago had worked with Blue State Digital to build Patterns, an open-source software platform built specifically for Smart Chicago that acted as a Customer Relationship Management (CRM) tool for residents. While Patterns was open source, the software was relatively difficult and time-consuming for us to localize for a variety of reasons.
But we needed infrastructure to establish a working base quickly so we could hit the ground running during our sixteen-month program period. Fortunately, a variety of tools are available at little or no cost.
We had already been using Slack for team communications, drawing on our experience in the tech industry and in organizing other groups. Beyond that, we relied on:

- Zapier to automate workflows, such as taking new entries from our website and adding them to a Google Spreadsheet
- Twilio to integrate text messages
- GitHub Pages to build static websites
- Inbox by Zendesk to support inbound communications
- Typeform for mailing lists, notifications, proctor instructions, and recording test results

We chose Typeform over Wufoo, the form management software Smart Chicago used, because of its user interface and the positive experience Ernie had with Typeform while he was a Fellow at Code for America. Typeform also let us save answers in CSV form; for a couple of tests, Ernie used a script to parse those responses into a more digestible format, so whoever wrote the report could find a particular resident’s answers more efficiently.
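We haven’t published the exact script, but a minimal sketch of the idea might look like the following. It reshapes a wide, Typeform-style CSV export (one row per respondent, one column per question) into a per-respondent summary that reads top to bottom; the `Respondent` column and the sample data are hypothetical.

```python
import csv
import io

def reshape_responses(csv_text):
    """Turn a wide CSV export (one row per respondent, one column per
    question) into a per-respondent summary that is easier to scan
    when writing up a test report."""
    reader = csv.DictReader(io.StringIO(csv_text))
    summaries = []
    for row in reader:
        # Assume the export has a respondent identifier column
        # (hypothetical name); every other column is a question.
        name = row.pop("Respondent", "(unknown)")
        lines = [f"## {name}"]
        for question, answer in row.items():
            lines.append(f"- {question}: {answer or '(no answer)'}")
        summaries.append("\n".join(lines))
    return "\n\n".join(summaries)

# Made-up example data in the assumed export shape:
sample = """Respondent,Was the site easy to navigate?,What would you change?
Alice,Yes,Bigger search button
Bob,No,Simplify the menu
"""
print(reshape_responses(sample))
```

The point is less the specific code than the workflow: exporting raw answers as CSV and mechanically regrouping them per resident saves the report writer from scanning a spreadsheet row by row.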
Overall, while a localized version of Patterns would have had great value for us, the alternative software stack we used worked fine for our needs.
Money is a great motivator to bring in people you wouldn’t usually reach through user tests.
An inexpensive Facebook ad attracted a significant number of our initial testers, supplemented by face-to-face promotion at farmers’ markets and local events. A considerable number of testers also applied through a free Craigslist post. On top of that, we hired proctors from the testing pool to conduct additional interviews when necessary.
Here is the back-of-the-napkin math we used: we hired proctors at $20 per hour, with one proctor assigned to each tester to guide the subject through the test and document the results. Each test took thirty minutes on average, and after each test we gave the tester a $20 gift card for their time. This does not count the staff and managers handling the event itself, the overhead of preparing proctor instructions, program administration, or reporting of test results.
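Using only the figures above, the direct cost works out to roughly thirty dollars per completed test. A quick sketch (the session sizes are illustrative, and staff time and overhead are excluded):

```python
# Back-of-the-napkin direct cost per test, from the figures above.
PROCTOR_RATE = 20.00   # dollars per hour
TEST_HOURS = 0.5       # thirty minutes on average
GIFT_CARD = 20.00      # given to each tester

def direct_cost(testers):
    """Direct cost of a session with `testers` participants,
    assuming one proctor per tester; overhead excluded."""
    per_test = PROCTOR_RATE * TEST_HOURS + GIFT_CARD
    return per_test * testers

print(direct_cost(1))   # 30.0 -- one test
print(direct_cost(10))  # 300.0 -- a ten-tester session
```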
Once folks came through the door, there was an honest effort to help.
Once testers got in the door, most seemed interested simply in having the opportunity to provide input on civic applications. We noticed we had regulars, folks who signed up for CUTGroup tests without fail. Even more surprising, a couple of those regulars were Miami-Dade County employees themselves, albeit from other departments. It was a welcome respite from the familiar “Miamians don’t care about their city” narrative you hear time and time again. Municipalities might therefore be able to solicit feedback from their residents without offering any monetary incentive, though we suspect money does induce more individuals to show up for a test.
Fail fast (both in the process and in the tests themselves).
Conducting an average of one test per month gave us an opportunity to quickly apply what we learned to deliver more value to our partners and a better experience for our testers. Not all of our tests went perfectly — we had a couple of tests where only two or three participants showed up and the proctors came unprepared. While that particular session “failed” in isolation, we used it as a learning moment and produced a very strong test the following session. Having frequent tests also allowed us to experiment with factors such as testing locations, days of the week, times of day, and testing type.
The idea of prototyping and iterating, both through the work of CUTGroup and through Ernie’s work as a Civic Technology Fellow, may also have helped embed user feedback within the City. Once we established basic models, the City of Miami applied lightweight versions to its current innovation processes. Iteration, according to Michael Sarasti of the City of Miami, has been “a central component of our digital service design model with [website vendor] OpenCities: Process mapping → creating services → testing services; Repeat.”
CUTGroup Miami would require an additional amount of effort — financial and otherwise — to become a self-sufficient program.
These things cost money, and it is money organizations are not always eager to spend. Our grant was for $100,000, of which we used around eighty percent, primarily on salaries, contractors, and gift cards.
I believe the people representing the City of Miami and Miami-Dade County were pleased with the work we did. I also think the partnerships were easy to create and maintain because funding came through the Knight Foundation rather than being procured by a government from a third party.
The Smart Chicago Collaborative ran the original CUTGroup. We don’t have the luxury of a similar structure here, but I do think there is inherent value in keeping projects like CUTGroup Miami going. As for how the project could continue, organizations could consider the following models:
- Looking into profitability models, including working with private and for-profit organizations and companies
- Selling user data, which, while controversial and ethically questionable, would be an option for sustainability
- Working with NPOs with fundraising capacity to adopt the CUTGroup Miami project
- Taking on the risk of running an organization similar to what Smart Chicago Collaborative did with the original CUTGroup in Chicago
While there was initial talk that one of the three original organizers of the grant would look into creating Open Miami as a way to keep the project going, this will probably never happen due to the resource constraints of the original grant writers.
In the meantime, we decided to sunset the project and open up our documentation as much as possible, in the hope that other groups wanting to run similar projects in the future can do so with greater ease.