Sprinters Without Borders, Part 2

Notes on adapting design sprint methods outside the US context

Part 2: Practice

How might we design a financial inclusion app that will be used by Somali refugees in Kenya?

As the senior product manager at aidx, I’ve been working on the answer to this question. In part 1 I wrote about the theories informing our design practices. What about the details? How does it work on the ground?

We recently spent a month in Nairobi implementing our approach to this design challenge. We ran a design sprint to build a prototype, conducted user testing, gathered lots of new data on the Kenyan financial ecosystem, and made some great new connections in Nairobi’s thriving tech and financial inclusion sectors. I’ll describe how the design sprint unfolded below.

Overview

Our overall approach was driven by the theories I outlined in part 1 and by human-centered design methods that tap into the existing cultural and technological context. We used these methods in Eastleigh, Nairobi’s biggest Somali neighborhood. They included ethnographic research, co-design, crafting personas from our interview data, and lots of camel milk tea. To incorporate design justice thinking, we selected familiar and accessible venues and brought in community members as experts and participating designers. And to abide by the “do no harm” principle, we spent a lot of time explaining why we were there and what we were trying to accomplish, and eliciting concerns, questions, and feedback, so that people understood our motivations and we understood how our activities might affect members of the community.

For our design sprint, we decided to work with and adapt the methodology from Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days by Jake Knapp, John Zeratsky, and Braden Kowitz. The methods described in the book were familiar because I had used similar tools and processes at MIT and in a prior startup, if not in such a compressed timeframe. But I was also aware that the book describes design sprints in Silicon Valley, not in Kenya, and, umm, things work a little differently in East Africa. I had previously worked in East Africa for several years, and that background definitely helped with planning and execution. I’ll note below where our activities diverged significantly from the book.

With these methods in mind, what follows is a breakdown of our design sprint: partly notes for our own future reference, and partly in case they help other sprinters without borders.

Background Work

It’s important to state that we weren’t starting from scratch. We had already conducted a lot of qualitative user research to arrive at the problems we wanted to address. We had used techniques like affinity analysis to turn our qualitative data into a wealth of insights about existing behaviors and the pain points and friction people encountered while trying to survive in difficult circumstances. We had created personas to help us keep our users’ lived experiences and circumstances in mind. And as we narrowed down hundreds of product ideas into a few of our best, we had made notes about the key questions and underlying assumptions behind those ideas.

The Team

The aidx team in Nairobi

We’re a small but scrappy startup currently bootstrapping our way toward big social impact. That’s a very different starting point than the Google Ventures-backed companies that form the real-world examples in Sprint, which recommends a team of seven for the week-long process. We’ll do cartwheels down Massachusetts Avenue when we hire our seventh employee. So assembling a team was our first area of modification.

We decided to hire some of the people we had interviewed during our previous qualitative research who seemed especially savvy about the complicated ways people send, receive, give, loan, borrow, and pool money. And we decided that the several days we would spend sketching possible solutions were the most important time to have them join us.

The Plan

Sprint is a snappy read and a tidy encapsulation of how to tackle certain design problems in just five days. But “snappy” and “tidy” don’t really describe life in Nairobi. Your 20 minute drive might turn into a two hour nightmare on a rainy day. A power cut might derail your carefully planned presentation. Things happen, and a flexible mindset is essential. We built in two days of flex time so we could adjust our sails to the wind.

We also stretched some of the processes out over a few days. There were several reasons for this. We were building a new product from scratch and needed to test several key aspects of the idea in a single sprint. And involving customers in the actual design of the prototype required time to get everyone up to speed.

Here’s what happened day-by-day:

Day 1–2: Brochure Feedback

We spent our first day presenting customers with two product brochures outlining different offerings, looking for the ideas that generated the most interest so we could narrow down product features. Despite our best efforts to line up these interviews in advance, we ran into problems meeting up with several participants and completed only two of the five interviews we had hoped to conduct. So we decided to use one of our flex days to get the rest of the interviews and start with a good data set. On the second day we successfully interviewed three more people. We also collated our interview notes, summarized our findings, and used the data to narrow down the product features we were planning to include in our prototype.

Day 3: Journey Map, Selecting Design Targets

Our process of creating a journey map actually resulted in two maps: one that detailed the process of installing the app and creating an account (which included gathering user information to meet Know-Your-Customer regulations), and a second that outlined the process of using the app. We identified the most important moments in these two maps, which became the three targets we would focus on during our three days of solution sketching. We felt the map was a good representation of the process because it was based on the qualitative research we had conducted over the past year, but we were also keen to check the logic of the map with our co-design participants when they arrived the following day.

Day 4: Sketching User Identity

We grew our team at this point by including people from the community we hoped to serve. After our co-design participants arrived, we introduced them to the process we were using, explained what we’d done already, and what activities we had planned for our time together. We spent about an hour getting their feedback on our maps and their expert opinions on our first design target, which was about user identity. We documented key insights on a whiteboard.

Then we each selected a product or service that was an interesting example of how to handle user identity, discussed what made it work, and made notes about the elements, processes, and user flows. This was a really helpful exercise, as it turned out to be an opportunity to do a deeper dive into a couple of products that might not be at the top of people’s minds in the US, but which were common in the Kenyan market, and to learn about them from the perspective of real users. We spent about 15–20 minutes on each product to make sure we understood it well.

Finally, we followed the sketching process outlined in the book, including the recommended time constraints on each activity. By the end of the day we had each created a solution for the user identity design challenge.

Day 5: Sketching Group Behavior

A solution sketch on a wall with heat map stickers, tagged with key ideas

The next day our co-design participants returned. We put the solution sketches for user identity up on the wall and did a heat map exercise to get a sense of which parts of these solutions were most interesting, then voted to pick the strongest idea.

Then we moved on to the next design challenge, which was about the group rules that are used in informal money pooling. We asked our experts lots of questions about how this worked in their groups, and made lots of notes on a whiteboard. We then each identified and presented a product that handled the idea of rules in an interesting way, and we drew the design elements and user flows on a whiteboard. Then we went back to sketching solutions. By the end of day 5 we had each drawn screens demonstrating our best idea for how this could be handled in an app.

Day 6: Sketching Money In, Money Out

On day 6 our co-designers returned one more time and we used the same structure for the day, reviewing our solution sketches on group rules from the day before and using a heat map and voting process to determine the strongest ideas.

Then we moved on to our third and final design challenge, which was about getting money in and out of the app in a transparent way. Our experts were again invaluable in describing how this works currently in different groups, and our product research gave us lots of good ideas to build from. We all sketched possible solutions imagining how this might work.

Reflections on the Sketch Process

None of us had done all of the solution sketching exercises from the Sprint book before our own sprint, and we saw a clear progression in the quality of our solution sketches, which looked increasingly like real app screens with each passing day. When we run another design sprint in Nairobi, it will be tempting to work with the same co-design participants, since we have all progressed up the learning curve.

Because we had three design targets it also worked really well to modify the schedule of activities suggested in Sprint and move the “ask the experts” session into the same day as the product demonstrations and solution sketching so it was fresh in our minds.

We had been worried that our co-designers would think the process was strange and would struggle to connect with the exercises, but they were enthusiastic, engaged, and excited to contribute to an app they might use. The three community members we brought in were wonderful domain experts, and we got to see how they would design their own solutions to the problems they identified.

There are definitely strong benefits to the co-design methods we added to the Sprint process, especially if the team doesn’t already include potential users. We will absolutely use this technique again for future sprints.

Day 7: Decisions and Storyboarding

After all of the sketching, we were excited to start stitching the best ideas together into a storyboard. Once we had taken time to heat map and vote on the money in/out solution sketches, the storyboard came together surprisingly fast. We ended up with 29 panels, a few of which represented several screens, well beyond the 15 or so recommended in Sprint. We also knew we wanted to build the prototype in English, then translate the text into Somali so we would have a version for testers who might prefer their native language. On top of these challenges, we were down to three people to build the prototype, do the Somali translation, and create the user testing script. Good thing we had two days! It was still a lot to bite off, but we had prior experience building wireframes for mobile, so we got some sleep and prepared to dive in the next day.

Day 8: Start Building the Prototype

This day was a blur, frankly. It was head-down time cranking out screens and making a bunch of small, quick decisions. We used Keynote with the slides sized to an Android app screen, and we used Marvel to stitch the screens together using hotspots. We divided up the tasks and checked in with each other regularly to make sure we were more or less on track and not stuck on anything. By the end of the day we had around 50–60 screens built in Keynote. We uploaded some of the screens to Marvel and added some hotspots and transitions as a sanity check that it would all come together as expected the next day.

Day 9: Finish Building the Prototype, Walkthrough, Write the User Testing Script

We finished up the last of the English-language screens (there were 70 total!) early in the day, then one of us started adding the hotspots in Marvel while another person worked on the Somali translation. We used Google Translate, which was going to make for a rough translation, but we hoped it would be “good enough” since we didn’t have another quick option. One nice thing about Marvel is that if you replace a screen with an image that uses the same filename, the hotspots are preserved. So we duplicated the entire 70-screen English version into a second Marvel project, temporarily gave our Somali Keynote file the same name as the English one, exported 70 new screen images, and swapped in the new images containing Somali text; no hotspots had to be recreated. It felt like a nice little piece of magic.
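We did the filename juggling by hand, but it’s the kind of step a small script could handle in future sprints. Here’s a minimal sketch in Python, with hypothetical folder names, assuming both Keynote decks export their slides as numbered images in the same order:

```python
from pathlib import Path
import shutil

# Hypothetical folder layout: each Keynote deck exports its slides as
# numbered PNGs, in the same order for both language versions.
english_dir = Path("export/english")
somali_dir = Path("export/somali")
output_dir = Path("export/somali_renamed")
output_dir.mkdir(parents=True, exist_ok=True)

english_screens = sorted(english_dir.glob("*.png"))
somali_screens = sorted(somali_dir.glob("*.png"))
assert len(english_screens) == len(somali_screens), "screen counts must match"

# Give each Somali screen the filename of its English counterpart, so
# that swapping the images into the duplicated Marvel project preserves
# the hotspots keyed to those filenames.
for english_png, somali_png in zip(english_screens, somali_screens):
    shutil.copy(somali_png, output_dir / english_png.name)

print(f"Renamed {len(somali_screens)} screens into {output_dir}")
```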

The team member who would be running the user testing sessions had unexpectedly been out of the country that week, but had thankfully arrived the day before and joined us. We did a walkthrough of the app as a team, and it was really exciting to see it on a phone with functional buttons. Then we talked through the tasks that would form the backbone of the usability test and wrote a script. We were ready for user testing!

Day 10–11: User Testing, Parts 1 and 2

Our venue for user testing, located close to where our testers live

While we had been cranking away at the previous week of work, we had hired a community member to help us recruit participants for the user testing sessions. He had helped us find people to interview for our previous qualitative user research, so we had a good relationship with him. We gave him some criteria for the kind of people we wanted to test with, and checked in with him throughout the week to see how recruitment was going. We had also secured a venue for user testing as close as possible to Eastleigh, after we couldn’t find one in the neighborhood itself. Traveling around the city can be very time consuming, so we wanted to keep travel time to a minimum for our participants.

Given the challenges of transportation, we hired a taxi to pick up our testers, and scheduled the tests over two days instead of one. This turned out to be a good decision. On the first day we got to the venue on time and started setting up, but our recruiter called and told us that the first tester was going to be over an hour late. It turned out the delay was to our benefit, because we were struggling with our technical setup and needed every extra minute to get it working!

The Setup

We were in two small meeting rooms in a hotel on the east side of Nairobi’s central business district. We had done a speed test on the WiFi connection earlier that week and thought it would be fast enough. We didn’t have a document camera to point at the phone our testers would be using, but figured we could mount a mobile phone on a GorillaPod, point it at the test phone, and stream video to the next room. We also planned to set up a laptop facing our testers for a second video stream so we could see their reactions as they used the prototype.

Makeshift “document camera” pointed at the prototype and laptop for second video stream

Our first challenge was video rotation. When we positioned the “document camera” phone above the testing phone, the resulting video was either upside down or rotated 90 degrees, and we struggled to find a way to lock the phone’s rotation so it was consistent. We finally settled on positioning the “document camera” phone to the right of the user testing phone so the video was consistently oriented the right way.

Our next challenge was the video stream itself. We were using FaceTime for the mobile phone video stream, but the connection kept dropping after 10–15 minutes. Eventually we tried Google Hangouts video instead, and managed 20 minutes without dropping before our first tester arrived, so we decided to roll with it.

We also had Skype open to handle the video stream showing the tester’s face. In the monitoring room we realized the sound was too faint to hear unless people really projected their voices, so we switched the audio over to the “document camera” phone since it was closer to the tester, which worked much better. A few minutes before our first tester arrived we finally had a working setup, and, miraculously, the video only dropped a few times over the course of two days!

By the afternoon on the second day we had finished five user tests, and it was time to talk over the results and what we had learned.

Reflections on User Testing

We were a little worried after our first round of user testing. Four of our five testers had fairly low literacy levels, and our Somali translation was not very good, so they were struggling to understand what various screens and buttons would do. Almost everyone failed to get through the four tasks we had set up without some hints and guidance. From the observation room, we could see a look of relief wash over people’s faces when the test was over. In between testers we discussed the situation. One issue was that the testers didn’t match our desired profile very well. We realized that our recruiter had freelanced quite a bit and brought in people who didn’t fit our criteria.

At first we were concerned that this meant we had failed to effectively test our prototype, but as we talked through the results, we started to realize this was a useful opportunity to push ourselves to make the app easier to use in low-literacy and non-native-language situations. Yes, the results were poor, but we could use our findings to fix many of the issues and try again. We had inadvertently tested in a more challenging context and with a more diverse group of users than we had intended, but on reflection it felt like a great opportunity, not a setback.

Day 12: Organizing Test Results

Notes on results of user testing, organized by tester and task

We had a lot of sticky notes from the user testing sessions noting things that worked and didn’t work. After sorting them by task and user and putting them all up on the wall, it was still visually and cognitively challenging to see the key findings. So we created a spreadsheet and started pulling sticky notes off the wall one by one, keeping a tally of successes, failures, and neutral observations. By the time we finished this process we could clearly see what had worked and what had failed in our prototype. There were a lot of areas that needed improvement, but it seemed like incremental improvements would solve the problems, not a completely different approach.
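Conceptually, the tallying amounted to counting (task, outcome) pairs, one per sticky note, something like this toy sketch (the tasks and observations here are made up for illustration; our actual spreadsheet also tracked individual testers):

```python
from collections import Counter

# Hypothetical observations transcribed from sticky notes: each is a
# (task, outcome) pair, where outcome is "success", "failure", or "neutral".
observations = [
    ("create account", "failure"),
    ("create account", "neutral"),
    ("send money", "failure"),
    ("send money", "success"),
    ("check balance", "success"),
]

tally = Counter(observations)

# Print one row per task, like the rows of our spreadsheet.
for task in sorted({task for task, _ in observations}):
    counts = {o: tally[(task, o)] for o in ("success", "failure", "neutral")}
    print(f"{task:15} {counts}")
```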

Day 13: Solving Problems and Updating the Storyboard

We spent the next day working through the problems from user testing, now organized in our spreadsheet, and discussing how we could fix each one. We thought we might need to limit ourselves to the low-hanging fruit, but found we were in agreement about how to fix almost everything, so by the end of the day we had an action plan for what to change in our prototype.

Day 14: Updating the Prototype

We made the changes in Keynote and built a new prototype from scratch in Marvel. After discussion and feedback from a native Somali speaker, we better understood just how poor our Somali translation was, and we opted to test with the English version regardless of our testers’ language preference.

We also simplified like crazy. For example, we pared down the words we used to describe buttons and actions. We added icons in key spots to hint at what each button did in case the language wasn’t clear. And we added a big green button that always meant “next” so that users would have an easier time moving through the account setup process. We did a run-through of the new version with the whole team and updated our user testing script accordingly. We were cautiously optimistic that the results would improve, but we weren’t sure how much.

Day 15: Test Again

One of our testers

We had asked our recruiter to schedule five more testers, and we explained the problems with the previous group. This time we were more insistent that the people he recruited meet our criteria. We decided to be ambitious and try to fit everyone into a single day, no matter how long it took. We rented the same meeting rooms in the same hotel, and this time we knew what our technical setup would be. And the day went incredibly smoothly. People arrived on time. The video streams held up. But more importantly, testers loved the new version of the app. Previously we had seen creased brows of concentration and people struggling with basic tasks. Now our testers were breezing through the app, commenting on how easy it was, and actually smiling while they used it! Some of that was due to having testers who were more confident with smartphones, but if it hadn’t been for all the struggles of our first testers, we wouldn’t have been as ruthless about simplifying things.


It’s impossible to overstate just how valuable it is to watch someone try to use a prototype, and we learned so much during the process of building and testing it. For example, we learned that some people are uncomfortable taking a “selfie” as a form of identification, that not everyone knows their birthdate, and that people sometimes perform smartphone tasks by memorizing the steps, not by reading button names.

We also found that our modifications to the process described in Sprint mostly worked in our favor, although we still wish we’d had a bigger team. The book gave us a solid starting point, and with some modifications here and there (along with some thinking about design theory) it can definitely work for “sprinters without borders.”

By the end of the design sprint we felt even closer to our future users than we had before, and felt an even stronger sense of solidarity. We sat inside homes, met children, and sometimes shared a meal. We heard about their challenges and their hopes. And we worked side-by-side with some of them to imagine ways to deal with those challenges while strengthening their communities. There’s no greater value than that.