Onto the Materialization Phase

Christine Chen
Designing for Inclusion in Healthcare
8 min read · Nov 1, 2020

Oct. 31, 2020 (10.25.2020–10.31.2020)

Moving forward from our midterm reviews, we went into materializing and honing the details of our proposed design solution: a crowdsourcing app for the Pittsburgh community. We started by thinking through and revising the experience of our mobile app, because usability and function always come before visuals in achieving a great user experience.

Competitive Analysis of Existing Crowdsourcing Applications

The first thing we did was a competitive analysis of existing crowdsourcing applications, to understand the pros and cons of each and learn from them. We critiqued them based on our experience of going through each one, and on the 10 usability heuristics for user interface design developed by Jakob Nielsen.

Blindways

Pros:

  • Works with screen readers
  • Lets users find the closest bus stop nearby
  • “Join later” option lowers the threshold for users to be willing to explore the app (no need to create an account right away)

Cons:

  • Too much context right away (uses images)
  • Doesn’t allow you to search for a place or bus stop by name; you can only search bus route numbers, so you can’t find the closest bus stop near a specific place through search

Be My Eyes

Pros:

  • Walkthrough and test during the onboarding process
  • Notifications
  • Clear status
  • Stories — a forum where users share their experiences with the app

Cons:

  • Requires an account for usage (but this makes sense for the app’s use cases)

Google Maps

Pros:

  • Doesn’t require you to create an account until you want to save information
  • Lets users save their favorite locations
  • Can see all of a user’s reviews on their profile page
  • Gives badges/levels to users based on their contributions (an incentive; Facebook pages and groups do similar things)
  • Can search for categories of nearby places
  • Filters (e.g., based on distance)
  • Simple interface with clear hierarchy
  • Clean, inviting colors
  • Consistent branding

Cons:

  • Complicated app (does so many things that it’s a bit overwhelming)
  • Only allows crowdsourced reviews if the location is “registered” (which makes sense for Google’s case but might not work for ours)
  • Can’t tell the popularity of a place (for reviews) from the overview

Nextdoor

Pros:

  • Clear visibility of system status
  • Recognition rather than recall
  • Asks for consent/rules before entering the app
  • Lets you know that it needs to use your information

Cons:

  • Lags a lot
  • Too many features, which makes it confusing

AccessNow

Pros:

  • Clear explanation/directions on the landing page
  • Location pins are color-coded (red: not accessible, yellow: partially accessible, green: accessible)
  • Login not required until you want to rate the accessibility of a place

Cons:

  • The bottom navigation bar has categories that mix different functionalities, which makes it confusing (hierarchy issue)
  • Can only add an accessibility review through search

Crowdsourcing Experience

Because our design solution centers on a crowdsourcing app, we also conducted research on what makes a great crowdsourcing experience, looking into several online articles and research papers on the topic.

Business Process Crowdsourcing: Theoretical Framework to Support the Decision to Crowdsource & Guide Our Designs

A process model for business process crowdsourcing (BPC) establishment

We learned about the business process crowdsourcing model, a theoretical model that explains how a crowdsourcing process operates, from researcher Nguyen Hoang Thuan’s paper “Business Process Crowdsourcing: Model and Case Study.” This model explains the components necessary for the three stages of creating a crowdsourcing platform. “Stage 1: Decision to Crowdsource” in this framework depends on four factors: task properties, people, management, and environment. Based on these four factors, we reevaluated our decision to use crowdsourcing as part of our solution, to see whether crowdsourcing really is a suitable strategy for our approach. After evaluating these properties, we were glad to find that the evaluation supports the use of crowdsourcing in our solution.

A framework that supports the decision to crowdsource

The Motive-Incentive-Activation-Behavior Model of Crowdsourcing

From the research paper “A Methodological Framework for Crowdsourcing in Research” by Michael Keating and Robert D. Furberg, we learned of the motive-incentive-activation-behavior model of crowdsourcing.

Understanding how motivations can be influenced and activated through intrinsic and extrinsic incentive pathways is a critical aspect of designing effective crowdsourced interventions. Rosenstiel (2007) provides a simple model to describe the activation of human behavior on the basis of motive-incentive-activation-behavior, or MIAB.

The Motive-Incentive-Activation-Behavior Model of Crowdsourcing

In accordance with the Motive-Incentive-Activation-Behavior Model (MIAB), in a specific situation a suitable incentive will cause an individual’s corresponding motive to be activated and lead, as a consequence, to the manifestation of a particular behavior.

Based on this model, an incentive is needed to activate users’ decision to participate in the crowdsourcing activity (activation refers to an individual’s decision to initiate a behavior), which then leads to the final outcome: the input of information.
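To make this chain concrete, here is a minimal sketch (in TypeScript) of how the model could be expressed; the motive and incentive names are hypothetical placeholders for illustration and are not taken from the paper.

```typescript
// A minimal sketch of the MIAB chain: a behavior occurs only when an offered
// incentive matches one of the individual's motives and activates it.
// Motive and incentive names below are made up for illustration.

type Motive = "altruism" | "socialRecognition" | "curiosity";
type Incentive = "communityBadge" | "thankYouNote" | "newPlacesToExplore";

// Which motive each incentive speaks to (our assumption, not from the paper).
const motiveFor: Record<Incentive, Motive> = {
  communityBadge: "socialRecognition",
  thankYouNote: "altruism",
  newPlacesToExplore: "curiosity",
};

// Activation: the user decides to act if any offered incentive matches a motive they hold.
function activated(userMotives: Motive[], offered: Incentive[]): boolean {
  return offered.some((incentive) => userMotives.includes(motiveFor[incentive]));
}

// Example: a neighbor motivated by altruism contributes when thanked, not when badged.
console.log(activated(["altruism"], ["communityBadge"])); // false → no behavior
console.log(activated(["altruism"], ["thankYouNote"]));   // true  → contributes an input
```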

Incentive Mechanisms for Crowdsourcing Platforms

To understand incentive mechanisms better, we looked into research on successful incentive mechanisms for crowdsourcing platforms, so that we could design to lower the threshold of participation for users.

According to the paper “Incentive Mechanisms for Crowdsourcing Platform,” there are seven types of incentives (which can also be grouped into the two main categories of intrinsic and extrinsic motivation):

  1. Learning/personal achievement
  2. Altruism
  3. Enjoyment/intellectual curiosity
  4. Social motives
  5. Self-marketing
  6. Implicit work
  7. Direct compensation

Based on our previous research insights, the incentives for existing community members and new neighbors to participate would be learning, altruism, enjoyment/intellectual curiosity, and social motives.

We also looked into incentive mechanisms for crowdsourcing platforms to learn how to incorporate these incentives into our crowdsourcing experience. The following three are the ones that would work for our platform:

  1. Social incentive mechanism: Social motives play a big role in initiating participation. To trigger the social motive, social incentive mechanics can act as enablers of social interactions and give users a chance to gain status in the community (one idea could be giving users a “level status” based on their input; Google reviews are an example of this). The application would also support socialization between users.
  2. Gamification: Gamification is defined as “the use of game design elements in non-game contexts in order to improve user experience and engagement.” Game design elements, aka “game mechanics,” include self-elements such as points, achievement badges, and levels. These mechanics let users develop their own skills, be creative, and feel competent while experiencing an often social and enjoyable activity, and motivate them by rewarding their efforts and providing appropriate and timely feedback. Thus, gamification corresponds well to intrinsic motives such as enjoyment and social recognition. (An example would be Duolingo, which lets users gain levels as they progress through a course.)
  3. Reputation systems: Reputation systems are commonly used to increase user participation and the quality of contributions. In this mechanism, the platform’s users rate other users based on their behavior, and the reputation system combines these ratings to form cumulative assessments of their reputation. This mechanism could be used for quality control of the platform’s inputs.
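As a rough sketch of how the gamification and reputation mechanisms could fit together in our app, the snippet below rolls peer ratings up into a reputation score and maps contribution counts to a level badge. The field names, thresholds, and badge names are placeholders, not final design decisions.

```typescript
// A rough sketch, with placeholder names and thresholds, of how reputation and
// gamification mechanisms could work together in the app.

interface Contribution {
  authorId: string;
  peerRatings: number[]; // e.g. 1–5 stars given by other users
}

// Reputation: the average of all peer ratings across a user's contributions.
function reputationScore(userId: string, contributions: Contribution[]): number {
  const ratings = contributions
    .filter((c) => c.authorId === userId)
    .flatMap((c) => c.peerRatings);
  return ratings.length === 0
    ? 0
    : ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
}

// Gamification: level badges based on contribution count (threshold values are placeholders).
function levelBadge(contributionCount: number): string {
  if (contributionCount >= 50) return "Neighborhood Guide";
  if (contributionCount >= 10) return "Regular Contributor";
  return "New Neighbor";
}

const sample: Contribution[] = [
  { authorId: "u1", peerRatings: [5, 4] },
  { authorId: "u1", peerRatings: [3] },
];
console.log(reputationScore("u1", sample)); // 4
console.log(levelBadge(sample.length));     // "New Neighbor"
```

A combined metric like this would also need the quality-control and placement considerations noted below, so that the score supports contributions rather than becoming the goal itself.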

Design Considerations

After conducting the competitive analysis and research on crowdsourcing mechanisms, we noted several things that should be taken into consideration during the design process:

  • A compelling, intuitive, and simple UI, as well as appropriate feedback that indicates a user’s progress toward mastery, can encourage and sustain user participation.
  • Design decisions involving the incentive mechanisms should be carefully considered. The presentation and placement of incentive-mechanism elements, such as reputation scores and visual indicators, should be well balanced; too much focus on them would make users stray from the true purpose (pursuing points might become participants’ primary goal, and merely incorporating game elements into a task does not necessarily make it more engaging or interesting).
  • The reputation metric should be chosen according to the goals of the system and the desired user behaviors.

Mobile App User Flow

With an understanding of how we would implement a crowdsourcing system into the experience we are designing, we moved on to laying out the user flow of the mobile app.

App’s purpose: see the city from community members’ perspectives

User flow #1: Existing community member

  • Open the app
  • Allow access to location
  • Enter credentials (or skip for now)
  • Brief intro of the application’s uses and value proposition
  • Enter home screen
  • (alt 1)
    - See their location on the map, along with other people’s inputs
    - Add an input (including a recording), edit the input, and submit
    - Walk around and see other people’s inputs for the location
    - Like other people’s reviews
  • (alt 2)
    - Search for a location to provide input
User flow #2: New Neighbors

  • Open the app
  • Allow access to location
  • Skip account creation for now
  • Land on the home screen
  • Click on popular inputs
  • Create an account to favorite one

We didn’t clarify this earlier, but there would be two types of locations that users would be able to add inputs for in our system:

Two types of locations

  • Notable registered locations (regular places on the map such as stores, restaurants, museums, etc.): users can add their inputs freely.
  • Notable community locations that are not registered places: for example, a bench under a specific tree in a park that is a good spot to sit down and read. It is not a registered location, but it may be a notable place for the community (rather than for an individual), so this input/pin will only become visible once the number of inputs for the location reaches a certain threshold within the system (see the sketch after this list).
  • When inputting these locations, users add visual/sensory hints and cues for visually impaired neighbors.
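Here is a minimal sketch of that visibility rule, assuming a hypothetical pin structure and a placeholder threshold value; the exact number would come out of later testing rather than this sketch.

```typescript
// A minimal sketch of the two location types and the visibility rule described above.
// Field names and the threshold value are placeholders, not final design decisions.

type PinKind = "registered" | "community";

interface LocationPin {
  kind: PinKind;
  name: string;
  inputs: string[]; // crowdsourced inputs; in practice each would carry sensory hints/cues
}

const COMMUNITY_PIN_THRESHOLD = 3; // hypothetical minimum number of inputs

// Registered places are always shown; community pins appear only once enough
// neighbors have contributed, so they reflect the community rather than one person.
function isVisible(pin: LocationPin): boolean {
  if (pin.kind === "registered") return true;
  return pin.inputs.length >= COMMUNITY_PIN_THRESHOLD;
}

const bench: LocationPin = {
  kind: "community",
  name: "Reading bench under the oak tree",
  inputs: ["Shaded, quiet spot", "Bench has a low back rail"],
};
console.log(isVisible(bench)); // false until a third neighbor adds an input
```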

Visual

The main focus of this week was to think through the backend of our app to solidify its backbone. However, we also briefly discussed the visuals, since we have a print-based part to our campaign as well (which would lead users to discover our application). One idea was to design around the visual concept of a puzzle, since users contributing to the app is similar to piecing together a puzzle to create a larger, more inclusive community. We wish to create characters with geometric shapes and bright colors to hint at diversity.

Next Steps

With the information and backend mechanisms thought out and a full understanding of the constraints, limits, and abilities of crowdsourcing, we are now ready to move on to honing the screens in low-fidelity form with the paper prototyping technique, and to evaluating multiple versions of them with participants. We will do paper prototyping through parallel prototyping, where each of us ideates the screens separately to come up with more distinct ideas for broader exploration. As we learned from our other HCI courses, the key to designing great, usable digital products is to run as many cycles of design and iteration as possible in low fidelity before creating high-fidelity prototypes. Creating high-fi prototypes early on is time-consuming and might narrow our ideation, as it is easy to get attached to high-fi designs. Also, when doing usability testing with high-fi screens, participants might evaluate the visuals too much, when we really want to test the functions first.
