Loading… 50% Complete!

Simran Jobanputra
CMU MHCI Capstone 2020: Gov AI

--

With this post we officially celebrate the halfway point of our MHCI experience — crazy, isn’t it? The past few weeks have been reflective. As a team, we have been mapping out what we have accomplished thus far, using critique and feedback from our spring presentation to tweak our project trajectory, and thinking about how these trying times of rising unemployment make our project even more relevant than when we started off in January.

As we continue into our next phase of research and design, our team is still guided by the delicate balance between accomplishing user goals and client goals. How do we keep helping as many humans as we can have their basic needs met while working with an emerging technology solution — something that our client can continue to use in their future business model?

Enabling Others to Hear Voices of Need

We ended the spring semester with a bang — presenting our project ‘Increasing Access to Social Benefits through Design.’ We had a great turnout and some thought-provoking questions from our audience!

[our team successfully completing our final spring presentation — remotely, of course!]

In our presentation, we covered our research efforts and how we translated those efforts into design decisions. Our approach had two different parts:

Part I: Understanding the application process through many different avenues

We discussed the different stakeholder groups we spoke with — including, but not limited to, government stakeholders, government tech companies, and benefits applicants — through interviews, observation, and guerrilla research. We also held a focus group that was a pivotal point for us in understanding our user needs. We were able to boil down our extensive research into the following six insights:

01 Lack of transparency in decision making leads to delays and misunderstandings

02 Misconceptions about eligibility disincentivize applicants

03 Societal stereotypes about receiving benefits negatively impact recipients

04 Applicants feel they must go out of their way to manage their own cases

05 Technology & prior experience don’t guarantee the process will be easier

06 Applicants want agency in the process & support when feeling stuck

Part II: Prototyping using a ‘slice of the pie approach’

We also presented our “slice of the pie” approach, in which we focused on students as a subpopulation to show how our screening tool could parse complex policy and enable the maximum number of applicants to be screened. We chose to focus on students for this initial prototype because students can be lead users in their communities — they can inform and assist coworkers, friends, and relatives — and they are avid users of technology, with 39.3% of millennials using voice tech at least once a month.
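As a concrete illustration of what “parsing complex policy” can look like inside a screening tool, here is a minimal sketch that encodes eligibility rules as data and checks an applicant against them. The field names, dollar thresholds, and student rule below are illustrative placeholders, not actual SNAP policy:

```python
# Minimal policy-as-data eligibility screening sketch.
# All rules and dollar thresholds are illustrative placeholders,
# NOT actual PA SNAP policy.

from dataclasses import dataclass

@dataclass
class Applicant:
    household_size: int
    monthly_gross_income: int  # dollars per month
    is_student: bool

# Hypothetical gross income limits keyed by household size.
INCOME_LIMITS = {1: 1354, 2: 1832, 3: 2311, 4: 2790}

def screen(applicant: Applicant) -> list[str]:
    """Return reasons the applicant may be ineligible (empty list = likely eligible)."""
    reasons = []
    limit = INCOME_LIMITS.get(applicant.household_size)
    if limit is None:
        reasons.append("household size outside screener's supported range")
    elif applicant.monthly_gross_income > limit:
        reasons.append("gross income above the limit for this household size")
    if applicant.is_student:
        # Students often must meet extra criteria (e.g., work hours, work-study).
        reasons.append("student status: additional exemption criteria apply")
    return reasons
```

Keeping the rules in a data table rather than hard-coded branches is what would let a tool like this extend to new subpopulations or new programs without rewriting the conversation logic.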

Kicking off the Summer with an Inspiring Vision

Hosting the visioning session with our client raised a lot of questions, both when looking out at the far future of Gov AI’s business goals and when considering short-term goals for the capstone semester. A key point in our discussion was the possible extension to other segments of the population, as we have previously focused on students. In terms of program scope, we discussed the possibility of integrating other programs such as LIHEAP, with SNAP as a starting point. The team also honed in on how we can make the screening tool more advanced to better meet user needs, especially for users who only own mobile phones. Moving forward, we want to deliver an experience that carries the value of conversational design even through SMS-based interactions on mobile phones.

Using Secondary Research to Answer Pressing Questions

From skimming articles to perusing online sources, we dove deeper into how conversational interfaces and AI can improve citizens’ interactions with the government. Interestingly enough, we realized that the government is investing more in conversational toolkits, such as virtual assistants like Emma, a chat-based conversational interface that answers frequently asked questions. Alexa skills have also appeared in the public sector, enabling citizens to submit 311 requests. We also looked into other products like mrelief, a non-profit technology startup with an SMS conversational interface that helps families and individuals obtain SNAP benefits. It covers the end-to-end spectrum, guiding people from eligibility screening through the application and on to recertification. What was eye-opening was that it can complete an otherwise 20+ page, paper-based application over an 8-minute SMS conversation.

mrelief, an SMS-based screening platform

As a team, we realized that smartphones and mobile phones are the most common device types (rather than, say, smart speakers), so to broaden the range of users who can access the conversational flow, we decided to leverage conversational interfaces that are SMS-based (via text message). At the end of the day, an Alexa skill alone is not enough, and we want to deliver an omnichannel approach. Inspired by our secondary research, we are currently ideating on different use case scenarios and how usage context can shift across device types. This raises the question of how we can ensure consistency across these different device types.
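One common way to keep the conversation consistent across device types is to maintain a single channel-agnostic dialogue core and let each channel (voice, SMS, web) render the same prompts in its own style. A minimal sketch, with entirely hypothetical questions and function names:

```python
# Sketch: one channel-agnostic dialogue core, thin per-channel renderers.
# The questions and names here are hypothetical placeholders.

QUESTIONS = [
    ("household_size", "How many people live in your household?"),
    ("income", "What is your household's monthly income before taxes?"),
]

def next_prompt(answers: dict) -> str:
    """Return the next unanswered question, or an empty string when screening is done."""
    for key, text in QUESTIONS:
        if key not in answers:
            return text
    return ""

def render_sms(prompt: str) -> str:
    # SMS: keep it short, with explicit reply instructions.
    return f"{prompt} Reply with a number."

def render_voice(prompt: str) -> str:
    # Voice: spoken phrasing can be more conversational.
    return f"Okay. {prompt}"
```

Because every channel draws from the same question list and answer state, a user hears the same screening conversation no matter which device they pick up, and only the surface wording changes per channel.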

Building empathy into voice interactions

Building a voice interaction that feels human has been a goal of our client and of the team since the start of the project, so we sought out secondary research on building voice agents that display empathy. Specifically, in a 2013 article, a team of researchers tested robot voice pitch, which (along with pitch range, volume, and rate of speech) has been shown to be indicative of individual characteristics. People who speak more quickly with higher pitch, higher volume, and a wider pitch range are more likely to be extroverted, while the opposite is true for introverts. In addition, voice pitch can convey maturity: those with a lower pitch are seen as more mature, while higher pitch is associated with immaturity and greater emotionality. Because of this, lower pitch can also be associated with empathy, which is generally seen as coming from a place of greater emotional maturity in understanding others’ situations and feelings. Further, studies have found that impressions of those with lower-pitched voices tend to reflect lower agreeableness and warmth but higher dominance and assertiveness.

The authors’ findings supported the hypothesis that lower pitch would be perceived as higher in trustworthiness and empathy. That said, perceiving the robot as more empathetic did not significantly improve the experience for users, while the higher-pitched, “friendlier” robot did. It’s important for us to weigh how much a particular belief on the part of the person engaging with the voice agent matters compared to their actual experience. Arguably, for us, building empathy into the interaction so that a person feels they’ve been heard is more important than providing a pleasant experience.

In general, we expect that people will not be having a great day when they need our product. To that end, we also learned from a veteran conversation designer for Google that we should be careful in how we build the language of the interaction as well. If we want to provide an empathetic experience, we should avoid language like, “I’m sorry.” These kinds of emotions are best utilized when the actor is in a more positive affective state as they are more willing to suspend disbelief. In our case, we expect a generally more negative affect, and offering emotional connection in these scenarios has been found to cause actors to become angry or annoyed by what feels like a disingenuous offering. Instead, we were advised that making the actor feel that they’ve been heard or acknowledging that what they’ve experienced is indeed difficult may be more effective. “I’m here and I’ve heard you” can be powerful, even from a bot.

Following the Yellow Brick Road: Understanding our User Journey and Finding Opportunities

To begin our summer semester efforts, we reflected on the user journey from the discovery of benefits applications through the post-application process in order to find other opportunities where our screening tool could be useful for applicants. One interesting use case that came up was using the screener to understand how a benefits estimate may have changed for someone being recertified due to a change in household size or employment status.

User Goals in the screening tool

Simultaneously, we discussed the current pain points and goals of using the screening tool and highlighted how we could cater to more humans in need. Our proposal is a multimodal approach, where we provide both a high-tech path and a low-tech path to screening, ensuring that comfort with technology is not a barrier to accessing state benefits. Our initial target user is an interested, first-time applicant who would use this tool to better understand what they are eligible for. The high-tech path is focused on voice as an entry point, while the low-tech path is focused on email/SMS/web as entry points.

User Goals for Screening:

Goal 1: Getting a specific screening question answered, understanding eligibility status

Goal 2: Feeling confident that the application is complete and correct

Goal 3: Understanding how you are evaluated for benefits

Goal 4: Screening for more than one benefit at the same time

Goal 5: Moving from the screening process seamlessly into the application process

We fleshed out the high-tech flow and a similar low-tech flow for Goal 1. Hand-offs between the high-tech and low-tech flows are being investigated to ensure that users settle on the mode/path that’s right for them, regardless of their point of entry (this includes considering the end-to-end flow and device responsiveness).

[High Tech Flow for User Goal 1]

Our next step with these flows is bridging the gap between opportunities in the user journey and pain points in the screening tool.
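One way the hand-offs between paths could preserve a user’s progress is a short resume code: someone who starts screening on the voice path could continue over SMS without repeating questions. A minimal in-memory sketch (a real implementation would need a persistent store and expiring codes; all names here are hypothetical):

```python
# Hypothetical cross-channel hand-off sketch: partial screening answers are
# stored under a short resume code so a user can switch paths mid-flow.

import secrets
import string

SESSIONS = {}  # in-memory stand-in for a real session store

def save_session(answers: dict) -> str:
    """Persist partial screening answers and return a short resume code."""
    code = "".join(secrets.choice(string.ascii_uppercase) for _ in range(6))
    SESSIONS[code] = dict(answers)
    return code

def resume_session(code: str):
    """Load saved answers on the new channel, or None if the code is unknown."""
    return SESSIONS.get(code)
```

For a voice entry point the agent could read the code aloud; for SMS it arrives as a text, which keeps the hand-off symmetric regardless of which path the user started on.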

Establishing our Roadmap for the Remainder of the Semester

We began the process of mapping out our goals and deliverables for the semester. The first half of the semester will be geared towards recruiting first-time applicants and other relevant groups for vital testing and feedback sessions as we work towards our four primary goals for the conclusion of our efforts.

Up Next

Goal 1: We’re continuing to focus on delivering a benefits screening tool as a proof of concept for the PA SNAP application process, via an Alexa skill and an eligibility determination calculation. This begins with recruiting users, drawing on opportunities from our user journey maps, and using testing tools like usertesting.com to gain quick feedback throughout the iterative phases.

Goal 2: We’re fleshing out the requirements for an omnichannel approach that covers different usage contexts and device types, as described above. With SMS-based conversations, we’re focusing on the hand-off points at which the screening tool can transition to the application process and, from there, to recertification.

Goal 3: We want to deliver actionable artifacts in the form of blueprints and maps to our client so that next steps can be shown in a digestible format. This includes a competitive analysis to assess the market fit of our product across currently existing channels, multimodal flows with different entry points, and emergent future tech avenues that our client can pursue.

Goal 4: We’re trying to stay on top of final capstone deliverables in the midst of all this so that we can wrap up our project with a successful presentation.

[Project Roadmap]

Till next time, signing off as humans.
