⚡️Breaking news: We made and tested a prototype of a digital tool that educators can use to understand diversity strategies in schools.

Leanne Liu
FAME x MHCI
13 min read · Jun 8, 2022

Over the past couple of weeks, our team was readjusting after our week-long spring break. Now that we're back at CMU, we've rolled up our sleeves and gotten back into the groove of capstone. We've done three main things during this time: adjusting to new roles within the team, tackling Sprint 1, and tackling Sprint 2. If you're not familiar with "sprints," I'll give you some more context in the next section.

Summer sprints and roles for our design team

So what is a "sprint," you ask? The term comes from the "GV Sprint," a five-day process for designing, prototyping, and testing ideas quickly. The idea is to fail fast and iterate on our design prototypes. During the research-heavy spring, we didn't have much of a sprint structure; research had to be done, and we were moving as quickly as we could. But this summer, we are adhering to the sprint schedule.

A model sketch of the GV Sprint.

Another change that summer brought was new roles for each of our team members. In the spring semester, we were all acting as researchers and each took on some of the project management tasks. This ensured we all got experience being fully focused researchers, and avoided burdening one person with all the project management work.

But now that the sprint structure was adopted into our project, we would all be doing different things at the same time. This meant we would need to draw on each of our different areas of expertise. Our summer roles transformed into traditional design and development roles, e.g. Research Lead, Project Manager, and Product Designer. But we all have specialties as well. For example, I am a Product Designer just like Marlon, but I also provide my expertise as a builder in my role as "Prototyping Lead." These role names fit the kinds of workers we envision becoming when we enter industry, and they help frame our contributions in a way that best fits each of us.

Colorful poster describing “FAMExCMU’s Summer Roles” in five sections, one section per role/team member. The poster is covered in vibrant and playful sticker graphics such as a flaming brain and a smoking laptop with an angry face.
A visual model of the individual roles and descriptions in our team.

“The spring semester proved to be a self-reflection and learning experience for all of us. We all did research, but it highlighted our interests in our own ways. But now, we can take ownership of the parts of the project that fit our goals as HCI professionals and learn what we need for our future careers.” — Leanne Liu

Sprint 1: 5 days

It helped the team get used to speedy work

Our first sprint functioned as a segue into the work style we'll be using over the summer. The spring semester was a juggle for all of our team members, as we were all taking classes on top of capstone. But now that we are mostly doing capstone work together full time, this sprint was a test of how we would develop shared visions as a team and quickly make systematic decisions. Each sprint consists of map & target, sketch, decide, build/prototype, and test phases, but for Sprint 1 we had only one day for each. This intense turnaround time helped us bring speed into the second sprint.

Testing different levels of embodiment in parallel

Coming out of the spring semester Collaborative Session with our client FAME, we found that the most important storyboards were the ones pointing to the need for "analogous experiences" that early Black educators could go through before joining private schools. Building off of this takeaway, we wanted to test whether different levels of "embodiment" would affect how people learned what teaching a class felt like. We parallel-prototyped three different modalities, each providing an analogous experience of what it feels like to teach a new class. Each had a different level of "embodiment," or closeness to reality:

  1. [High embodiment] — In-person
  2. [Mid embodiment] — Video call
  3. [Low embodiment] — Text-based chat

Re-enacting “students” for our “teachers”

Participants acted as "teachers" and used a provided lesson plan to teach us a class on color theory. To create a realistic classroom experience across all the modalities, each of our team members took on a persona of a common student personality and pretended to be a student:

  1. “Class Clown” — Marlon Mejia
  2. “Wise Guy” — Martina Tan
  3. “Distracted Daydreamer” — Swetha Kannan
  4. "Teacher's Pet" — Alana Mittleman
  5. “Curious George” — Leanne Liu
Screenshot of a classroom where Martina and Alana look studiously ahead at the front of the class, while Marlon and Swetha play with a basketball behind them.
In our Sprint 1 prototype, we roleplayed as a class of students to cause distractions for a participant who was tasked with teaching us color theory. As pictured, half of the class clearly split its attention, which interrupted the lesson.

I think our team turned out to be stellar actors, bringing participants into the mindset of being a teacher. All the participants called us "kids" without a second thought, forgetting that we were all full-grown adults. One participant mentioned that they realized how difficult it can be to teach a class of "kids."

A screenshare of the lesson plan slide deck about color theory, beside a screenshot of Student 4’s video from Zoom. The student on video has a meme as their Zoom background and a sunglasses filter obscuring their face.
Marlon took great advantage of the Zoom format in his role as Class Clown, while our participant forged ahead with teaching from a slide deck we provided them with.

We learned that all modalities have their own pros and cons

A key takeaway from this prototype was that the virtual and in-person modalities were equally valuable for learning what the experience of teaching is like. However, their pros and cons vastly differed. For example, the group chat made it difficult to answer questions from students because the conversation was so linear. At the same time, it made it easier to ignore distractions.

Two screenshots of a group message titled “Prototype” on Facebook messenger. Students 1, 2, and 4 and the Teacher chat about color theory, with Student 4 providing images from popular culture such as Spongebob miming a rainbow and the “Buff Brown” Among Us character.
In the chat-based, least-embodied form of our Sprint 1 prototype, students caused disruptions using text messages and performed additional engagement by reacting to the teacher’s chats.

Overall, this prototype was quick, scrappy, and not statistically significant. But it helped us frame how we should treat modalities in our design solution. It gave us a good basis to say, "Alright, this sort of thing that we are trying to solve does not do well as an interface," or, "This sort of thing will do well." With some experience under our belts, we jumped into reframing our understanding of the right problem to solve in Sprint 2.

“Moving at a fast pace to actually build something is new for us and I think this really forced us to get over our paralysis moving from research to design. The Sprint structure itself does a good job in forcing us to divide out the design process, think about future stages, and get something done quickly. Finishing up the first sprint in a week really set a good tone for sprint 2 where I feel like we got a lot done!” — Swetha Kannan

Sprint 2: 9 days

Map & Target — Scoping the behaviors we want to change to prevent burnout of Black educators

At this point, our team was quick on its feet. Since this sprint had nine days instead of five, we had a couple of extra days to build and test our prototypes. It was full of new thinking and building for us.

Throughout capstone, we have had two wonderful faculty advisors helping us through this project. After Sprint 1, they gave us some really helpful feedback. Since our project domain is in the realm of creating change, they introduced the idea of designs that change very granular behaviors. But in order to get there, we had to go through some reframing activities.

Our team created and used a model that fit our design project, which we call the “Behavior Change and Effect Model.” If you’re a formula kind of person, this might be for you.

A Miro board outlining our Behavior Changes and Effect model in a visual formula. The text explains headings about our “Key takeaway”, and our two Design Frames: “BE’s feeling in control of their own careers” and “BEs can choose whether or not to focus on DEI”.
Our Behavior Change and Effect model provided a framework for situating our ideation for Sprint 2.

[Current behavior] can be changed by [Solution], then they will [Desired behavior] and will [Effect].

  • Current behavior — the small action that your target users do, which represents or causes a problem.
  • Solution — the design intervention you provide.
  • Desired behavior — the new action that your target user will do, with the integration of the solution.
  • Effect — a greater goal, often abstract, that you are working towards.

With this structure, our model reads:

Black educators take on more work in regards to Diversity, Equity, & Inclusion (DEI), and this can be changed by [solution] (described in the next section). They will see how they fit into the school's policy and take on work that meets their capacity, resulting in feeling a sense of control in their careers.
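If you really do like formulas, here is a tiny sketch of the template written out as a Python function (the function and field names are just illustrative, not from any real tool we use):

```python
# A minimal sketch of the "Behavior Change and Effect Model" as a
# fill-in-the-blank template. Names here are illustrative only.

def behavior_change_statement(current_behavior, solution, desired_behavior, effect):
    """Compose a design statement from the model's four parts."""
    return (
        f"{current_behavior} can be changed by {solution}, "
        f"then they will {desired_behavior} and will {effect}."
    )

print(behavior_change_statement(
    current_behavior="Black educators taking on extra DEI work",
    solution="interacting with a model of the school's DEI strategy",
    desired_behavior="take on work that meets their capacity",
    effect="feel a sense of control in their careers",
))
```

Swapping in different parts is a quick way to sanity-check whether a candidate solution actually connects a current behavior to a desired effect.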

A Miro project titled “Sprints Focus Board”, which encompasses workspaces titled “5/26 Abstraction Laddering”, “5/26 Conceptual Mapping”, “Post-Crit”, and “Behavior Changes and Effects Model”.
Iterative, agile thinking at its best — we went through multiple reframing activities such as Abstraction Laddering and Conceptual Mapping to focus our design question for this sprint.

To fill in the gaps, we did multiple reframing activities. One of these was Abstraction Laddering, which helped us break the habit of prescribing a solution into our research insights; we got new "How Might We" (HMW) questions out of it. Then, we made Concept Maps to identify current behaviors that contribute to the HMW question. Taking that into account, we built out the rest of the Behavior Change and Effect Model, and found that we should be designing for Black educators to regain control over their own careers.

Sketch — An interactive experience with DEI strategy at schools

Once we had good footing on the scope and the design problem we were homing in on, we took a day to sketch possible solutions. Using the model helped the team stay focused while sketching ideas, and it was effective at not boxing the team into a particular modality of solution like we had done before.

A sketch of an intervention by Martina Tan, with a title and text description of the sketch to the right. The image is overlaid with four colorful dots indicating the team’s vote to develop the idea.
After every team member sketched an idea related to our design question, we voted on our favorites. This one, titled “Model the components of existing DEI initiatives at schools”, came out on top.

After the whole team sketched ideas, we all voted on the idea we wanted to test. The winning intervention was called, “Model the components of existing DEI initiatives at schools.”

Now that we had our sketch, our design statement reads:

Black educators take on more work in regards to Diversity, Equity, & Inclusion (DEI), and can be changed by interacting with a model of the DEI strategy at schools. They will see how they fit into the school’s policy and take on work that meets their capacity, resulting in feeling a sense of control in their careers.

Build — Prototype the interaction to be intentional

We had figured out the idea through group sketching, but still needed more information to figure out what exactly it could be used for and how it could be tested.

Leanne explaining a whiteboard drawing and flow of the sprint 2 prototype.
To align the team on what we were building for the second sprint prototype, Leanne whiteboarded a skeleton of the protocol before we delegated work out to individual team members.

As the Prototyping Lead, Leanne formulated the general flow of the prototype, the participants, and what the session would require from the team. These plans were corroborated by the team's in-house experts. For example, as the Research Lead, Alana fleshed out the evaluation methodologies to measure how much participants were able to learn from our prototype. Swetha drew on her experience as a journalist to create a captivating context-setting introduction. Martina brought her logistical expertise in event planning and project management to further develop the prototype testing protocol so that it could be referenced and replicated for multiple participants. Finally, Marlon took our paper prototypes and turned them into an interactive digital interface for testing at lightning speed.

The prototype testing session consisted of:

  1. Introduction — Consent, setting the stage, pre-survey for collecting demographic data.
  2. Prototype — The participant interacts with the prototype.
  3. Post-Test — The test is designed to evaluate how well the user understood the prototype’s visual model of DEI.
  4. Exit interview — Quick chat with the participant for further context.

And to scope our efforts, we referred to the 5 dimensions of prototype fidelity by Kyle Murphy. We knew we wanted participants to understand the DEI policy well, so the ranking for "Content fidelity" was a 5, the highest. And we didn't want the prototype to be static, so we ranked "Interaction fidelity" as a 4, the secondary priority to test.

Prototype dimensions (ranked on a scale from 1 (least focus) to 5 (highest focus))

  • [5] Content: How real is the stuff and is it contextual to the user?
  • [4] Interaction: How real does it feel?
  • [2] Visual: How real does it look?
  • [2] Depth: At a given level of breadth, to what degree is the user constrained?
  • [1] Breadth: To what degree is this the whole or just a part?

To specify what we wanted to understand through this testing session, we developed a hypothesis:

“If Teachers’ Academy Fellows interact with a visual model of a school’s overarching DEI plan, then they can better understand and critique the institution, as well as their own role and capacity in contributing to a DEI plan as an educator.”

A sketch of the DEI model interaction by Leanne Liu.
Paper low-fi prototype by Leanne Liu and Marlon Mejia.
In order to build our model of a DEI plan as something for educators to interact with, Marlon and Leanne made a bunch of sketches to visualize what this model would look like.

Now that we had a concept to test, the question became, "But what does it look like?" Since we wanted to show early Black educators what kind of system they would be becoming part of, the base concept was an interactive stakeholder map.

It is important to note that we were not prototyping a DEI strategy altogether, but only testing how usable and understandable a new visualization of an existing DEI strategy might be.

Alana kicked us off by researching the DEI strategic plan used by an existing private school in Pittsburgh, and creating a model of it for the team to reference. Based on her initial research work, I collaborated with Marlon to create a low fidelity prototype using paper and pens. We were able to visualize how a stakeholder map may become interactive as well, as we built on top of the design together.

A grayscale interface with a selection of scenarios on one side, and on the other a display of four circles representing School Operations staff, a Board of Trustees, Community, and Specialists (OEI) at St. Edmund's Academy.
The starting screen of our visual model on Figma highlights a few scenarios that educators can walk through, as well as the “spheres” of community members and institutional staff that are involved.

Once we came to a quick consensus on the paper prototype, Marlon whipped together a digitized version that we could send to the participants of the prototype testing. We recruited several current and former Black educators as participants.

“Rather than focusing on a high-level view of creating a prototype that was either low or high fidelity, we were able to hone in on particular elements of our design that required more or less attention. We created a “flushed out” design regarding the presented content, ensuring that the material felt natural and in line with what our participants could expect. …This prioritization resulted in our participants having the opportunity to properly critique the DEI model presented while briefly touching upon the usability aspect of the interface.” — Marlon Mejia

Testing — Our extra thinking paid off

It was exciting to see our prototype come to life as we constructed the evaluation protocol and created the interface for it. The team was interested in ensuring that the data we were collecting were actually serving the hypothesis we wanted to test. The extra thinking seemed to pay off, since we were able to ask both survey and interview questions that elicited interesting and varied responses from our participants.

Testing for abstract deltas like “understanding” and “ability to critique” required very intentional definitions of how to assess them. I realized that creating these definitions requires the type of ontological thinking that psychology researchers and curriculum writers have to deal with all the time — it can really make your head spin.

We saw that Black educators were able to critique the DEI model as presented in the prototype.

We’ve done our testing! But what did we learn? We’re about to dig deeply into our findings. But even without the full analysis, we’re finding some important takeaways already.

A Miro board of an affinity diagram containing columns of post-its related to Policy, Role, and Usability.
Our initial affinity diagram of the findings across testing sessions with five participants including administrators, educators, and teaching residents.

For most, if not all, of our testing sessions, we saw participants being able to critique the DEI strategy presented in our prototype. This validates that our prototype is effective as a conversational tool for people to talk about the weaknesses of DEI strategies and where they have potential for improvement. In regards to the interface of our prototype, participants felt that it was not welcoming and needed to be less visually confusing. All of these data points are important to our design process, and we can't wait to dig into them further.

“I was actually able to take a form of testing I had learned about in my psychology background and apply it here, which was really exciting. It also made me remember how much more grounded my work feels when I can point to large bodies of third party research.” — Alana Mittleman

Next steps: Sprint 3

Now that we have put a bow on Sprint 2, it's time to start Sprint 3. The big thing that will happen is visiting the Teachers' Academy Fellows for the first time. We are super excited to experience the Teachers' Academy program firsthand and build relationships with the Fellows. When they have free time, we hope to test versions of our prototype with them for more valuable feedback. The prototype concept will likely continue from Sprint 2, but it might change in scope and in the 5 dimensions of prototype fidelity mentioned in the previous section.

“Our team is gearing up to meet with the Teachers’ Academy Fellows for the first time and make the most of it. I’m curious about how we can test versions of our prototype, or other ideations, with the Fellows, while simultaneously building rapport and trust with them so that we can get more valuable feedback.”

— Martina Tan
