CS 247I Class Final Reflection

Z Cinquini
Design for Understanding: CS 247i Fall 2019
4 min read · Dec 11, 2019

Designing for Understanding Complexity

Complexity is all around us — it’s just obscured by our own expertise.

Once we become familiar with a topic, we are able to see past its complications. I found this to be our greatest challenge in helping people understand complexity: by virtue of our role as the explainers, we are assumed to have a baseline familiarity with the material, unlike the people for whom we are designing. CS 247I armed me with the knowledge to mitigate this discrepancy.

Personal reflection

User-centered design

It is impossible to create something that illustrates a complex topic without first knowing a bit about the people at whom the material is directed. Selecting a specific user base helps narrow down the set of initial assumptions you expect your users to bring to the table, allowing the designer to cater more effectively to this subset of people. Being aware of the level of expertise you expect users to have can also help you measure success and guide your evaluations.

Evaluation

Gathering ongoing feedback about your successes and failures as a teaching tool is relatively straightforward; what’s harder to gauge is whether the feedback you’re receiving is genuine and time-appropriate. Without sufficient preparation, it’s easy to slip into leading questions, or to ask users directly “why do you think that occurred?” — a question likely to elicit snap judgments or outright false responses. When gathering feedback, it’s also crucial to be aware of the stage of the process. For example, it can be useful to have a more open-ended structure in a lo-fi prototype test, but when moving into hi-fi prototypes, it may be more appropriate to adopt a more rigid format so that learning outcomes can be gauged consistently and without bias.

One feedback channel we encountered that I feel has untapped potential was our in-class presentations. Since presentations were time-constrained, they often could only scratch the surface of what the product was about. As a result, the feedback they elicited rarely stretched beyond surface-level critique, which was particularly unhelpful when a presentation fell on the class period before a due date. I’m not immediately sure how to remedy this problem, but I feel that presentation time could be spent more effectively.

Perceived in-class presentation negative feedback loop

Money in Politics

Topic

The infectious enthusiasm of a teammate led me to join the Money in Politics team, and I haven’t looked back since. Though it was immediately clear that the world of campaign finance was far too dense for us to fully comprehend within the span of a quarter, narrowing our focus to specific aspects allowed us to become experts in small pieces of the convoluted puzzle. Now I feel considerably more knowledgeable about dark money, and I would feel comfortable participating in conversations that would have felt out of my element a few months ago.

Techniques

Perhaps it’s in the acronym, but the principles of Contrast, Repetition, Alignment, and Proximity (CRAP) will be what sticks with me after Stanford.

Other design classes had hammered home the importance of attention to alignment and Gestalt theories of proximity, but CS 247I helped me understand the learning potential of consistency and repetition (repetition especially eluded me for a while). In our final project, the implications of consistency became potent when we faced a recurring problem: users didn’t understand when to play cards face-up or face-down. By redesigning the player mats to use consistent styling for each type of play area, the problem that had plagued initial playtests virtually disappeared without fanfare. Feeling as if I had finally understood the power of consistency was a definite high point for me.

The usage of contrast on the card playing areas in Iteration A was largely arbitrary. In Iteration B, we associated contrast with whether or not cards should be placed face up or down, which virtually eliminated confusion.

Goals

A consistent refrain throughout our team’s entire journey was: “are we hitting our learning objectives?” More often than not, user testing revealed that we fell short, which pushed us toward further iteration. One strategy I think would have benefited our team was applying consistent quantitative assessments of our learning objectives and engagement throughout, so that we could benchmark successes and make more data-driven decisions. Granted, with our relatively small sample sizes the data would hardly have been statistically significant, but it might have guided us through our design journey more efficiently.

Thank you for a fantastic quarter!

Happy holidays,

Zack
