Creating an AI Experience Using 9 IBM Design Thinking Exercises

In the past few months, I’ve had the chance to work with a group of talented researchers, designers, and engineers to build an AI tool that supports children’s speech acquisition. We used an AI-specific design thinking framework created by IBM, one that is undergoing constant evaluation and redesign. In this article, I talk about my experience with the current framework and some of the changes now underway.

A key thing to know before reading on is that a lot of work happened before we jumped into these exercises: we did primary and secondary research, pitched and re-pitched ideas, and more. Our group also completed the IBM Design Thinking Practitioner short course — free for anyone with a .edu account — which was necessary to align our team before we went through the nine-step design thinking framework.

The tasks included:

  1. AI User profile
  2. User journey
  3. Framing
  4. Intents
  5. Jobs to Be Done
  6. AI value map
  7. Big ideas
  8. To-be Journey
  9. Prioritization
  10. AI hypotheses

If you’re a user researcher or designer, you’re likely familiar with almost all of these tasks. The main difference is that this framework keeps us continuously weighing the value of AI: does it do something faster and better than humanly possible, and in an ethical way?

Overall, this framework certainly helped in that regard. We were constantly brought back to questions of data privacy and control. We questioned and re-questioned the use of relational, anthropomorphic cognitive experiences (which IEEE and BSI ethical standards currently view with a critical eye). We were also grounded by the fact that the framework evolved from traditional design thinking exercises, which had the advantage of keeping the focus on the user: their intents, needs, jobs, and the value we can add.

We are happy to share our Mural board below. You can also read about the entire process and the product here.

However, at the end of these tasks our team wasn’t 100% aligned: we’d opened many doors and pulled out all the ideas, but weren’t able to fully organize everything, despite aligning the team well before we began. Many factors can play into this, such as the diversity of a team or an incredibly complex topic that simply takes longer to tackle. Still, I think further refinement of the tasks done before and during a design thinking workshop would help. I suggest…

  1. Aligning on a definition of AI before beginning. Are we talking about the underlying ML systems or the front-end user experience? This was a large tug-of-war between our technical and big-idea group members (though not in a bad way).
  2. Knowing some of the related work behind the design thinking exercises, in addition to previous work on AI design. One specific model that came to mind was the diffusion of innovations model, which profiles the different types of users who adopt a technology. Other useful models include Knapp’s relationship escalation model, uncanny valley studies, and the AI design best practices published by IEEE and BSI. This could be part of the literature review that takes place before the nine exercises.

Jennifer Sukis is already ten steps ahead of us and is redesigning these exercises as I write. Her revised draft brings important AI issues into the process earlier, such as considering data streams, risks, and prioritization of the product versus the user. You can check back here soon for a link to the new framework!
