Why Explore Design & Trust? A Thesis Kick-off
On My Motivations and Initial Research Progress
Before coming to Carnegie Mellon to pursue a course in interaction design, during my master's at the METU Dept. of Industrial Design, I started exploring the answers to a question I found very important in the context of "smart" physical products. While smartness is a subjective term, the question "why do 'smart' products have adoption problems among users?" led me to discover many different problem spaces related to a variety of disciplines.
While doing a literature review for my open-ended evaluative user experience research at METU, I found one term, trust, to be heavily related to the adoption of smart (in other words, autonomous) products. I then decided to explore the relationships among autonomy, trust, technology adoption, and aesthetics of interaction in the domain of industrial design.
Before leaving METU for Carnegie Mellon, I delivered an evaluative UX research report to a leading Turkish company that sponsored my thesis at METU, which provided a very interesting case for studying trust with an in-market "smart" product (currently under NDA). Due to my intense schedule after coming to the US, I haven't found the time to translate my findings into a research paper, but I'm planning to publish it in Winter 2018.
One reason I chose Carnegie Mellon's Master of Design program was its applied research ethos. I knew that I wanted to keep working on the relationship between trust and design, and a year in the program helped me explore the many roles trust plays across the broad domain of digital product design.
In May 2017, under the guidance of my primary thesis advisor, Dr. Dan Lockton, I wrote my thesis proposal on how we can design for trust to open the black box of autonomous agents' algorithms. It is ambitious for me to work in an area that I have read a lot about but haven't practiced as a designer. But then again, I think that's the point of these research-driven projects. Writing a thesis is very similar to writing a book: I believe that, while introducing others to a concept, the author learns a lot about the topic and the domain.
For this semester, one of my immediate goals was to scope down what I meant by "autonomous agents" in last semester's thesis proposal, as the term can refer to almost any piece of software or interface driven by an algorithm: an autonomous vehicle, a chatbot, an IoT assistant, a social media platform's feed algorithm, a physical humanoid robot, or even an elevator.
As I started my secondary research journey exploring concepts such as black-box algorithms, human-machine trust, algorithm awareness, fairness, and responsibility, I quickly narrowed my research to a hot topic: conversational interfaces, where human-machine trust is a major determining factor in a successful two-way relationship.
Over the semester, I learned many different approaches from Dan, my thesis advisor, that could be relevant to my thesis. So far, I have explored "deception" as a research tool in perceptual HCI research, as well as some very interesting persuasion theories relevant to human perception research, including the rules of cold reading. Although I decided to move on from deception, as it is a tricky concept that may have serious implications in human-centered design research, I will share two design artifacts that I started to develop during that time in my next article:
- A prototype of a recommendation chatbot, to be used as a research tool, that intentionally makes false promises to its users in order to teach them about their excessive trust in malicious chatbots. It role-plays a malicious bot and asks users to share their location data and (pseudo) Facebook profile data with the promise of finding the best coffee or food place nearby and the places most popular among the user's friends. Its aim is to find out whether users will over-trust such a bot and share their data with it even though it provides no evidence of ability.
a. Can I measure users' level of trust rather than asking them about it?
b. What would be the nominal level of trust for a simple personalized recommendation CUI that uses personal data, such as location and social media profile data, and therefore asks users to share them? In other words, how easily do users trust the system enough to give it their data?
c. What do users think after interacting with a system that teaches them about their trust level? What are their reactions, and why?
d. What do users feel when deceived, even if it is for their own good?
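To make the prototype's logic concrete, here is a minimal sketch of the deceptive-recommendation session flow described above. All function names, messages, and the over-trust heuristic are my illustrative assumptions for this article, not the actual prototype's implementation.

```python
# Hypothetical sketch: a bot that requests personal data without
# demonstrating any ability, then debriefs users who complied.
def run_trust_probe(share_location: bool, share_profile: bool) -> dict:
    """Simulate one session and record whether the user over-trusted
    the bot, i.e., granted data despite zero evidence of competence."""
    transcript = []
    transcript.append("BOT: I can find the best coffee place your friends love!")
    transcript.append("BOT: May I access your location?")
    transcript.append(f"USER: {'yes' if share_location else 'no'}")
    transcript.append("BOT: May I access your Facebook profile?")
    transcript.append(f"USER: {'yes' if share_profile else 'no'}")

    # Crude proxy for over-trust: both data requests were granted
    # before the bot showed any ability at all.
    over_trusted = share_location and share_profile
    if over_trusted:
        transcript.append(
            "BOT (debrief): I never showed any ability, yet you shared "
            "your data. A malicious bot could have misused it."
        )
    return {"over_trusted": over_trusted, "transcript": transcript}
```

A session where the user grants both requests would end with the teaching debrief, which is where the reflective questions above would be asked.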
- A set of method cards for interaction designers that introduces the psychological techniques that pseudo-scientists such as palm readers and fortune tellers use to make their "clients" believe them. The aim of the cards is to help designers find real-life examples of interfaces that may unconsciously employ interaction patterns that work in similar ways, and to think critically about the trade-offs and implications of using such patterns in the domain of product design.
a. Do cold-reading techniques apply to interaction design?
b. Does learning about cold-reading techniques make designers question their design decision-making critically?
c. How do designers approach such techniques? On the spectrum between "dark patterns" and "white patterns", where do design patterns that resemble cold-reading techniques fall?
Thanks for stopping by!
If you are interested in contributing to my research, don't hesitate to contact me at email@example.com. I'm always happy to be distracted by learning about related work and research on human-machine trust. Nowadays, I'm exploring conversational interfaces, algorithmic experiences, and trust.