This literal translation of a Turkish proverb about fortune-telling offers insight into how some of us like to hear desirable things about our futures, even though we already know they are ungrounded.
While searching for inspiration for my thesis at Carnegie Mellon around trust and conversational interfaces, I came across the notion of “cold reading” with the help of my primary thesis advisor, Dr. Dan Lockton.
Cold reading refers to a set of techniques that scam artists, mediums, fortune-tellers, palm readers, and illusionists use to convince their clients that they know much more about them than they actually do. An experienced cold reader can gain a lot of information about a client just by analyzing their body language, age, clothing, gender, religion, race or ethnicity, place of origin, or simply how they speak. Cold readers make high-probability guesses to see which ones land, while their clients reveal more bits of information about themselves along the way. They then use this new information to guide the reading, taking advantage of their clients’ confirmation bias: as the reader proposes a “possible future”, clients interpret and favor the readings in ways that confirm their existing beliefs or ideals. …
Before coming to Carnegie Mellon to study interaction design, during my master’s at the METU Dept. of Industrial Design, I started to explore a question that I found very important in the context of “smart” physical products. While smartness is a subjective term, the question “why do ‘smart’ products have adoption problems among users?” led me to discover many different problem spaces across a variety of disciplines.
While doing a literature review for my open-ended evaluative user experience research at METU, I found one term, trust, to be heavily related to the adoption of smart, or in other words autonomous, products. I then decided to explore the relationships among autonomy, trust, technology adoption, and the aesthetics of interaction in the domain of industrial design. …
I originally submitted this thesis proposal to our thesis coordinator in May 2017, and since then I have been constantly revising my research territory and questions. You will find the latest revisions to my thesis proposal in the changelog section.
Thanks for stopping by!
If you are interested in contributing to my research, don’t hesitate to contact me at firstname.lastname@example.org. I’m always happy to be distracted by related work and research on human-machine trust. Nowadays, I’m exploring conversational interfaces, algorithmic experiences, and trust.
v0.1 May 2017 — Submission without references
Our lives are becoming connected through technology as we get used to the efficiency and ease of use that artificial intelligence enables. Computerized decision-making powered by algorithms, computer programs for automated problem-solving, is becoming ever more ubiquitous in our everyday lives. As users, we often interact with these algorithms in the form of autonomous intelligent agents such as a self-driving car, an instant-credit decision system, or a connected health-management system. As the amount of information we provide to these agents and the complexity of the tasks they perform increase, they turn into black boxes: neither we as users, nor even their creators, know how they make decisions on behalf of humans. Since we don’t know how they work, we often hesitate to trust and use these agents. In my thesis, I plan to explore the relationship between users’ trust and black-box algorithms in the context of interaction design. By designing “trustworthy” interactions for a specific autonomous intelligent agent domain, my goal is to provide a design lens on explaining how algorithms work and how they decide what is good for the user, through a research-through-design methodology. …