Literature Review
A compilation of findings
Interface, Branden Hookway
In Interface, Hookway explores how an interface is “a form of relating to technology… to be composed of the combined activities of human and machine” rather than a “form of technology.” He investigates the relation of interfaces to a variety of theories (theories of agency, theories of control, and so on), explores the origin and history of interfaces, and addresses how interfaces serve as a site for “tacit or embodied intelligence.”
Hookway raises questions about what exactly an interface is: for instance, how an interface is meant “to actively maintain, police, and draw on the separation… between two or more distinct entities, conditions, or states.” The medium of the interface I eventually make will play a large role in just how that separation is maintained.
Hookway also notes that an “interface is a reflection,” which is a consideration I will need to make in the future: “what qualities” of humans do I want to reflect in the context I am operating in?
Hookway also brings up that “the space between engagement and knowing is for the user the potential for anxiety and surprise.” Exactly how I engage users going through that space in such a vulnerable context will play a significant role in the success of whatever I design.
The book closes with a discussion of J.C.R. Licklider and the possibility of a “human-machine relationship as intimate as that between a fig tree and fig wasp,” further strengthening my desire to create such a relationship. Hookway also brings up the “light pen” and its ability to allow the “operator of a computer [to] ‘in general interact with it very much as he would with another engineer.’” This has made me think about what exactly my interface should resemble: a conversation with a relationship counselor, or an entirely different paradigm.
100 Simple Secrets of Great Relationships, David Niven
Niven pulls from scientific studies on relationships to present principles that, if implemented, can lead to great relationships. These principles were largely supported through conversations with older individuals who emphasized to Niven that “‘It takes effort. It isn’t easy. But anyone can do it.’”
I largely read this book in order to pull potential strategies that I could implement in whatever I design. These strategies include:
- See Possibilities Where Others See Obstacles: “Constant attention to the weaknesses of any relationship will weaken it. Constant attention to the strengths of any relationship will strengthen it.”
- You’ll Forget the Disagreement but Remember the Disagreeing: “Regardless of the disagreement at hand, remember to always put the feelings of your partner ahead of the specific complaint because the feelings will linger long after the complaint is solved or forgotten.”
- Friends Speak From Experience — Their Own: “Value their friendship, but understand that their advice applies primarily to themselves.”
- You Don’t Have to See Eye to Eye on Everything: “respect for the other person’s perspective is far more important than constant agreement with it.”
- Think About Potential: “you must always keep part of your attention focused on hope, on the possibility that whatever difficulties arise today will be solved, forgotten, or at least less important in the future.”
- Limit Your Interest in the Past: “There is nothing you can do or say that will change the history of your partner, but by not harping on that history, you can make the future of your relationship emerge.”
- It’s for You — or It Isn’t: “We cannot live our life seeking the acceptance of others because doing so will compromise our ability to gain acceptance from the most important source: ourselves.”
Men Are from Mars, Women Are from Venus, John Gray
Gray provides “a practical guide for improving communication and getting what you want in your relationship.” He draws on his experience counseling couples to describe “how men and women are different” and how an awareness of those differences can dramatically improve a couple’s communication.
As with 100 Simple Secrets of Great Relationships, I read this book to pull specific points that could influence the design of conversational interfaces. One point introduced early in the book is that “We mistakenly assume that if our partners love us they will react and behave in certain ways — the ways we react and behave when we love someone.” I will need to consider whether this extends to the specific interactions a user takes part in (should a woman be introduced to a topic or theme differently than a man?). Points introduced in the book that could be implemented within an artifact are:
- “Men mistakenly offer solutions and invalidate feelings while women offer unsolicited advice and direction.”
- Strategies for getting what you want from a partner of the opposite sex when they are coping with stress (when men pull away and women “talk about what’s bothering them”).
- How men and women differ with regard to motivation (a man’s motivation is derived from feeling needed, while a woman’s is derived from feeling cherished).
- The differences in the sexes’ needs for intimacy (the rhythmic rise and fall of a woman’s loving attitude, a man’s need to pull away at times).
- The difference in the kind of love men and women need (men desire a trusting, appreciative, and accepting love, while women desire a caring, understanding, and respectful love).
- The differences in how men and women “keep score” (women value little gifts as much as big gifts, while men give points when they feel appreciated).
The Love Fix: Repair and Restore Your Relationship Right Now, Tara Fields
Tara Fields, a marriage and family therapist by trade, uses her years of experience counseling hundreds of clients through frustration and heartbreak to help readers understand “how they handle their problems” and how to make “some pretty forward changes to how they communicate.” Fields focuses on “the five most common fighting patterns” and how to keep from falling into those patterns.
As with 100 Simple Secrets of Great Relationships and Men Are from Mars, Women Are from Venus, I read Fields’s book to pull specific strategies that could be embedded into whatever I design.
An early point Fields makes in her book is that “if you keep having the same argument over and over, consider it a sign that something similar may be going on in your relationship. Consider it a clue that you’re not really fighting about what you’re fighting about.” This will be a very important consideration when analyzing conversations between couples.
The five conflict loops that could potentially be built into the artifact are:
- The Parent Trap — Equal Partnership: When “one partner acts like the parent by checking up on the other’s performance or (mis)behaviors, while the other throws responsibility out the window — to an equal partnership between adults and lovers.”
- Come Close, Go Away — Interdependent Relationship: when “you’re gripping one end of the rope, and your partner is gripping the other… and you are both pulling as hard as you can, straining your muscles, your minds, and your emotions to knock the other person right on his or her tush and drag him or her” to your side.
- The Blame Game and the Shame Spiral — Ownership and Respect: when you “angrily blame or shame your partner” and try to “take power.”
- Testing, Testing, 1, 2, 3 — Profound Trust: “When we test our partners” and ask “Do you love me? Am I important to you?… How can I push things with money or sexual issues or commitment?” and when “the goal of testing isn’t to get left… Usually, the goal is to find love or prove love or solidify love.”
- Grow Apart — Grow Together: When you don’t embrace change or reframe that change and instead “end up stuck.”
Affective Computing, Rosalind W. Picard
In Affective Computing, Picard “proposes that we give computers the ability to recognize, express, and in some cases, ‘have’ emotions.” The book includes “the intellectual framework for affective computing” and “descriptions of tools and progress” in the area of affective computing (circa 2000).
Picard introduces different forms of emotional expression that could potentially trigger different actions (a rough sketch of how these channels might be routed to actions follows the list). These include:
- Facial Expression
- Vocal Intonation
- Motor Forms of Expression
- Posture
- Pupillary Dilation
- Temperature
- Blood Pressure
- Electrodermal Response
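To make this list more concrete, here is a minimal sketch of how such signal channels might be routed to interface actions. The taxonomy, the handler names, and the dispatch logic are my own illustrative assumptions, not anything Picard prescribes.

```python
from enum import Enum, auto
from typing import Callable, Dict

# Hypothetical taxonomy of the affective signal channels listed above.
class AffectSignal(Enum):
    FACIAL_EXPRESSION = auto()
    VOCAL_INTONATION = auto()
    MOTOR_EXPRESSION = auto()
    POSTURE = auto()
    PUPILLARY_DILATION = auto()
    TEMPERATURE = auto()
    BLOOD_PRESSURE = auto()
    ELECTRODERMAL_RESPONSE = auto()

# Illustrative actions an interface might take; names and behavior are
# placeholders, not a real design.
def suggest_pause(reading: float) -> None:
    print(f"Elevated reading ({reading:.2f}); suggest a short break.")

def log_only(reading: float) -> None:
    print(f"Logging reading {reading:.2f} for later review.")

# A simple routing table from signal channel to action.
ACTIONS: Dict[AffectSignal, Callable[[float], None]] = {
    AffectSignal.VOCAL_INTONATION: suggest_pause,
    AffectSignal.ELECTRODERMAL_RESPONSE: suggest_pause,
    AffectSignal.POSTURE: log_only,
}

def handle_signal(signal: AffectSignal, reading: float) -> None:
    """Dispatch a normalized reading (0..1) to the channel's action."""
    ACTIONS.get(signal, log_only)(reading)

handle_signal(AffectSignal.VOCAL_INTONATION, 0.87)
```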
Picard also mentions that “one of the outstanding problems in trying to recognize emotions is that different individuals may express the same emotion differently.” This individual variability is something I will need to address to complete my thesis effectively.
Picard also makes the interesting point that “it is possible for computers to recognize affective states that do not presently have names.” Being able to recognize such patterns in a couple’s communication will be key to enabling more effective conversations; a minimal sketch of this idea follows.
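One hedged way to read that claim is as unsupervised pattern discovery: cluster conversation-level features without labels, and name the clusters later if they prove meaningful. The features, data, and cluster count below are invented purely for illustration.

```python
# Sketch: discovering "unnamed" affective states as clusters of
# conversation features. All features and data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-exchange features: speech rate, pitch variance,
# interruption count, pause length (all normalized to 0..1).
exchanges = rng.random((200, 4))

# Group exchanges into a handful of recurring patterns.
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(exchanges)

# Each cluster is a recurring pattern with no name yet; a designer
# (or the couple themselves) could later label the meaningful ones.
for label in range(model.n_clusters):
    centroid = model.cluster_centers_[label]
    print(f"pattern {label}: centroid = {np.round(centroid, 2)}")
```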
Picard also discusses the Yerkes-Dodson curve (the inverted-U relationship between arousal and performance), how it might work in the context of “mental workouts,” and how “it is also reasonable to expect to find a cognitive-affective analogy to the physical fitness pattern of warming up, exerting effort, then cooling down.” This seems particularly relevant for the high-stakes conversations that take place regularly in the context of a relationship.
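As a reminder of the curve’s shape, the toy model below treats performance as an inverted U over arousal. The Gaussian form and its parameters are my own illustrative assumptions, not Picard’s.

```python
import math

# Toy inverted-U: performance peaks at a moderate arousal level and
# falls off on either side. Parameters are arbitrary.
def performance(arousal: float, optimum: float = 0.5, width: float = 0.2) -> float:
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in (0.1, 0.5, 0.9):
    print(f"arousal={a:.1f} -> performance={performance(a):.2f}")
```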
Picard closes the book by briefly discussing how computers could “have more access to our physical forms of expression than most people with whom we come into contact,” how they could encrypt and wirelessly transmit your mood to your spouse, and how in some cases “there are good reasons not to broadcast your affective patterns to the world.” I will need to consider deeply what information is too sensitive to share with a spouse and what is not, while also acknowledging that not all couples are the same.
Emotional Design, Donald A. Norman
In Emotional Design, Norman explores how “every time we encounter an object, our reaction is determined not only by how well it works, but by how good it looks to us, and by the self-image, loyalty, and even nostalgia it evokes in us” and how “emotion also plays a big part in a designer’s work.”
One framework I can keep in mind throughout my thesis is the set of three levels of design Norman introduces:
- Visceral Design: “what nature does. We humans evolved to coexist in the environment of other humans, animals, plants, landscapes, weather, and other phenomena. As a result, we are exquisitely tuned to receive powerful emotional signals from the environment that get interpreted automatically at the visceral level.”
- Behavioral Design: “all about use. Appearance doesn’t matter. Rationale doesn’t matter. Performance does. This is the aspect of design that practitioners in the usability community focus upon.”
- Reflective Design: “the meaning of things, the personal remembrances something evokes” and the “self-image and the message a product sends to others”
Norman also introduces Khaslavsky and Shedroff’s concept of seduction, or why “a design maintain[s] its effectiveness even after long acquaintance” beyond the “initial attraction.” They note that the steps of seduction are “enticement, relationship, and fulfillment: make an emotional promise, continually fulfill the promise, and end the experience in a memorable way.” I will have to ensure that whatever I design “seduces” users to continuously come back and have effective conversations with their partners.
Superintelligence, Nick Bostrom
Bostrom uses Superintelligence to “try to understand the challenge presented by the prospect of superintelligence, and how we might best respond.” In the book, he reviews past developments, potential paths to superintelligence, and the potential effects of a singularity.
One part I found particularly interesting (though I am not yet sure how I will use it) is his section on the paths to superintelligence. Bostrom covers artificial intelligence, whole brain emulation, biological cognition, brain-computer interfaces, and networks and organizations. I was a little sad to read that “brain-computer interfaces look unlikely as a source of superintelligence.”
Computer Power and Human Reason: From Judgment to Calculation, Joseph Weizenbaum
In Computer Power and Human Reason: From Judgment to Calculation, Joseph Weizenbaum discusses the potential negative implications of artificially intelligent systems becoming an ever more important part of our lives. He specifically reflects on his experience designing ELIZA, a conversational interface that allowed “human correspondents” (Weizenbaum, 1966) to communicate through a typewriter with a simulated psychologist.
I want to specifically point out Joseph Weizenbaum’s evolving view of his creation. As the years passed, Weizenbaum’s view of ELIZA shifted from pride to disgust. He saw how “the computer’s intellectual feats … [were explained by users] by bringing to bear the single analogy available to them, that is, their model of their own capacity to think” (Weizenbaum, 1976, p. 10), eventually leading these same users to think of ELIZA as a person and not as a system. He also saw how ELIZA revealed a “tendency to treat responsive computer programs as more intelligent than they really are” (Prujit, 2004, p. 521). This made him “realize that this newly created reality was and remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality” (Weizenbaum, 1976, p. 25), ultimately leading him to believe “that, however intelligent machines may be made to be, there are some acts of thought that ought to be attempted only by humans” (Weizenbaum, 1976, p. 13).
Such issues must be considered in the creation of a conversational interface, for “man’s capacity to manipulate symbols, his very ability to think, is inextricably interwoven with his linguistic abilities.” (Weizenbaum, 1976, p. 184) Conversational interfaces should not limit a human’s thought and reality, but provide an interface for the advancement of thought.
Man-Computer Symbiosis, J.C.R. Licklider
In Man-Computer Symbiosis, J.C.R. Licklider describes what he calls a “close coupling between the human and the electronic members of the partnership,” one that aims for:
- computers to facilitate “formulative thinking as they now facilitate the solution of formulated problems”
- computers to “enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence”
He also notes that “in the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking.”
One particular aspect I want to keep in mind is the line “Man-computer symbiosis is probably not the ultimate paradigm for complex technological systems” and the prediction that “there will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association.” I would argue that this interim will hopefully never end. I acknowledge that in some future situations machines will be limited if humans are involved, but I believe there will always be scenarios where both humans and machines benefit through “intimate association.”
Visual Perception, Vicki Bruce, Patrick R. Green, Mark A. Georgeson
Visual Perception presents “a wide range of recent and theoretical developments in the field of visual perception.” One chapter in this book is on the Perception of the Social World, which looks into how vision is important in our navigation of the world.
A couple of interesting findings from this book include:
- How “patterns of relative motion between simple shapes can, in the absence of any detailed information about form, give rise to perception of some properties of the social world.” One can see how this insight has been applied to current AI interfaces in the form of pulsating shapes, and I wonder how it could be applied in the context of two people communicating with the assistance of an AI. Would individuals want such an AI to have a form, and if so, at what level: more of an environmental or an individual level? This also makes me think about texting someone back and forth while knowing they are active in the conversation; what kind of elements could be put in place so that I would have a better sense of the other person’s state of mind?
- How “the perception of gaze has been seen to play a fundamental role in further cognitive activities. First, the detection of another’s direction of attention allows the establishment of joint attention between two people, which may play a key role in conversation” and how “quite specific information about faces can be gleaned simply from the pattern of transformations present, without any need for information about the form of the face.” This raises the question: if two people are taking part in a digital conversation, what elements could be added so that each could better understand the other’s state of mind? I do not yet know whether this will be relevant for me, but it is an interesting question.
The Image of the City, Kevin Lynch
Kevin Lynch explores the visual form of cities (Boston, Jersey City, and Los Angeles), while offering “some first principles of city design.” He takes readers through each city, identifying paths, or “channels along which the observer moves”; edges, or “linear elements not used or considered as paths by the observer”; districts, or “medium-to-large sections of the city”; nodes, or “strategic points of the city”; and landmarks, or “external point-references.”
One can pull a number of connections from Lynch’s study of cities to the design of digital products. For instance, Lynch notes that “the paths, the network of habitual or potential lines of movement through the urban complex, are the most potent means by which the whole can be ordered.” In a digital product, clear steps are equally necessary: as a user, I want to know what to do next after I have completed a given task. He also notes that a city’s “form must be somewhat noncommittal, plastic to the purposes and perceptions of its citizens.” One can see how this is relevant to bots/AI, both in the acknowledgment that every user is different and in the need for an acceptable amount of flexibility built into models to handle the differences between people.
The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?, Matthias Scheutz
Matthias Scheutz’s piece looks at affect and how we take it for granted in our interactions with other human beings. It then looks at how today’s artificial agents are not capable of affect and whether “we should nevertheless develop affective artificial agents; in fact, we might be morally obligated to do so if they end up being the lesser evil compared to (complex) artificial agents without affect.”
One interesting takeaway is Scheutz’s set of points around the ethics of building agents with affect. For instance, one could “understand the causal potential of different affective states in order to prevent false attributions with all the potentially ensuing consequences about alleged ‘emotional agents.’” The mention of “false attributions” is particularly relevant for me in the context of two partners conversing: distinguishing true signals from false ones will be important when detecting emotion in a conversation.
The computational therapeutic: exploring Weizenbaum’s ELIZA as a history of the present, Caroline Bassett
Caroline Bassett explores ELIZA and how ELIZA “illustrates ways in which contemporary anxieties and debates over machine smartness connect to earlier formations.” The piece pays particular attention to “the ‘machinic therapeutic’ condition we find ourselves in” and how that model applies to the more current instantiations of artificial intelligence we see today.
Bassett talks in depth about how “ELIZA might be a hopeful monster precisely because, as a Rogerian Machine, she appeared impossible, but worked.” It “worked” in the sense that people were able to have a substantial conversation with it without realizing it was an agent. She also mentions that it provokes “a re-think of polemical positions around the computational therapeutic.” Maybe ELIZA is evidence that an artificial agent does not need to be perfect (in terms of natural language processing) and instead needs to replicate the naturalness, niceties, and manners of human conversation.
When the Interface Is a Face, Lee Sproull, Mani Subramani, Sara Kiesler, Janet H. Walker, and Keith Waters
This paper discusses the results of a study examining how individuals behave differently when “designers introduce more human-like qualities into computer interfaces.” The study introduced a talking-face interface that asked a series of multiple-choice questions a counselor might ask, with the face reacting to the participant’s answers. The authors also built an interface without the face so they could compare the responses participants gave to each version.
A couple of things to take away from this paper include:
- As expected, individuals “attribute some personality attributes to the faces differently than to a text display.”
- One thing I have not paid particular attention to so far is differences based on one’s sex. The paper notes that in “one recent study women thought it was less appropriate than men did for computers to take on roles entailing personal interaction, such as boss or psychiatrist.” This makes me wonder whether offering different agents, differing in form or in communication method, might be appropriate for different users, or whether machine learning and its ability to personalize agents/interfaces already achieves this in some sense.
- The authors end on the point that “the prospect of people putting their best foot forward for their computer is an odd one indeed.” This is very interesting, and I wonder whether it still holds up given that the face experiment took place more than 20 years ago. Has the novelty of computers worn off enough that this is no longer true?

