A nudge leverages people’s cognitive biases to help them make better decisions. For example, to increase pension saving rates in the UK, the government mandated private companies to automatically enrol their employees in a pension scheme. Contributions are deducted from the employee’s salary unless they opt out. This policy shift leverages the status-quo bias, our tendency to keep the current state of affairs, to encourage people to save for retirement.
Thaler and Sunstein define a nudge as “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly…
I admit I take pictures of my food. I take lots at home whenever I put a meal together. However, I avoid taking them when I’m out with acquaintances or with that friend who insists that foodstagramming could be a sign of mental illness. I adjust my behavior depending on who might be judging my actions — what accounts for the difference is social acceptance or the absence of social disapproval.
In a recent review, Koelle and colleagues argue that social acceptability is integral to designing user interfaces as our interactions with them increasingly happen in a social context. The…
As more users adopt virtual reality technology, the potential risks to human well-being, privacy, and security become increasingly important. At the forefront of mitigating these risks are VR designers and developers who navigate the field using their own moral compasses.
In an exploratory study, developers expressed the need for standards and ethical guidance. In response, researchers led a co-design of a code of ethics together with VR developers. The researchers proposed initial guidelines, which the developers themselves expanded into the following code of ethics.
Professionals involved in the development of VR experiences have the duty to:
Ensure that the intensity of…
Read more in Planeta Chatbot: everything about chatbots, voice apps, and artificial intelligence.
When developing for virtual reality, designers focus on providing an immersive experience while the user is in the application. To deliver a great overall user experience, however, designers should also pay attention to the moment of exit: the point when the experience ends and users take off their headsets.
Exits from VR experiences can be designed to achieve desirable effects. For example, an exit could ease the return to the real world and so minimize the shock of re-entry.
Knibbe and colleagues conducted user research to explore the design space for VR exits. …
The most annoying experience with voice user interfaces is when they don’t understand what you are saying. It’s even worse when they give you no clue about what went wrong.
Whenever an exchange of information fails, virtual assistants often just say, “Sorry, I don’t know.” The burden of repairing the communication is passed on to the user, as a recent ethnographic study confirms: researchers documented the strategies adults apply to get successful interactions out of their smart speakers.
If adults have this much difficulty, what can we expect of children?
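The repair burden shrinks when the assistant reports which stage of understanding failed instead of giving a generic apology. A minimal sketch of that idea, with stage names and wording that are my own assumptions rather than anything from the study:

```python
# Illustrative sketch (not from the ethnographic study): a fallback that
# tells the user *which* stage failed, rather than a bare "Sorry, I don't know."
def repair_prompt(stage: str, heard: str = "") -> str:
    """Return an error message that gives the user a clue about what went wrong."""
    if stage == "asr":          # speech recognition produced nothing usable
        return "I couldn't hear that clearly. Could you repeat it?"
    if stage == "intent":       # words were recognized, but no request matched
        return f'I heard "{heard}", but I don\'t know how to help with that yet.'
    if stage == "fulfillment":  # request understood, but the action failed
        return "I understood your request, but that service isn't responding."
    return "Sorry, something went wrong."

print(repair_prompt("intent", heard="play my focus mix"))
```

Echoing back what was heard (`heard`) lets users correct recognition errors themselves, one of the repair strategies the burden currently forces onto them.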
What’s the current state of artificial intelligence when applied to user interfaces? What are the common problems and how can we do better?
To find out, researchers from LMU Munich picked three representative AI-infused interfaces: the Facebook news feed, recommendations by Netflix, and route planning with Google Maps. They then analyzed a total of 35,488 user reviews of these features.
Thirty-five thousand is a lot of reviews, so they used statistical topic modeling before doing the manual analysis. …
I read Gray and Chivukula’s work on Ethical Mediation in UX Practice and I thought I’d share the insights in their research.
The key idea is mediation; researchers frame ethical decision-making as an interplay among three mediators — individual practices, organizational practices, and applied ethics. They argue that ethics is practice-led, that designers develop their ethical perspectives as they encounter various work situations. Although preliminary, their case studies with three design professionals illustrate some of the dynamics among mediators.
A designer’s work is subject to three sets of knowledge and practices:
What makes conversation good?
It’s important to keep this question in mind when designing for voice assistants like Google Home, Alexa, Siri, and Cortana. Design heuristics for voice interaction recommend having a match between the system and the real world. This entails learning from how people would normally communicate with each other and applying the extracted insights to design.
To identify the qualities of good conversation, Clark and colleagues interviewed people about their views on human-human and human-agent communication. Inductive thematic analysis revealed four attributes essential to good conversations. …
Unlike other applications, those infused with artificial intelligence (AI) are inconsistent because they are continuously learning. Left to their own devices, AI systems can learn social bias from human-generated data. Worse, they can reinforce that bias and promote it to other people. For example, the dating app Coffee Meets Bagel tended to recommend people of the same ethnicity even to users who did not indicate any preference.
Based on research by Hutson and colleagues on debiasing intimate platforms, I want to share how to mitigate social bias in a popular kind of AI-infused product: dating apps.
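One way to notice this kind of learned bias before users do is a simple audit of the recommendation logs. The sketch below is my own construction, not a method from Hutson and colleagues, and the log entries are hypothetical: it compares the same-ethnicity recommendation rate for no-preference users against a rough chance baseline.

```python
# Illustrative sketch (my construction, not Hutson et al.'s method): audit
# same-ethnicity recommendation rates for users who stated *no* preference.
from collections import Counter

# Hypothetical log of (user_ethnicity, recommended_profile_ethnicity) pairs.
recs_for_no_preference_users = [
    ("A", "A"), ("A", "A"), ("A", "B"),
    ("B", "B"), ("B", "B"), ("B", "A"),
]

same = sum(1 for u, r in recs_for_no_preference_users if u == r)
same_rate = same / len(recs_for_no_preference_users)

# Rough chance baseline: if recommendations were ethnicity-blind, the
# probability of a same-ethnicity pair is the sum of squared pool shares.
pool = Counter(r for _, r in recs_for_no_preference_users)
total = len(recs_for_no_preference_users)
expected = sum((n / total) ** 2 for n in pool.values())

print(f"same-ethnicity rate: {same_rate:.2f} (chance baseline ~{expected:.2f})")
```

A rate well above the baseline for users who expressed no preference suggests the recommender has absorbed a bias from interaction data and is a signal to investigate, not proof on its own.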
Bridging research to practice, one article at a time. HCI researcher turned IT professional. Writes UX insights and personal essays.