TRI @ CHI 2023 — Medium Post

Toyota Research Institute
May 4, 2023 · 5 min read

The ACM CHI (pronounced ‘kai’) Conference on Human Factors in Computing Systems is the foremost event for Human-Computer Interaction (HCI) research. TRI was thrilled to be a contributing sponsor for CHI’s 2023 hybrid conference in Hamburg, Germany, April 23–28.

Matt Lee, TRI Staff Research Scientist, presenting the ‘Understanding PHEVs’ talk

Building on TRI’s successful showing at CHI 2022, our researchers at CHI 2023 presented talks on three full papers and a poster on one late-breaking work. One full paper won an 🏆️ Honorable Mention Award, given only to the top 5% of paper submissions.

At TRI, our research focuses on amplifying human ability and making our lives safer and more sustainable. In that same vein, the work we presented at CHI this year explored how we can expand human understanding, leverage AI to help people make better decisions, and accelerate the shift to carbon neutrality. Here are the areas of research we covered at the conference:

Main Conference

Save A Tree or 6 kg of CO2? Understanding Effective Carbon Footprint Interventions for Eco-Friendly Vehicular Choices

Vikram Mohanty, Alexandre L. S. Filipowicz, Nayeli Suseth Bravo, Scott Carter, David A. Shamma

🏆️ Honorable Mention Award

Visualizing the impact of ride options with CO2 by weight

There is growing interest in eco-feedback interfaces that display environmental impact information for various products and services. In this full paper, we examine the effectiveness of different carbon-based eco-feedback interventions, including direct CO2 emissions, simpler heuristics, and more relatable CO2-equivalent activities, in the context of personal transportation decisions. We explore how emissions information influences vehicular choices and how displaying CO2 emissions, CO2 equivalencies, and other data might help people choose eco-friendly ride-sharing options.

Our studies focused on how people navigate both ride-hailing and car-rental decisions. In both scenarios, participants picked between regular and eco-friendly options, and our surveys tested different equivalencies, social features, and valence-based interventions. We found that participants were more likely to choose green rides when presented with additional emissions information, with CO2 by weight being the most effective representation. Furthermore, information framing (individual vs. collective footprint, positive vs. negative valence) also affected participants’ choices.
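To make the contrast between these framings concrete, here is a minimal Python sketch of how a ride-hailing interface might compute the two displays. The per-kilometer emission factors and the tree-uptake figure are rough public estimates chosen for illustration, not values from the paper.

```python
# Illustrative sketch of carbon eco-feedback framings for a ride option.
# The conversion factors below are rough, commonly cited estimates used
# as placeholders; they are not the values used in the study.

GRAMS_CO2_PER_KM_REGULAR = 192   # assumed average gasoline sedan
GRAMS_CO2_PER_KM_GREEN = 53      # assumed electric ride (grid-mix dependent)
KG_CO2_PER_TREE_YEAR = 21        # rough annual CO2 uptake of one mature tree

def co2_by_weight(distance_km: float, grams_per_km: float) -> float:
    """Direct framing: emissions of one ride, in kilograms of CO2."""
    return distance_km * grams_per_km / 1000

def tree_equivalent(kg_co2: float) -> float:
    """Relatable framing: tree-years of uptake needed to absorb the ride."""
    return kg_co2 / KG_CO2_PER_TREE_YEAR

distance = 30.0  # km
regular = co2_by_weight(distance, GRAMS_CO2_PER_KM_REGULAR)
green = co2_by_weight(distance, GRAMS_CO2_PER_KM_GREEN)
saved = regular - green

print(f"Regular ride: {regular:.1f} kg CO2, green ride: {green:.1f} kg CO2")
print(f"Choosing green saves {saved:.1f} kg CO2 "
      f"(~{tree_equivalent(saved):.2f} tree-years of uptake)")
```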

Autonomous is Not Enough: Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People with Visual Impairments

Paul D. S. Fink, Velin Dimitrov, Hiroshi Yasuda, Tiffany L. Chen, Richard R. Corey, Nicholas A. Giudice, Emily Sarah Sumner

Testing mid-air gestures as controls for fully autonomous vehicles (FAVs)

This full paper focuses on the potential benefits of fully autonomous vehicles (FAVs) for people who are blind and visually impaired (BVI) and how FAVs could improve their independence and autonomy. We argue that BVI people will also desire some level of control over the FAVs they use, including personalizing the vehicle’s driving style and changing the route. To achieve this level of control, we propose the use of mid-air gestural systems, which can be performed without the guiding use of vision and offer significant hygienic advantages in the context of shared FAVs.

We conducted a needs assessment and a user study with BVI participants to identify the types of vehicle control important to this demographic and the situational information that needs to be conveyed, and used these findings to design a mid-air gestural system for multisensory control. The resulting experimental interface combines ultrasound-based haptic representations of the driving environment, queryable spatialized audio descriptions, and mid-air gestures that mediate between the two. The system is designed to serve both BVI people who have previously operated traditional vehicles and people who have never driven before, representing broad and inclusive usability across the spectrum of vision loss.
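The paper describes the interaction design rather than an implementation, but a hypothetical sketch can show the shape of the mediation loop: a recognized gesture either queries the audio scene description or issues a vehicle command. All gesture names, commands, and scene objects below are invented for illustration; the actual system’s ultrasound haptics and gesture recognizer are not reproduced here.

```python
# Hypothetical sketch of a gesture-mediated interaction loop: mid-air
# gestures select between querying the environment (spoken, spatialized
# descriptions) and issuing vehicle control commands.

from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str
    bearing_deg: float  # relative direction, for spatialized audio playback
    distance_m: float

# A toy model of the driving environment a haptic display might render.
scene = [
    SceneObject("pedestrian crossing", bearing_deg=-20, distance_m=35),
    SceneObject("bus stop", bearing_deg=45, distance_m=12),
]

def describe(obj: SceneObject) -> str:
    side = "left" if obj.bearing_deg < 0 else "right"
    return f"{obj.label}, {obj.distance_m:.0f} meters ahead on the {side}"

def handle_gesture(gesture: str) -> str:
    """Dispatch a recognized mid-air gesture to a query or a control action."""
    if gesture == "point":        # query: speak the nearest object's description
        nearest = min(scene, key=lambda o: o.distance_m)
        return "AUDIO: " + describe(nearest)
    if gesture == "swipe_down":   # control: request a gentler driving style
        return "VEHICLE: set driving style to relaxed"
    if gesture == "circle":      # control: request a route change
        return "VEHICLE: reroute via preferred stop"
    return "AUDIO: gesture not recognized"

for g in ["point", "swipe_down", "circle"]:
    print(handle_gesture(g))
```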

Understanding People’s Perception and Usage of Plug-in Electric Hybrids

Matthew L. Lee, Scott Carter, Rumen Iliev, Nayeli Suseth Bravo, Monica P. Van, Laurent Denoue, Everlyne Kimani, Alexandre L. S. Filipowicz, David A. Shamma, Kate A. Sieck, Candice Hogan, Charlene C. Wu

Reducing driving-related carbon emissions is crucial for meeting climate goals, and switching to battery electric vehicles (BEVs) is one solution. However, there are challenges to the widespread adoption of BEVs in the US, including high purchase prices, limited charging infrastructure, and range anxiety. Plug-in hybrid electric vehicles (PHEVs) are a viable alternative to BEVs, as they can reduce carbon emissions while still using an internal combustion engine (ICE) for longer trips. PHEVs also have smaller batteries than BEVs, making them cheaper and requiring fewer resources to produce. However, the effectiveness of PHEVs in reducing emissions depends on driver behavior: they need to be charged regularly so that a significant proportion of miles driven are powered electrically.

In this full paper, we conducted a mixed-methods study to understand PHEV owners’ charging habits. We found that charging is well supported at home, but that away-from-home charging is challenging due to a lack of available chargers, broken chargers, hard-to-find chargers, and high costs. We also tested several design concepts for addressing these challenges. Overall, we found that while PHEVs have the potential to significantly reduce driving-related carbon emissions, further research and infrastructure improvements are needed to fully realize this potential.

Full video of the ‘Understanding People’s Perception and Usage of Plug-in Electric Hybrids’ talk
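The dependence on driver behavior can be made concrete with back-of-the-envelope arithmetic: a PHEV’s effective per-mile emissions are a weighted average of its electric and gasoline miles. The per-mile factors in this sketch are rough assumptions for illustration, not figures from the paper.

```python
# Back-of-the-envelope sketch of why PHEV emissions depend on charging
# behavior. The per-mile factors are illustrative assumptions only.

G_CO2_PER_MILE_ICE = 350        # assumed gasoline (ICE) miles
G_CO2_PER_MILE_ELECTRIC = 110   # assumed electric miles on an average grid

def phev_grams_per_mile(electric_fraction: float) -> float:
    """Average emissions for a PHEV whose driver charges often enough
    to cover `electric_fraction` of miles on battery power."""
    return (electric_fraction * G_CO2_PER_MILE_ELECTRIC
            + (1 - electric_fraction) * G_CO2_PER_MILE_ICE)

for frac in (0.0, 0.3, 0.6, 0.9):
    print(f"{frac:.0%} electric miles -> {phev_grams_per_mile(frac):.0f} g CO2/mile")

# A rarely charged PHEV emits close to an ICE vehicle; a regularly
# charged one approaches BEV-like per-mile emissions.
```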

CodeML: A Machine Learning-Assisted User Interface for Code Identification and Labeling

Francine Chen, Matthew K. Hong, Laurent Denoue, Kate S. Glazko, Emily Sarah Sumner, Yan-Ying Chen, Matthew Klenk

CodeML interface for identifying codes and labeling text snippets

Creating a fully labeled dataset from short free-text responses can be difficult and time-consuming. Past work has used greedy methods for coding text, which can overlook themes. In this late-breaking work, we present CodeML, a new interactive, non-greedy approach that uses machine learning to assist human coders in identifying a code set. CodeML proposes groups of similar concepts as candidate codes, reducing the chance of overlooking themes, and relies on human input to guide the selection and definition of codes. Our evaluation shows that CodeML outperformed a popular commercial product at identifying codes at a finer level, enabling a deeper understanding of the dataset.
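The paper does not specify CodeML’s underlying models, but the general idea of proposing groups of similar responses as candidate codes for a human to accept, rename, or reject can be sketched with off-the-shelf tools. Here, TF-IDF features and k-means clustering stand in for whatever CodeML actually uses, and the example responses are invented.

```python
# Generic sketch of ML-assisted code identification: cluster similar
# free-text responses and surface each cluster as a candidate code.
# TF-IDF + k-means is a stand-in, not CodeML's actual method.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "charger was broken when I arrived",
    "could not find a charging station downtown",
    "charging at home overnight is easy",
    "public chargers cost too much",
    "home outlet works fine for me",
    "the app showed a charger that did not exist",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Propose each cluster as a candidate code, summarized by its top terms;
# a human coder would then confirm, merge, or relabel these proposals.
terms = np.array(vectorizer.get_feature_names_out())
for c in range(kmeans.n_clusters):
    top = terms[np.argsort(kmeans.cluster_centers_[c])[::-1][:3]]
    members = [r for r, lab in zip(responses, kmeans.labels_) if lab == c]
    print(f"Candidate code {c}: {', '.join(top)}")
    for m in members:
        print(f"  - {m}")
```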

Learn more about the Human-Centered AI team here: https://www.tri.global/our-work/human-centered-ai


Toyota Research Institute

Applied and forward-looking research to create a new world of mobility that's safe, reliable, accessible and pervasive.