Intelligent technologies have begun to affect nearly every corner of our lives. At the same time, people 65 years and older have become the fastest-growing age group worldwide. While a future enabled by smart tools promises many benefits, older adults have typically been left out of the product design conversation — a conversation that has largely overlooked their capabilities, limitations, and expectations, and the usability issues they face. That exclusion is compromising their potential to use these advances to extend their capabilities, improve safety, reduce workload, and enhance convenience.
The COVID-19 pandemic is highlighting this gap and the need to close it. For example, people with underlying health conditions are reported to be at greater risk of severe illness from the novel coronavirus, and older adults have been strongly encouraged to stay at home. This restriction significantly constrains their mobility, autonomy and social interactions, and may accelerate health challenges through social isolation and depression.
During this difficult time, technology can support activities such as: remote contact with physicians via telemedicine and uploads of health data from wearables; contact tracing; real-time access to information about the crisis via wirelessly connected laptops, mobile phones and tablets; and communication with relatives and friends during self-quarantine or hospitalization.
But many older adults are not sure how and where to start when it comes to using new technologies. For example, my students and I help older individuals solve technological problems they experience, as part of the Tech Teams community service program (hosted by Purdue’s Center on Aging and the Life Course). During one of the sessions, a senior citizen in her 70s told me, “Technology has just completely passed us by, but we want to keep up with it.”
Older adults tell us that they want any device in their work, home, leisure, health and transportation settings to be useful in helping them to accomplish a task, as well as easy to use. Design strategies that can help them overcome challenges include: presenting information redundantly, using more than one sensory channel; minimizing the number of steps required to complete a task while using a device; achieving consistency throughout a system; enabling adaptive features; and involving older adults in the design process via participatory design.
Our NHanCE (Next-generation Human-systems and Cognitive Engineering) Research Lab is working to understand how users, primarily older adults, interact with next-generation technology in a wide range of complex transportation, work and home environments, including driving, aviation, manufacturing and healthcare, and to enhance their performance through interface (re)design.
Our primary interest is in the autonomous, or self-driving, vehicle environment. The testbed we use for driving-related studies is a simulator, called miniSim™, developed by the National Advanced Driving Simulator at the University of Iowa. This system consists of three 48-inch monitors, a center dashboard, a steering wheel and foot pedals, along with a four-camera video capture system that enables us to record the in-vehicle behavior of participants. We also have a remote eye-tracking system to trace the eye movements of drivers during simulations.
Autonomous driving has the potential to revolutionize transportation for older adults, enabling them to maintain their independence. But for at least the next decade, only intermediate levels of vehicle automation are expected to be widely available. These semi-autonomous systems will have limited functionality and can occasionally malfunction, requiring manual intervention. For example, active lane-keeping systems use sensors to detect and follow lane markers. But when lane dividers are not present, this feature cannot operate and will need a driver to take control.
Age-related perceptual and cognitive challenges may decrease older adults’ ability to quickly and successfully recover control. Our lab is capturing pre- and post-takeover metrics to quantify how well senior drivers resume control of an autonomous driving system when presented with various combinations of visual, auditory and tactile “takeover” warnings, and lead-time notifications. This data will help to guide design decisions for in-vehicle warning systems and displays.
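To make the metrics concrete, here is a minimal sketch of two measures commonly derived from simulator logs: takeover time (warning onset to the driver's first control input) and post-takeover lateral deviation (how steadily the driver holds the lane after resuming control). The function names and the sample numbers are illustrative, not the lab's actual pipeline.

```python
from statistics import mean

def takeover_time(warning_onset_s, first_input_s):
    """Seconds between the takeover warning and the driver's first control input."""
    return first_input_s - warning_onset_s

def post_takeover_lateral_deviation(lane_offsets_m):
    """Mean absolute offset from lane center (meters) after the driver resumes control."""
    return mean(abs(x) for x in lane_offsets_m)

# Example trial (made-up numbers): warning at t = 12.0 s, first steering
# input at t = 14.5 s, then lane-center offsets sampled after takeover.
tot = takeover_time(12.0, 14.5)                                 # 2.5 s
dev = post_takeover_lateral_deviation([0.1, -0.3, 0.2, -0.1])   # 0.175 m
print(f"takeover time: {tot:.1f} s, lateral deviation: {dev:.3f} m")
```

Comparing these numbers across warning modalities (visual, auditory, tactile) and lead times is one way to quantify which combinations help older drivers recover control fastest.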
Overall, we are trying to quantify the process that older adults undergo when learning a new, complex task. We aim to develop breakthrough algorithms for measuring the influence of cognitive, physical, and non-chronological age factors that can predict older adults' task performance with technology better than chronological age alone.
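The core idea — that functional measures can out-predict chronological age — can be sketched with a simple comparison of two one-predictor linear fits. Everything below is synthetic and purely illustrative: the "functional" composite and the performance scores are made-up numbers, not study data.

```python
def r_squared(xs, ys):
    """R^2 of the least-squares line predicting ys from xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Synthetic participants: chronological age, a composite functional score
# (e.g., working-memory and reaction-time measures), and task performance.
age        = [66, 70, 72, 75, 78, 81]
functional = [0.9, 0.6, 0.8, 0.5, 0.7, 0.3]
perf       = [0.85, 0.55, 0.75, 0.50, 0.65, 0.35]

print(f"R^2 (age):        {r_squared(age, perf):.2f}")
print(f"R^2 (functional): {r_squared(functional, perf):.2f}")
```

On this toy data, the functional composite explains far more of the variance in performance than age does — the pattern a predictive model of "non-chronological age" would aim to exploit at scale.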
COVID-19 is accelerating the rate at which we seek to achieve design inclusivity and make technology more accessible to a wide range of demographics. Also, as a result of this pandemic, we (as researchers and designers) are being pushed to think of new and creative non-contact ways to collect critical data needed to inform the design of next-generation technologies.
In the United States, there were 43 million older adults in 2012, and by 2050, the U.S. Census Bureau expects this number to nearly double to 84 million. This population trend means older adults will encounter and need to interact with increasingly intelligent technologies throughout later stages of life.
My hope is that we can quickly identify the barriers that older adults face in learning and using technology, to reduce the generational digital divide and give them a higher quality of life.
Brandon Pitts, PhD
Assistant Professor of Industrial Engineering
College of Engineering, Purdue University