A vibrating steering wheel is an effective way to keep a driver’s eyes safely on the road by providing an additional means of conveying directions from a car’s navigation system, researchers at Carnegie Mellon University and AT&T Labs have shown. The study, one of the first to evaluate combinations of audio, visual and haptic feedback for route guidance, found that younger drivers in particular were less distracted by a navigation system’s display screen when they received haptic feedback from the vibrating steering wheel. For older drivers, the haptic feedback reinforced the auditory cues they normally prefer.

Though the haptic steering wheel generally improved driver performance and safety, the study’s findings suggest that simply giving the driver additional sensory inputs isn’t always optimal. That is particularly true for older drivers, because the additional sensory feedback can strain the brain’s capacity to process it.

“Our findings suggest that, as navigation systems become more elaborate, it would be best to personalize the sensory feedback system based, at least in part, on the driver’s age,” said SeungJun Kim, systems scientist in Carnegie Mellon’s Human-Computer Interaction Institute (HCII).

The findings will be presented June 21 at the International Conference on Pervasive Computing in Newcastle, England.

Vibrating steering wheels already are used by some car makers to alert drivers to such things as road hazards. But the haptic steering wheel under development by AT&T is capable of unusually nuanced pulsations and thus can convey more information. Twenty actuators on the rim of the AT&T wheel can be fired in any order; in this study, firing them in a clockwise sequence told a driver to turn right, while a counterclockwise sequence signaled a left turn (see the sketch at the end of this article).

“By using these types of vibration cues, we are taking advantage of what people are already familiar with, making them easier to learn,” explained Kevin A. Li, a researcher with AT&T’s user interface group in Florham Park, N.J.

Kim and fellow HCII scientists have developed methods for measuring the performance, attentiveness and cognitive load of drivers using a suite of sensors. For this study, they added the experimental AT&T steering wheel to their driving simulator.

As part of a research thrust of the National Science Foundation-sponsored Quality of Life Technology Center, the researchers were particularly interested in learning whether multi-modal feedback would improve the driving performance of elderly drivers. The number of drivers over the age of 65 is growing rapidly; improving the performance of older drivers despite the progressive decline of their vision, hearing and motor abilities can help them maintain their mobility and independence.

The study’s subjects included 16 drivers ages 16-36 and 17 drivers over the age of 65. In the HCII simulator, they drove a course that included various traffic lights, stop signs and pedestrians while the researchers monitored their heart rate, pupil size, blink rate, brain wave activity and other measures of attention and cognitive load.

The researchers found that the proportion of time a driver’s eyes were off the road was significantly lower with the combination of auditory and haptic feedback than with the audio and visual feedback typical of most conventional GPS systems: 4 percent less for older drivers and 9 percent less for younger drivers.
Combining all three modalities (audio, visual and haptic) significantly reduced eyes-off-the-road time for the younger drivers, but not for the older drivers. Kim said this may have to do with driver preference: self-reports showed that older drivers favored audio feedback, while younger drivers relied more on visual feedback. But the researchers also found that combining all three modalities did not reduce the cognitive workload of older drivers, in contrast to the result for younger drivers. They concluded that designers of navigation systems for older drivers may need to concentrate on reducing the driver’s cognitive burden rather than on resolving issues of divided attention.

“We are very excited about the benefits of adding haptic feedback to traditional audio-visual interfaces,” said Anind K. Dey, associate professor in HCII. “In combination with our ability to measure cognitive load, we can not only design interfaces that people like and make them more efficient, but that also allow them to more easily focus on their task at hand.”

In addition to Dey, Kim and Li, the research team included Jodi Forlizzi, associate professor in HCII, and Jin-Hyuk Hong, a post-doctoral researcher in HCII. General Motors, the National Science Foundation and the Quality of Life Technology Center sponsored the study.

The Human-Computer Interaction Institute is part of Carnegie Mellon’s acclaimed School of Computer Science. Follow the school on Twitter @SCSatCMU.
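The article describes the wheel’s directional cue only at a high level. Purely as an illustration, here is a minimal Python sketch of that firing pattern. The 20-actuator count and the clockwise/counterclockwise mapping come from the article; the fire stub, the pulse timing and the function names are hypothetical, since AT&T’s actual control interface is not described.

```python
import time

NUM_ACTUATORS = 20       # actuators spaced around the wheel rim, per the article
PULSE_INTERVAL_S = 0.05  # hypothetical delay between pulses; not specified in the study

def fire(actuator_index: int) -> None:
    """Placeholder for the hardware call that pulses one actuator.

    The real wheel's driver API is not public; printing stands in here.
    """
    print(f"pulse actuator {actuator_index}")

def signal_turn(direction: str) -> None:
    """Sweep the rim actuators in sequence: clockwise for a right turn,
    counterclockwise for a left turn, as described in the study."""
    order = range(NUM_ACTUATORS)                # clockwise sweep
    if direction == "left":
        order = reversed(range(NUM_ACTUATORS))  # counterclockwise sweep
    for idx in order:
        fire(idx)
        time.sleep(PULSE_INTERVAL_S)

signal_turn("right")  # clockwise sweep -> "turn right"
```

Sweeping the pulses around the rim in a single rotational direction is what lets the driver feel the cue as a turn instruction without glancing at a screen.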
Posted by Janine E. Mooney, Editor
April 24, 2012