University of Cincinnati (Ohio) philosophy and psychology graduate assistant Luis Favela studies how people perceive their environment and how those perceptions inform their judgments. Inspired by the CDC prediction that more than 6 million Americans aged 40 and older will be affected by blindness or low vision by 2030, Favela wondered whether an infrared (IR) light-based handheld device, developed by University of Reading (Reading, Berkshire, England) cybernetics students Tom Froese and Adam Spiers [1], could help such people.
|The Enactive Torch uses infrared sensors to detect objects in front of it and, upon detection, send a vibrational warning to the wearer. (Both photos courtesy of Colleen Kelley, University of Cincinnati)|
The device, called the Enactive Torch (ET), uses two nonlinear-response IR rangefinders (a Sharp GP2D12, covering 8–80 cm, and a Sharp GP2Y0A02YK0F, covering 20–150 cm) located at the front to measure the distance to objects ahead. That distance information drives a vibrational motor attached to the wearer's wrist, which emits a buzz similar to a cellphone alert. This lets the user feel whether something is in front of the device and, from the vibration's intensity, how near or far it is, Favela told BioOptics World. The gentle buzz grows stronger as the torch nears the object, letting the user judge where to move based on a kind of virtual touch.
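The distance-to-vibration mapping can be sketched in a few lines. The following is an illustrative model, not the Enactive Torch's actual firmware: the power-law fit for the GP2D12's nonlinear voltage response is a commonly used empirical approximation (the constants are assumptions, not Sharp specifications), and the linear intensity ramp is simply one plausible way to make the buzz strengthen as an object nears.

```python
# Illustrative sketch (not the ET firmware): convert a Sharp GP2D12 IR
# rangefinder's nonlinear analog output to an approximate distance, then
# map that distance to a vibration intensity that is strongest when the
# object is close and fades to zero at the sensor's far limit.

def gp2d12_distance_cm(voltage: float) -> float:
    """Approximate distance from output voltage using a common
    empirical power-law fit for the GP2D12 (an assumption here,
    not a datasheet formula)."""
    return 27.86 * voltage ** -1.15

def vibration_intensity(distance_cm: float,
                        near: float = 8.0,
                        far: float = 80.0) -> float:
    """Map a distance within the sensor's 8-80 cm range to a 0..1
    motor intensity: 1.0 at or inside `near`, 0.0 at or beyond `far`,
    ramping linearly in between."""
    if distance_cm <= near:
        return 1.0
    if distance_cm >= far:
        return 0.0
    return (far - distance_cm) / (far - near)
```

In a real device loop, the intensity would be scaled to a PWM duty cycle for the wrist-mounted motor; the linear ramp is just the simplest choice, and a perceptually tuned (e.g., logarithmic) mapping might feel more natural.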
|Luis Favela observes Amon as she uses the Enactive Torch during a demonstration of Favela's experiment.|
Favela conducted an experiment to determine the value of such a tool. He asked 27 participants with normal or corrected-to-normal vision to make perceptual judgments about their ability to pass through an opening a few feet in front of them without needing to shift their normal posture. He tested the judgments they made in three modes: using only their vision, using a cane while blindfolded, and using the ET while blindfolded. The idea was to compare judgments made with vision against those made by touch.
Favela thought that vision-based judgments would be the most accurate because vision tends to be most people's dominant perceptual modality. But the data revealed that all three modalities were equally accurate. "People can carry out actions just about to the same degree whether they're using their vision or their sense of touch. I was really surprised," he says.
Favela plans additional experiments with the ET that require more complicated judgments, such as the ability to step over an obstacle or climb stairs. He, Froese, and Spiers have also discussed building smaller versions of the ET, possibly with other forms of feedback such as skin stretch, he says.
Favela presented his research "Augmenting the Sensory Judgment Abilities of the Visually Impaired" at the American Psychological Association's (APA) annual convention, held Aug. 7-10, 2014, in Washington, DC.
1. T. Froese, M. McGann, W. Bigge, A. Spiers, and A. Seth, IEEE Trans. Haptics, 5, 365–375 (2012); doi:10.1109/TOH.2011.57.