Vestibular sensory substitution using tongue electrotactile display

Yuri P. Danilov, Mitchel E. Tyler and Kurt A. Kaczmarek

Sensory substitution

Sensory substitution systems provide their users with environmental information through a human sensory channel different from that normally used. For example, a person who is blind may use a long cane to detect obstacles while walking and Braille or raised-line graphics to read information normally received visually. A person who is deaf may read lips to understand speech. A person without vision or hearing may use a method called Tadoma, placing his or her hands over the face and neck of a speaker to understand speech [1]. Persons with an impaired vestibular (balance) system use their hands, not primarily for mechanical support, but to sense how they move relative to their environment. Electronic sensors and tactile (touch) displays enable more sophisticated applications for sensory substitution. In this chapter we will briefly review visual and auditory sensory substitution, as well as tactile feedback in robotic systems, followed by an extended discussion of vestibular sensory substitution. A more detailed discussion of vision substitution is provided in the chapter by Ptito in this book.
For the brain to correctly interpret information from a sensory substitution device, it is not necessary for the information to be presented in the same form used by the natural sensory system. One needs only to accurately encode the information in an alternate channel. With training, the brain learns to appropriately interpret that information and utilize it to function as it would with data from the natural sense.
Reported attempts to present spatial visual information via tactile displays date from at least the early 1900s, with serious scientific study starting in the 1960s [2–4]. These systems operate by using an electronic camera or matrix of light sensors to control the stimulation intensity on a spatially corresponding matrix of electrical or mechanical tactile stimulators on the surface of the skin. The user perceives tactile shapes on the skin having the same shape as the visual image recorded by the camera. Blind and blindfolded users are then able to identify simple objects in a high-contrast environment, and have reported visual concepts such as distal attribution (i.e., perceptually localizing the target object out in front of the camera, rather than on the tactile display proper) [5], looming, and perspective [6], as well as optic flow phenomena [7].
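The camera-to-stimulator mapping described above can be sketched in a few lines of code. This is an illustrative example only, not the implementation used in any of the cited systems; the grid dimensions, 0–255 intensity range, and block-averaging scheme are assumptions chosen for clarity.

```python
def image_to_stimulator_levels(image, grid_rows, grid_cols, max_level=255):
    """Downsample a grayscale image (a list of rows of 0-255 pixel values)
    to a spatially corresponding grid of tactile stimulation intensities.

    Each stimulator's drive level is the mean brightness of the image
    block it covers, so brighter regions produce stronger stimulation.
    (Hypothetical scheme for illustration; real systems may use other
    mappings, e.g. edge-enhanced or thresholded intensities.)
    """
    h, w = len(image), len(image[0])
    levels = []
    for r in range(grid_rows):
        row = []
        r0, r1 = r * h // grid_rows, (r + 1) * h // grid_rows
        for c in range(grid_cols):
            c0, c1 = c * w // grid_cols, (c + 1) * w // grid_cols
            block = [image[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            mean = sum(block) / len(block)
            row.append(round(mean * max_level / 255))
        levels.append(row)
    return levels

# Example: a 4x4 image with a bright right half maps onto a 2x2 grid.
image = [[0, 0, 255, 255]] * 4
print(image_to_stimulator_levels(image, 2, 2))  # [[0, 255], [0, 255]]
```

The key property, as the text notes, is spatial correspondence: the tactile pattern on the skin preserves the shape of the camera image, which is what allows users to perceive the stimulation as an image rather than as arbitrary touch.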