We know that our senses do not operate in isolation or in parallel, but in concert: What we sense through touch is informed by what we see ourselves touch; what we see is shaped by what we hear from the same location. Sensory augmentation lets us explore the potential of the human mind, which in turn creates new potential for art and design.
Here we probe the process of constructing perception by replacing one sensory modality (e.g., sight) with another (e.g., sound or touch). By determining how the 'hearing' brain makes sense of light that would otherwise fall on the eyes, we can directly explore the relationship between interaction and sensation.
Furthermore, sensory substitution offers the remarkable potential of enabling the sighted to experience their visual world in a fundamentally new way, and the visually impaired to navigate unfamiliar environments without constraint. While sensory substitution has a long history, it continues to fall short because most previous approaches use a computer to recognise visual objects first. Unfortunately, no computer system can yet recognise a simple flower in a meadow with the same proficiency as the humble bumblebee. Our approach is different. By building on our biological research on human and bee perception, we let the brain do what it evolved to do: learn to perceive.