The continuing evolution of VR keeps opening new realms of interactivity and immersion, and audio is an often overlooked ingredient in both. This project uses convolution, an audio effect normally used to recreate the acoustics of real spaces, in an unusual way.
The environment contains multiple orbs: some are sound sources, others are ‘convolution kernels’ (impulse responses). The sound source orbs can be time-stretched, pitch-shifted, and looped from various start points and lengths using the controllers. Each source orb reacts with any convolution orb in close proximity, producing a new hybrid sound made up of the frequency components shared by the source and the convolution orb. The player’s microphone is also active, so they can use their voice to explore the sounds.
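The actual engine runs in Max MSP, but the core idea can be sketched in a few lines of Python: convolution multiplies the spectra of the two signals, so only frequency content present in both the source orb and the impulse-response orb survives strongly. The function name, the maximum interaction distance, and the inverse-distance wet/dry mix below are illustrative assumptions, not the project's exact mapping.

import numpy as np
from scipy.signal import fftconvolve

SR = 48000  # assumed sample rate

def render_orb_pair(source: np.ndarray, ir: np.ndarray, distance: float,
                    max_distance: float = 2.0) -> np.ndarray:
    """Convolve a source orb with an IR orb, mixed in by proximity (sketch)."""
    if distance >= max_distance:
        return source                           # orbs too far apart: dry signal only
    wet = fftconvolve(source, ir, mode="full")[: len(source)]
    wet /= np.max(np.abs(wet)) + 1e-9           # normalise to avoid clipping
    mix = 1.0 - distance / max_distance         # closer orbs -> more of the hybrid sound
    return (1.0 - mix) * source + mix * wet

# Example: a noisy stand-in source convolved with a short decaying impulse response
rng = np.random.default_rng(0)
source = rng.standard_normal(SR)                        # 1 s of noise
ir = np.exp(-np.linspace(0, 8, SR // 4)) * rng.standard_normal(SR // 4)
out = render_orb_pair(source, ir, distance=0.5)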
The audio engine was created in Max MSP, with the IRCAM Spat library handling convolution and binaural rendering. Unity was used for the mapping, tracking, visual interface, and interactions, and UnityOSC sends control data to Max MSP for processing.
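As a rough illustration of that data flow, the sketch below uses python-osc in place of UnityOSC (which is C#). The OSC addresses, port, and argument layout are hypothetical stand-ins for whatever namespace the Unity side actually sends to Max MSP.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)     # assumed Max MSP udpreceive port

# Per-frame orb state: position drives the proximity-based convolution mixing
client.send_message("/orb/3/position", [0.4, 1.2, -0.7])

# Controller gestures mapped to playback parameters of a source orb
client.send_message("/orb/3/stretch", 1.5)       # time-stretch factor
client.send_message("/orb/3/pitch", -3.0)        # pitch shift in semitones
client.send_message("/orb/3/loop", [0.25, 0.5])  # loop start / length (normalised)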