Google revealed this week that it is developing a way to improve audio when using its Google Cardboard 360-degree virtual reality headset.
As you might already know, the Google Cardboard VR headset provides an affordable way to enjoy 360-degree imagery using your smartphone as the processor. But while the imagery is 360-degree, the sound the user hears is normally just stereo or worse, which undermines the immersion the headset is trying to achieve.
Google is now developing a new version of its Google Cardboard software to support spatial audio, explaining more on its Android Developers Blog:
Human beings experience sound in all directions—like when a fire truck zooms by, or when an airplane is overhead. Starting today, the Cardboard SDKs for Unity and Android support spatial audio, so you can create equally immersive audio experiences in your virtual reality (VR) apps. All your users need is their smartphone, a regular pair of headphones, and a Google Cardboard viewer.
Sound the way you hear it: Many apps create simple versions of spatial audio—by playing sounds from the left and right in separate speakers. But with today’s SDK updates, your app can produce sound the same way humans actually hear it. For example:
– The SDK combines the physiology of a listener’s head with the positions of virtual sound sources to determine what users hear. For example: sounds that come from the right will reach a user’s left ear with a slight delay, and with fewer high frequency elements (which are normally dampened by the skull).
– The SDK lets you specify the size and material of your virtual environment, both of which contribute to the quality of a given sound. So you can make a conversation in a tight spaceship sound very different than one in a large, underground (and still virtual) cave.
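The first cue described above, the slight delay at the far ear, can be sketched numerically using Woodworth's classic approximation from acoustics. This is standard textbook physics to illustrate the idea, not the Cardboard SDK's actual code:

```java
// Sketch of the interaural time difference (ITD) cue described above.
// Woodworth's approximation: ITD ≈ (r / c) * (sin(theta) + theta),
// with head radius r ≈ 0.0875 m and speed of sound c ≈ 343 m/s.
// Illustrative acoustics, not the Cardboard SDK's internal implementation.
public class InterauralDelay {
    static final double HEAD_RADIUS_M = 0.0875;   // average adult head
    static final double SPEED_OF_SOUND_M_S = 343.0;

    // Extra time (in seconds) the wavefront needs to reach the far ear,
    // for a source at azimuth theta (radians, 0 = straight ahead).
    static double itdSeconds(double theta) {
        return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (Math.sin(theta) + theta);
    }

    public static void main(String[] args) {
        // A source directly to one side (90 degrees) lags the far ear by
        // roughly 0.66 ms -- tiny, but enough for the brain to localize it.
        System.out.printf("ITD at 90 degrees: %.3f ms%n",
                itdSeconds(Math.PI / 2) * 1000.0);
    }
}
```

The head-shadow filtering of high frequencies mentioned in the same bullet would be modeled separately, typically as a direction-dependent low-pass filter.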
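The second bullet, room size and material, maps onto classic room acoustics. A minimal sketch using Sabine's reverberation formula (illustrative physics with hypothetical room numbers, not the SDK's API) shows why a cramped spaceship cabin and a cavern sound so different:

```java
// Sketch: why room size and material change the sound, per the bullet above.
// Sabine's formula: RT60 = 0.161 * V / (S * a), where V is room volume (m^3),
// S is total surface area (m^2), and a is the average absorption coefficient.
// Illustrative acoustics only; the SDK exposes size/material as parameters.
public class RoomReverb {
    // Time (seconds) for sound in the room to decay by 60 dB.
    static double rt60(double volumeM3, double surfaceM2, double absorption) {
        return 0.161 * volumeM3 / (surfaceM2 * absorption);
    }

    public static void main(String[] args) {
        // Hypothetical numbers: a small padded spaceship cabin versus a huge
        // stone cave with hard, reflective walls.
        double cabin = rt60(30.0, 60.0, 0.3);       // short, dry decay
        double cave  = rt60(20000.0, 5000.0, 0.05); // long, echoing tail
        System.out.printf("cabin RT60: %.2f s, cave RT60: %.2f s%n", cabin, cave);
    }
}
```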
Optimized for today’s smartphones: We built today’s updates with performance in mind, so adding spatial audio to your app has minimal impact on the primary CPU (where your app does most of its work). We achieve these results in a couple of ways:
– The SDK is optimized for mobile CPUs (e.g. SIMD instructions) and actually computes the audio in real-time on a separate thread, so most of the processing takes place outside of the primary CPU.
– The SDK allows you to control the fidelity of each sound. As a result, you can allocate more processing power to critical sounds, while de-emphasizing others.
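The separate-thread point above is a standard real-time audio pattern. A generic producer/consumer sketch (a hedged illustration of the idea, not the SDK's internals) shows the heavy per-sample math happening off the main thread:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the "audio on its own thread" idea from the bullet above --
// a generic producer/consumer pattern, not the Cardboard SDK's internals.
public class AudioThreadSketch {
    static final int BLOCK_SIZE = 256;
    static final double SAMPLE_RATE = 48000.0;

    // Render one block of a 440 Hz sine wave starting at sample offset 'start'.
    static float[] renderBlock(int start) {
        float[] block = new float[BLOCK_SIZE];
        for (int i = 0; i < BLOCK_SIZE; i++) {
            block[i] = (float) Math.sin(2 * Math.PI * 440.0 * (start + i) / SAMPLE_RATE);
        }
        return block;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<float[]> ready = new ArrayBlockingQueue<>(8);

        // Audio thread: does the per-sample work away from the main thread.
        Thread audio = new Thread(() -> {
            for (int n = 0; n < 4; n++) {
                try {
                    ready.put(renderBlock(n * BLOCK_SIZE));
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        audio.start();

        // Main thread just consumes finished blocks, as an app's game loop would.
        for (int n = 0; n < 4; n++) {
            System.out.println("consumed block of " + ready.take().length + " samples");
        }
        audio.join();
    }
}
```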
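The per-sound fidelity control described in the last bullet can be sketched as a simple budget scheme. The names and policy below are hypothetical (the SDK exposes its own quality settings): critical sounds get full spatialization, the rest fall back to cheaper stereo panning.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of per-sound fidelity budgeting, as the bullet above describes.
// Names and policy are hypothetical -- the SDK's actual quality API differs.
public class FidelityBudget {
    enum Quality { FULL_SPATIAL, STEREO_PAN }

    // Give the 'budget' highest-priority sounds full spatialization;
    // everything else is rendered with cheaper stereo panning.
    static Quality[] assign(double[] priorities, int budget) {
        Integer[] order = new Integer[priorities.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(i -> -priorities[i]));

        Quality[] out = new Quality[priorities.length];
        for (int rank = 0; rank < order.length; rank++) {
            out[order[rank]] = rank < budget ? Quality.FULL_SPATIAL : Quality.STEREO_PAN;
        }
        return out;
    }

    public static void main(String[] args) {
        // Dialogue (0.9) and a nearby explosion (0.8) matter most; ambience less so.
        Quality[] q = assign(new double[] {0.9, 0.2, 0.8, 0.1}, 2);
        System.out.println(Arrays.toString(q));
    }
}
```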
For more information on the new spatial audio support for Google Cardboard, jump over to the Google Android Developers Blog via the link below.