Oculus, the creators of the Oculus Rift virtual reality headset, which will finally launch as a consumer unit next month, have this month released a new avatar lip synchronisation plug-in for Unity, called OVRLipSync.
The OVRLipSync plug-in automatically detects speech in an audio stream and converts it into mouth movements on a virtual reality character within Unity, with no user interaction required.
The new Oculus plug-in was unveiled at Unity's Vision VR/AR Summit 2016, and drives the animation using the audio stream together with a set of values called 'visemes'. Check out the demonstration video below to see the new Oculus Unity plug-in in action.
Oculus explains a little more in the documentation released with the plug-in:
OVRLipSync is an add-on plugin and set of scripts used to sync avatar lip movements to speech sounds from a canned source or microphone input. OVRLipSync requires Unity 5.x Professional or Personal or later, targeting Android or Windows platforms, running on Windows 7, 8, or 10. OS X 10.9 and later are also currently supported.
Our system currently maps to 15 separate viseme targets: sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, and ou. These visemes correspond to expressions typically made by people producing the speech sound by which they’re referred, e.g., the viseme sil corresponds to a silent/neutral expression, PP appears to be pronouncing the first syllable in “popcorn,” FF the first syllable of “fish,” and so forth.
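In practice, a lip-sync system like this produces one frame of analysis at a time: a weight for each of the 15 viseme targets, which is then applied to the avatar's facial blend shapes. The sketch below illustrates that data flow in Python as a stand-in; OVRLipSync itself runs inside Unity (C#), and the `set_blend_shape` callback here is a hypothetical placeholder, not part of the real API.

```python
# Hypothetical sketch: applying one frame of 15 viseme weights to named
# blend shapes. The viseme names are the 15 targets listed by Oculus;
# everything else here is illustrative, not the OVRLipSync API.

VISEMES = ["sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
           "nn", "RR", "aa", "E", "ih", "oh", "ou"]

def apply_visemes(weights, set_blend_shape):
    """Map one analysis frame (15 floats in [0, 1]) onto avatar blend shapes.

    `set_blend_shape` is a hypothetical callback taking (name, weight).
    """
    if len(weights) != len(VISEMES):
        raise ValueError("expected one weight per viseme target")
    for name, weight in zip(VISEMES, weights):
        set_blend_shape(name, weight)

# Example: a mostly silent frame with a hint of the "PP" (popcorn) shape.
frame = [0.9, 0.1] + [0.0] * 13
applied = {}
apply_visemes(frame, lambda name, w: applied.__setitem__(name, w))
```

Because the weights blend continuously between shapes rather than snapping to one mouth pose, intermediate frames (e.g. partway between "aa" and "oh") animate smoothly.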
For more information on the new OVRLipSync plugin for Unity, jump over to the Oculus website via the link below.
Source: Road To VR, Oculus