Los Angeles, CA (May 8, 2018)—“Five years ago, VR was still a pipe dream,” commented Jonnie Ross, co-founder, with Cosmo Scharf, of the annual VRLA virtual reality expo. VRLA started in 2015 with a meet-up of about 150 people; this year’s two-day conference, May 4-5 at the L.A. Convention Center, featured more than 200 exhibitors, sponsors and media partners.
“We’ve gravitated toward VR and immersive computing for different reasons, but we share this common desire to use technology to change our reality for the better,” said Ross, introducing the first day’s keynote speakers.
VR has come a long way in those five years, but “this race is not a sprint, it’s a marathon,” according to Hugo Swart, Qualcomm’s team leader for XR (virtual, augmented and mixed reality—VR, AR and MR—all fall under the XR, or extended reality, umbrella). Qualcomm is now on its third generation of XR-enabling microprocessors, Snapdragon 845, which allows 6DOF, or six degrees of freedom, in XR—that is, rotational head tracking (yaw, pitch and roll) plus positional movement forward/backward, up/down and left/right.
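In practice, a 6DOF pose is nothing more than three positional coordinates plus three rotational angles, updated every frame. A minimal sketch of such a pose (the field names are illustrative only, not Qualcomm's API):

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # Three translational degrees of freedom (meters).
    x: float  # left/right
    y: float  # up/down
    z: float  # forward/backward
    # Three rotational degrees of freedom (degrees).
    yaw: float    # turning the head left/right
    pitch: float  # tilting up/down
    roll: float   # tilting ear-to-shoulder

head = Pose6DOF(x=0.0, y=1.7, z=0.0, yaw=15.0, pitch=-5.0, roll=0.0)
```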
Noting XR technology’s parallels with smartphones, which iterated through better processors, better displays, improved power consumption, more GPU power and more content during their evolution from a “brick” to today’s devices, he continued, “It’s a multi-year cycle where we get better and better and then, over the course of five, 10 years, we get to something truly amazing.”
There are three key pillars to immersion, said Swart: interactivity, video and audio. “In VR, for the user to feel immersed, sound needs to match exactly what you see, and what you don’t see.” VR audio has evolved from stereo to object-based audio and now to 3D spatial audio with ambisonics, he said, revealing that Qualcomm recently released its 3D Audio Plugin developer kit.
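For readers unfamiliar with ambisonics: a mono source is encoded into a small set of spherical-harmonic channels that describe the full sound field, which can later be rotated with the listener's head and decoded to headphones or speakers. A rough first-order (B-format) encode, purely for illustration and not a reflection of the internals of Qualcomm's plugin:

```python
import numpy as np

def encode_first_order(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order B-format (FuMa W, X, Y, Z).

    Azimuth is measured counter-clockwise from straight ahead,
    elevation upward from the horizontal plane.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono * (1.0 / np.sqrt(2.0))      # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)   # front/back
    y = mono * np.sin(az) * np.cos(el)   # left/right
    z = mono * np.sin(el)                # up/down
    return np.stack([w, x, y, z])

# A test tone placed 45 degrees to the listener's left, slightly above ear level.
signal = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
bformat = encode_first_order(signal, azimuth_deg=45, elevation_deg=10)
```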
Audio within the XR experience also includes voice commands, said Swart: “You want to interact in the virtual world with your voice.” Qualcomm has leveraged background noise elimination, multi-mic capture and other features gleaned from its work with smartphones to ensure clean voice input to NLU algorithms, he said. NLU—natural language understanding—falls under the umbrella of AI or artificial intelligence.
Out on the show floor in the South Hall, Greg Morgenstein, CTO, and Matt Marrin, CEO, co-founders of Hear360, had some news—the company’s 8ball “omni-binaural” microphone has just been awarded a US patent. The donut-shaped device, which captures eight channels of audio via four pairs of microphones, incorporates a stand mount that allows it to be positioned directly beneath—and therefore out of sight of—a VR camera.
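Hear360 has not published its player's internals, but the general omni-binaural idea is to crossfade between the two recorded binaural pairs nearest the listener's current head yaw. A sketch of that blending, with a simple linear weighting assumed for illustration only:

```python
import numpy as np

# The 8ball records four binaural (L/R) pairs facing 0, 90, 180 and 270 degrees.
PAIR_ANGLES = np.array([0.0, 90.0, 180.0, 270.0])

def blend_pairs(pairs, head_yaw_deg):
    """pairs: array of shape (4, 2, n_samples) -- four stereo recordings.
    Returns one stereo signal for the given head yaw (degrees)."""
    yaw = head_yaw_deg % 360.0
    lower = int(yaw // 90)              # pair just below the head direction
    upper = (lower + 1) % 4             # next pair around the circle
    frac = (yaw - PAIR_ANGLES[lower]) / 90.0
    return (1.0 - frac) * pairs[lower] + frac * pairs[upper]
```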
The 8ball mic began shipping at the end of February and has already reached more than 100 customers, Marrin reported. The package includes a software suite that, with the mic, provides an end-to-end workflow for recording and streaming spatial audio. The H360player works on major web browsers and Android mobile devices, while 8ball audio works natively with SamsungVR, and can also be encoded and exported for use in Facebook, YouTube and Unity.
Dirac Research, the Sweden-based digitally optimized sound solutions provider founded 15 years ago, initially offered room correction products before expanding into automotive and smartphone audio (250 million smartphones reportedly ship annually with Dirac products embedded). Its spatial audio business unit was formed less than a year ago, reported Nadeem Firasta, VP of product strategy and business development for North America, and has just begun to productize the fruits of its research. VRLA was only the third public showing for the new Dirac VR product following its launch at CES in Las Vegas.
Central to Dirac’s approach to spatial audio is its dynamic HRTF (head-related transfer function). Rather than attempt to individualize HRTFs, a time-consuming, resource-intensive and costly process typically involving a dummy head, Dirac has built a library of measurements of real heads, captured to one degree of angular resolution, according to Firasta; the company then extracted the essential parameters and added its secret sauce. Leveraging the company’s background in smartphone CPU optimization and memory usage, the newly available second-generation Dirac VR 3D audio platform provides localization with smooth rendering, even under extreme head-tracked movement, in both the horizontal and vertical planes.
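Conceptually, a dynamic HRTF renderer picks (or interpolates) the filter pair matching the source's head-relative direction on every head-tracker update and convolves the source through it. The sketch below uses a plain nearest-neighbor lookup into a one-degree HRIR table; the table layout and function names are assumptions for illustration, not Dirac's parametric model:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, source_az, source_el, head_yaw, head_pitch, hrir_library):
    """Convolve a mono source with the HRIR pair nearest its head-relative direction.

    hrir_library: dict mapping (azimuth_deg, elevation_deg), in 1-degree steps,
    to a (left_ir, right_ir) pair -- an assumed layout for this example.
    """
    # Direction of the source as seen from the head-tracked listener.
    rel_az = int(round(source_az - head_yaw)) % 360
    rel_el = int(np.clip(round(source_el - head_pitch), -90, 90))
    left_ir, right_ir = hrir_library[(rel_az, rel_el)]
    return np.stack([
        fftconvolve(mono, left_ir, mode="same"),
        fftconvolve(mono, right_ir, mode="same"),
    ])
```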
Audeze (aud-a-zee) has established a reputation in the video game world for its planar magnetic headphones and is now moving into the world of XR, revealed company co-founder Sankar Thiagasamudram. The Mobius Creators Edition headphone, which incorporates a head-tracker, comes bundled with the latest 3D audio plug-ins from Waves—the B360 Ambisonics Encoder and the Nx Virtual Mix Room. Mobius Creators Edition enables stereo or multi-channel audio mixing with room emulation and head-tracking capabilities in any DAW.
Mobius operates standalone with hardware processing or with the plug-ins. As hardware, it appears as an 8-channel sound card to which tracks may be appropriately routed. Together, the Audeze and Waves products enable content creators to combine mono, stereo, 5.1, 7.1 and ambisonics elements, work on a project while monitoring in ambisonics with real-time head tracking, then render for Oculus Rift/Go, HTC Vive, YouTube 360, Facebook 360 and other platforms.
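Head-tracked ambisonic monitoring of this kind comes down to counter-rotating the mix against the listener's head movement before it is decoded to the headphones. A first-order sketch of the yaw portion of that rotation (the actual Audeze/Waves processing also handles pitch and roll, and higher-order material):

```python
import numpy as np

def rotate_bformat_yaw(bformat, head_yaw_deg):
    """Counter-rotate a first-order B-format mix (W, X, Y, Z channels) to
    compensate for a head turn of head_yaw_deg (positive = turn to the left)."""
    theta = np.radians(head_yaw_deg)
    w, x, y, z = bformat
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    return np.stack([w, x_rot, y_rot, z])  # W and Z are unchanged by a yaw rotation
```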
VRLA • www.virtualrealityla.com