
Apple announced a new version of ARKit at WWDC on Tuesday. The relevant part of the keynote begins at the 15-minute mark.

New features include support for Pixar’s Universal Scene Description (USD) format. Apple calls its variant USDZ; the ‘Z’ refers to its delivery as a .zip archive. The format is open, but the choice immediately led to complaints that Apple had ignored the work the rest of the industry had done on the new glTF format. However, USDZ and glTF are not intended for exactly the same purpose: USD is designed for production interchange workflows and scales better across different types of hardware, while glTF is optimized for delivering files over the internet in a compact way. USD support in game engines also isn’t completely out of the blue; Pixar and Epic announced a partnership to make USD a native format for Unreal Engine 4 over a year ago.
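To give a sense of how USDZ slots into an iOS app, here is a minimal sketch (my own illustration, not Apple’s sample code) that presents a bundled .usdz model with the system QuickLook preview on iOS 12+; the file name “robot.usdz” is a placeholder.

```swift
import UIKit
import QuickLook

// Minimal sketch: present a bundled USDZ model with the system
// QuickLook preview (iOS 12+). "robot.usdz" is a hypothetical asset.
class ModelViewController: UIViewController, QLPreviewControllerDataSource {

    func showModel() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
        return 1
    }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // NSURL conforms to QLPreviewItem, so a file URL is all we need.
        let url = Bundle.main.url(forResource: "robot", withExtension: "usdz")!
        return url as NSURL
    }
}
```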

ARKit 2 has an important new feature that brings it up to parity with Google’s ARCore: multiple users can now look at and interact with the same augmented reality objects. This is done via peer-to-peer sharing of world-map data describing the mapped 3D space. Apple’s approach works differently from ARCore’s “Cloud Anchors,” which provide the same shared AR feature. Rather than passing the data directly between devices, ARCore sends it to Google’s servers: “when an Anchor is hosted, the anchor’s pose and limited data about the user’s physical surroundings is uploaded to the ARCore Cloud Anchor Service.” This could lead to future privacy concerns, because anyone with access to the cloud anchors could build a rough model of the space. Of course, that privacy weakness is also a potential feature, since Google’s Visual Positioning Service, coming to a future version of Google Maps, uses shared anchors to locate your phone in 3D space (it’s not yet clear whether VPS uses cloud anchors or some other technique).
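As a rough sketch of the peer-to-peer approach (assuming an ARKit session and a MultipeerConnectivity session are already set up; peer discovery and error handling are omitted), one device serializes its world map and sends it, and the other relocalizes from it. This is an illustration of the ARKit 2 calls involved, not a complete app.

```swift
import ARKit
import MultipeerConnectivity

// Sender: capture the current world map and broadcast it to peers.
// `arSession` and `mcSession` are assumed to exist already.
func shareWorldMap(arSession: ARSession, mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}

// Receiver: rebuild the map and restart the session so both users
// share the same coordinate space and anchors.
func receiveWorldMap(data: Data, arSession: ARSession) {
    guard let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    arSession.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```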

ARCore still has a major advantage over ARKit in that it supports both Android and iOS. So anyone looking to write a cross-platform app will want to take a close look at ARCore, or explore other shared-location engines like the one provided by 6d.ai.

For iOS-specific projects, though, ARKit does have some amazing new features that ARCore lacks. ARKit can now build a light probe from the camera image so that virtual objects can reflect the environment around them. Most impressive is the new eye-movement and blink tracking on the front-facing camera, which has already been demonstrated driving a user interface.
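The sketch below (again illustrative, not Apple’s sample code) shows roughly what these two additions look like in an iOS 12 app: turning on automatic environment texturing so reflective objects pick up the surroundings, and reading per-eye transforms and blink blend shapes from a face anchor.

```swift
import ARKit

// 1. Environment texturing: ARKit builds light probes from the camera
//    feed so reflective virtual objects mirror the real surroundings.
func makeWorldTrackingConfig() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    config.environmentTexturing = .automatic
    return config
}

// 2. Gaze and blink data from the TrueDepth camera, exposed on ARFaceAnchor.
func readGaze(from faceAnchor: ARFaceAnchor) {
    let leftEye = faceAnchor.leftEyeTransform       // per-eye pose
    let rightEye = faceAnchor.rightEyeTransform
    let lookAt = faceAnchor.lookAtPoint             // point the eyes converge on
    let blink = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
    print(leftEye, rightEye, lookAt, blink)
}
```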

SIGGRAPH Papers Reveal Promising Advances in Motion Capture Techniques

This year’s SIGGRAPH papers are even more mind-blowing than usual. Of particular note for VR research are some new techniques for achieving motion capture from a monoscopic camera, in one case head-mounted. Two other papers could also have VR applications. A technique for extrapolating 3D depth at different interocular distances from a very narrow interocular distance could make 3D camera rigs more flexible and allow cell phones to take stereo photos that could later be viewed in VR. A technique for automatically extracting subjects from a background without a green screen could be useful for mixed reality videos.

Oculus Connect 5 Announced

OC5 will take place September 26-27.

It’s hard to believe that it’s been nearly 5 years since the first Oculus Connect, which means it’s been over 5 years since I started working on virtual reality full time. That’s a long time to paddle towards a future we all see coming but that sometimes seems like it will never arrive.

It’s been five years of pioneering a medium, doing what’s never been done, and building the dream of VR together. Now, we hope you’ll join us as we look forward to five more years of defying the limits of reality.

Big News for Location-Based VR Entertainment

Four big stories about LBE VR show the relative strength that this segment of the market is enjoying right now.

HTC has partnered with Dave & Buster’s to get the Vive into all but two of the company’s 114 US locations (the odd ones out lack the space, apparently). The first experience will be a Jurassic World tie-in created by The Virtual Reality Company, which boasts Steven Spielberg as an adviser. Dave & Buster’s previously experimented with adding VR via a VRCade free-roaming experience installed at one location in the Bay Area. VRCade, now VRStudios, is managing the incredibly ambitious logistics for the new roll-out.

Comcast- and Songcheng-backed Los Angeles studio SPACES revealed that its first LBE VR experiences will be Terminator: Genisys and Terminator: Salvation. SPACES will be opening four locations, including one in Los Angeles and another at Songcheng’s Hangzhou theme park. Subscribers to its newsletter will get the first try at the new attraction (Ian Hamilton already had a go). SPACES has paid particular attention to giving users media they can share: all players have their faces scanned so that their avatars look like them, and they take home a shareable video that mixes first- and third-person viewpoints of their experience. SPACES has apparently filed for patents on these processes, which could lead to conflict, since every LBE experience will eventually need to implement something similar; Neurogaming’s Polygon already has it.

The Void is expanding to nine more locations, bringing their total to fifteen. The Los Angeles area alone will now have four (Hollywood and Santa Monica are joining the existing locations in Anaheim and Glendale).

iFly is rolling out its VR skydiving experience to many more locations worldwide after a successful test run. Tested tried it out and reported back below (along with a review of Rec Room’s new Rec Royale mode).