Apple announced a new version of ARKit at WWDC on Tuesday. The relevant part of the keynote begins at the 15-minute mark.
New features include support for Pixar’s Universal Scene Description format. Apple calls its flavor USDZ; the ‘Z’ refers to its delivery as a .zip file. The format is open, but it immediately led to complaints that Apple had ignored the work the rest of the industry had done on the new glTF format. However, USDZ and glTF are not intended for exactly the same purpose: USDZ is designed to support production interchange workflows and has better support for scaling to different types of hardware, while glTF is optimized for delivering files over the internet in a compact way. USD support in game engines also isn’t completely out of the blue; Pixar and Epic announced a partnership to make USD a native format for Unreal Engine 4 over a year ago.
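Because the ‘Z’ only denotes zip packaging, a USDZ archive can be created and inspected with ordinary zip tooling. Here is a minimal Python sketch; the file names are placeholders, and note that the real USDZ spec additionally requires each entry’s data to be 64-byte aligned, which this sketch does not enforce:

```python
import zipfile

def pack_usdz(dest, files):
    """Pack a dict of {name: bytes} into a USDZ-style archive.

    USDZ is an ordinary zip container, but the spec requires every
    entry to be stored uncompressed (ZIP_STORED); real packaging
    tools also 64-byte-align each entry's data, omitted here.
    """
    with zipfile.ZipFile(dest, "w", compression=zipfile.ZIP_STORED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)

def list_usdz(src):
    """Return (name, compress_type) pairs for each archive entry."""
    with zipfile.ZipFile(src) as zf:
        return [(i.filename, i.compress_type) for i in zf.infolist()]
```

A quick way to sanity-check any .usdz you come across: every entry listed should report `ZIP_STORED` (compress type 0), or strict viewers may reject the file.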
ARKit 2 has an important new feature that brings it up to parity with Google’s ARCore: multiple users can now look at and interact with the same augmented reality objects. This is done via peer-to-peer sharing of a serialized world map (ARWorldMap) that describes the 3D space. Apple’s approach differs from Google’s ARCore “Cloud Anchors,” which provide the same shared-AR feature. Rather than passing the data directly between devices, “when an Anchor is hosted, the anchor’s pose and limited data about the user’s physical surroundings is uploaded to the ARCore Cloud Anchor Service.” This could lead to future privacy concerns: anyone with access to the cloud anchors could build a rough model of the space. Of course, that privacy weakness is also a potential feature, since Google’s Visual Positioning Service, coming to a future version of Google Maps, uses shared anchors to locate your phone in 3D space (it’s not yet clear whether VPS uses cloud anchors or some other technique).
ARCore still has a major advantage over ARKit: it supports both Android and iOS. Anyone looking to write a cross-platform app will want to take a close look at ARCore, or explore other shared-location engines like the one provided by 6d.ai.
For iOS-specific projects, though, ARKit does have some amazing new features that ARCore lacks. ARKit can now build a light probe from the camera image so that your objects can reflect the environment around them. Most impressive is the new eye-movement-and-blink-tracking capability on the front camera, which has already been demonstrated driving a user interface.
Testing out the new ARKit 2 leftEyeTransform and rightEyeTransform features. Haven't played with lookAtPoint yet but hoping it'll let me fine-tune controls.
Maybe a new @playnosezone feature coming with eye lasers to let you aim at targets in a 3rd dimension. pic.twitter.com/qlRSnNDxEV
— Brad Dwyer (@braddwyer) June 4, 2018
Control your iPhone with your eyes. Just look at a button to select it and blink to press. Powered by ARKit 2. #ARKit #ARKit2 #WWDC #iOS pic.twitter.com/ow8TwEkC8J
— Matt Moss (@thefuturematt) June 7, 2018
Yesterday #ARKit 2 was announced and the @unity3d plugin was released! In ARKit 2 you can use Environmental Probes which allows your augmented objects to realistically reflect light from their surroundings. #AR #UnityTips #MadeWithUnity pic.twitter.com/vlA727iFl9
— Dan Miller (@DanMillerDev) June 5, 2018
Siggraph Papers Reveal Promising Advances in Motion Capture Techniques
This year’s SIGGRAPH papers are even more mind-blowing than usual. Of particular note for VR research are some new techniques for achieving motion capture from a monocular camera, in one case head-mounted. Two other papers could also have VR applications: a technique for extrapolating 3D depth at different interocular distances from a very narrow baseline could make 3D camera rigs more flexible and allow cell phones to take stereo photos that could later be viewed in VR, and a technique for automatically extracting subjects from a background without a green screen could be useful for mixed reality videos.
Mo2Cap2: https://t.co/p50uPNiF7t Potentially useful concept in both AR & MR HMDs, but might run up against the counter-trend towards shallower optics, due to needing a protruding sensor for clear view of the body pic.twitter.com/11qgnc8u0P
— Ben Ferns (@ben_ferns) May 17, 2018
MonoPerfCap: Human Performance Capture from Monocular Video https://t.co/SgbUZePrT7 (note a photogrammetry scan of the person is provided to the system up front – still impressive though!) pic.twitter.com/ds1ReZf9Ql
— Ben Ferns (@ben_ferns) May 16, 2018
Oculus Connect 5 Announced
OC5 will take place September 26-27.
It’s hard to believe that it’s been nearly 5 years since the first Oculus Connect, which means it’s been over 5 years since I started working on virtual reality full time. That’s a long time to paddle towards a future we all see coming but sometimes seems like it will never arrive.
It’s been five years of pioneering a medium, doing what’s never been done, and building the dream of VR together. Now, we hope you’ll join us as we look forward to five more years of defying the limits of reality.
Big News for Location Based VR Entertainment
Four big stories about LBE VR show the relative strength that this segment of the market is enjoying right now.
HTC has partnered with Dave & Buster’s to get the Vive into all but two of the company’s 114 US locations (the odd ones out lack the space, apparently). The first experience will be a Jurassic World tie-in created by The Virtual Reality Company, which boasts Steven Spielberg as an adviser. Dave & Buster’s previously experimented with adding VR via a VRCade free-roaming experience installed at one location in the Bay Area. VRCade, now VRStudios, is managing the incredibly ambitious logistics of the new roll-out.
Comcast- and Songcheng-backed Los Angeles studio SPACES revealed that their first LBE VR experiences will be Terminator: Genisys and Terminator: Salvation. SPACES will open four locations, including one in Los Angeles and another at Songcheng’s Hangzhou theme park. Subscribers to their newsletter will get the first try at the new attraction (Ian Hamilton already had a go). SPACES has paid particular attention to giving users media they can share: all players have their faces scanned so that their avatars resemble them, and they take home a shareable video mixing first- and third-person viewpoints of their experience. (SPACES has apparently filed for patents on these processes, which could lead to conflict, since every LBE experience will eventually need to implement something similar; Neurogaming’s Polygon already has it.)
The Void is expanding to nine more locations, bringing their total to fifteen. The Los Angeles area alone will now have four (Hollywood and Santa Monica are joining the existing locations in Anaheim and Glendale).
iFly is rolling out its VR skydiving experience to many more locations worldwide after a successful test run. Tested tried it out and reported back below (along with a review of Rec Room’s new Rec Royale mode).