Thursday, February 12, 2015

Project Tango Game Developer Event, Day 1.


A few weeks ago I was lucky enough to score an invitation to a Project Tango Game Developer Event at the Googleplex in Mountain View. The event was held last week and was tons of fun, so I thought I'd share my experience.

The event kicked off with Johnny Lee giving the audience a background on Project Tango's key features: Motion Tracking, Area Learning, and Depth Sensing. 



Motion Tracking focuses on the device's position and orientation in space. Instead of just using a gyroscope for tilt, Project Tango harnesses a crazy amount of computer vision technology to deliver the next level of interactivity. Players will now be able to walk through a game scene not by hitting a button, but by physically walking.
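To make that concrete, here's a rough sketch of what driving a game camera from a full 6DoF pose looks like. This is plain Python + NumPy, not the actual Tango SDK; the `TrackedCamera` class and `on_pose` callback are names I made up for illustration:

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (x, y, z, w) to a 3x3 rotation matrix."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

class TrackedCamera:
    """Hypothetical game camera driven directly by a tracked device pose."""
    def __init__(self):
        self.position = np.zeros(3)
        self.rotation = np.eye(3)

    def on_pose(self, translation, orientation):
        # Each tracking update carries position AND orientation, so
        # walking forward in the room walks the camera forward in the scene.
        self.position = np.asarray(translation, dtype=float)
        self.rotation = quat_to_matrix(orientation)

cam = TrackedCamera()
# Player physically walks 1.5 m forward while turning slightly
# (orientation given as an approximately unit quaternion):
cam.on_pose([0.0, 0.0, 1.5], [0.0, 0.1, 0.0, 0.995])
print(cam.position)  # the camera now sits where the player stands
```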

As of now, the motion tracking on Project Tango doesn't seem as precise as Valve's solution or Oculus's Crescent Bay (sub-millimeter?). It makes up for the lack of precision by delivering a completely untethered experience.

Motion Tracking alone can drift over time, which the Project Tango team addresses with Area Learning. Area Learning takes note of features in an environment and uses them as reference points to correct for drift. While moving about the environment, Project Tango can record an Area Description File (ADF). Where Area Learning really shines is using these ADFs to recognize a space and "remember" if you've been there before. Say you launch a Project Tango app in your living room. Assuming you've played in your living room before, Tango will recognize familiar features, notice that you're in the living room, and use that data to calculate exactly where you are according to its ADF.
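Here's a toy sketch of that correction idea, with a plain dictionary standing in for an ADF. None of these names come from the Tango API; this is just the geometry of "I recognize that feature, so I know where I really am":

```python
import numpy as np

# Hypothetical landmark table standing in for an Area Description File:
# feature id -> position recorded on a previous visit to this room.
adf_landmarks = {"couch_corner": np.array([2.0, 0.0, 3.0]),
                 "doorframe":    np.array([0.0, 0.0, 5.0])}

def correct_drift(estimated_pos, feature_id, observed_offset):
    """If a remembered feature is recognized, recompute where we must be.

    observed_offset: where the feature appears relative to the device
    right now, as measured by the camera.
    """
    if feature_id not in adf_landmarks:
        return estimated_pos  # nothing recognized; keep dead reckoning
    # The feature's true position is known from the ADF, so our true
    # position is (feature position) - (offset from us to the feature).
    return adf_landmarks[feature_id] - observed_offset

# Motion tracking alone has drifted ~20 cm after a lap around the room:
drifted = np.array([2.18, 0.0, 2.95])
# The camera spots the couch corner 0.1 m right and 0.2 m ahead of us:
corrected = correct_drift(drifted, "couch_corner", np.array([0.1, 0.0, 0.2]))
print(corrected)  # -> [1.9, 0.0, 2.8], snapped back onto the remembered map
```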

Stinkdigital was on hand to talk about their experience creating Bullseye's Playground, a Project Tango app deployed in four Target stores. They combined ADF data from the store interiors with a virtual winter environment to create a mashup of physical and virtual worlds. Looking through their own eyes, players saw average-looking store aisles; through the lens of Project Tango, the Target store was transformed into a wintry wonderland waiting to be explored. It looked like an awesome experience.

The last of Project Tango's key features we discussed was Depth Sensing. A depth-sensing camera can be used to create a point cloud of the environment immediately in front of the Tango device. In the visual effects industry, point clouds are commonly used to create photo-realistic computer-generated models. While Project Tango can create models from point cloud data, it is very slow at doing so (lots of data for a tablet to handle!). By using point cloud data as a measuring device, physical barriers (think walls, pillars, chairs, people) can be accounted for in a virtual environment.
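Here's a rough sketch of that "measuring device" idea in plain Python/NumPy: instead of meshing the cloud (slow on a tablet), just measure distances against the raw points. The `blocked` helper is hypothetical, not a Tango call:

```python
import numpy as np

def blocked(point_cloud, target, radius=0.3):
    """Return True if any depth point sits within `radius` meters of `target`.

    point_cloud: (N, 3) array of points from the depth camera.
    A cheap stand-in for real collision geometry: if depth points crowd
    the target, a physical barrier is probably in the way.
    """
    dists = np.linalg.norm(point_cloud - target, axis=1)
    return bool(np.any(dists < radius))

# Fake cloud: a wall of points 2.0 m in front of the device.
wall = np.array([[x, y, 2.0] for x in np.linspace(-1, 1, 20)
                             for y in np.linspace(0, 2, 20)])

print(blocked(wall, np.array([0.0, 1.0, 2.0])))  # True: spawn point is inside the wall
print(blocked(wall, np.array([0.0, 1.0, 1.0])))  # False: a meter clear of it
```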

The rest of our day was spent creating demos to be presented at the end of the next day. My first goal was to use Project Tango as a Virtual Reality Headset.  I wanted to take advantage of untethered Motion Tracking to freely walk around a virtual space. Their prototype HMDs are hilariously large, but what can you expect from a tablet-based HMD? ;)



Next time I'll talk about Day 2 of the event, Faceted Flight on Project Tango, and the Demo winners.

Do you like content like this? Sound off in the comments!

2 comments:

  1. HD? Do you mean the head mount? They had a handful of 3D printed prototypes. I think some were from Durovis. (straps said durovis)
