What comes after touch computing? Full immersion.

In-the-air gesture technology

With Leap Motion's technology, a user can control a PC with gestures.

Credit: Leap Motion

Five years ago, when the iPhone was introduced, touch computing seemed like a bold break from the longstanding keyboard-and-mouse paradigm. Now, smartphones and tablets are so commonplace they don't even draw a glance, unless they happen to have a massive screen or some other odd feature. Even Microsoft has bowed to the inevitable, breaking more than 20 years of tradition to revamp Windows to work best on touch screens.

As computing evolves, IT departments must adapt. The rise of mobile, often-connected, touch-screen devices has forced enterprises to make data and application functionality available in new ways -- or, at the very least, to make existing desktop apps available through virtualization (a suboptimal approach that blogger and CITE speaker Brian Katz puts under the umbrella of "crapplications").

So what's next? Three recent stories suggest that the next wave in computing will be total immersion -- instead of having our computers as objects in front of us or in a pocket, they'll become virtual spaces that surround us.

First up, the long-awaited Leap Motion controller finally has a retail distribution channel -- it will be available in all U.S. Best Buy stores later this spring, the company announced this morning. Unveiled last May, the Leap is a $70 gadget that lets you control your computer with gestures made within roughly eight cubic feet of space in front of the screen. Think Microsoft's Kinect for PCs, or the virtual interface Tom Cruise manipulated in the movie Minority Report.
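Gesture control of this kind generally works by tracking hand positions frame by frame and classifying the motion between frames. Here is a minimal sketch of that idea in Python -- the function and threshold here are hypothetical illustrations, not part of Leap Motion's actual SDK:

```python
# Hypothetical frame-to-frame gesture classification, for illustration only.
# Not based on Leap Motion's real API.

def classify_swipe(prev_pos, curr_pos, threshold=50.0):
    """Classify horizontal hand motion between two tracking frames.

    Positions are (x, y, z) coordinates in millimeters within the
    sensor's tracking volume. Returns 'swipe_right', 'swipe_left',
    or None if the motion is smaller than the threshold.
    """
    dx = curr_pos[0] - prev_pos[0]
    if dx > threshold:
        return "swipe_right"
    if dx < -threshold:
        return "swipe_left"
    return None

# Example: the hand moved 80 mm to the right between two frames.
print(classify_swipe((100, 200, 0), (180, 200, 0)))  # swipe_right
```

A real controller runs logic like this continuously at high frame rates, smoothing the raw positions and mapping recognized gestures to OS-level actions such as scrolling or clicking.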

Second, Google Glass is one step closer to reality. Unveiled last year at the Google I/O developer conference, Glass is envisioned as a set of glasses that will display small images -- think emails or Google Street View maps -- in front of users' eyes. Users will control the glasses with small head movements.

Glass is still in the prototyping stage, and Google has not made any announcements about apps, pricing, or availability. But it did allow developers to sign up for a test program last May. Yesterday, Google made good on its promise and said it will hold two events for these developers in San Francisco and New York in coming weeks. There, it will unveil the new Mirror API for the devices.

Finally, The Verge reports that Microsoft's demonstration of an immersive IllumiRoom display at Samsung's CES keynote may have been paving the way for the next version of Kinect, which will project game images on the wall of your living room. The Verge thinks Microsoft may demonstrate the technology at the next E3 show in June.

User interface shifts tend to start with consumers -- early adopters willing to risk looking silly or buying technology that doesn't quite work right out of the gate. But at some point, gesturing information workers could become a common sight in offices, right alongside standing desks and walking meetings.
