A swipe of a finger, a twist of the wrist, the wave of an arm – all three are easier and more natural than tapping at a keyboard or steering a mouse to operate a computer.
It's what Tom Cruise's character did in the 2002 film "Minority Report." A highlight of the movie was watching Cruise use finger, hand, and arm swipes across multiple monitors in his arsenal of computerized data-collecting tools.
Those ideas have already arrived in the consumer electronics world, most prominently in Microsoft's Kinect controller for the Xbox 360. Its built-in sensor identifies users and then "reads" their arm and body motions, translating them into inputs for the game being played.
It's also the concept behind The Leap, an iPod-sized USB device that plugs into a computer and "senses" a user's hand and finger motions inside an eight-cubic-foot workspace around the unit. You can even use a pen or other writing or drawing instrument to transfer your motions to the screen. Essentially, the device enables the kind of gestural control Cruise demonstrated in "Minority Report."
The Kinect controller is already available for the Xbox 360, and The Leap is available in limited quantities for pre-order at $69, according to its manufacturer.
Could consumer tools for screen swiping replace or supplement keyboards and mice for enterprise workers in a wide swath of industries?
Actually, it's only a matter of time, especially given the growing use of natural user interfaces such as touchscreens on tablet computers, says Andres Carvallo, the chief strategy officer of Proximetry, a wireless network performance management vendor, who shared his views with CITEworld.
"Given the popularity and sales of tablets, this technology would convert PCs into tablets overnight, hence appealing to a wider audience," said Carvallo.
"Combine this with something like Siri voice activation and you're done," said Carvallo, who previously was CIO with Austin Energy. "With this technology you can point and bring things in and out on a screen. It's really a phenomenal step."
In the enterprise, its potential uses are everywhere, he said.
"Think about first responders or utility workers, anybody who works with maps. Instead of using a menu on screen, they can just use their hands. Look at it in a control room or in a data center. This technology is perfect for that. These people are sifting through a lot of data in their work."
Then there are uses in air traffic control, business, stock trading, film editing, design and advertising, graphics, and anywhere else people rely on multiple displays to do their jobs, said Carvallo.
It would also be helpful for people with physical disabilities or carpal tunnel injuries, he said, because it would allow them to control computers with motions.
"The biggest thing is that hand gestures are the way that we communicate as human beings," said Carvallo. "The mouse and the keyboard are not natural things. We have to learn them."
Carvallo said he's open to evaluating such devices, and he's even pre-ordered a Leap to try it out. "It's really just as revolutionary as the mouse when it first came out," he said. "If we eliminate the keyboard, we can 'type' in the air. Merge that with Siri and now you've got Star Trek."
Charles King, principal analyst with Pund-IT, said that similar gesture technologies are in the works for future generations of Intel processors through the company's Perceptual Computing initiative. Intel recently released its second software development kit, the Intel Perceptual Computing SDK 2013 Beta 2, to help developers create user interfaces based on gestures and speech recognition.
"What's going to happen with this next generation of cores is that they will have these technologies built into the silicon," said King. "It will parse the gestures and let people interact with their computers."
For enterprise use, the possibilities are big, said King.
Imagine, said King, if a user could wave a hand in front of a display to turn to the next page of a book or magazine, scroll through a document, or manipulate drawings in a 3D design suite. Imagine, too, surgeons using gestures to review MRI or X-ray images without having to touch anything.
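The page-turning scenario King describes comes down to mapping a recognized gesture to a user-interface action. As a minimal sketch of that idea – the gesture names, `Document` class, and `dispatch` function below are hypothetical illustrations, not part of any SDK mentioned here – a dispatcher might look like this:

```python
# Hypothetical sketch: routing recognized gestures to UI actions.
# Gesture labels and the Document class are invented for illustration.

class Document:
    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.page = 1

    def next_page(self):
        # Advance, but never past the last page.
        self.page = min(self.page + 1, self.num_pages)

    def prev_page(self):
        # Go back, but never before the first page.
        self.page = max(self.page - 1, 1)

def dispatch(gesture, doc):
    """Route a recognized gesture label to a document action."""
    actions = {
        "swipe_left": doc.next_page,   # wave left -> turn the page forward
        "swipe_right": doc.prev_page,  # wave right -> turn the page back
    }
    action = actions.get(gesture)
    if action:
        action()  # ignore gestures with no mapped action

doc = Document(num_pages=10)
for g in ["swipe_left", "swipe_left", "swipe_right"]:
    dispatch(g, doc)
print(doc.page)  # -> 2
```

However the gestures are recognized – in software today, or in silicon as King suggests – the application-side logic reduces to a lookup table like this one.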
"Look at how quickly touch-enablement has caught on with computers – really big time," he said. "At a recent Intel Developer's Forum, touch enablement was the number one feature that users wanted. And gesture control is just one step beyond that."
Things are still in the very early stages at Intel, said King. "In the case of the internal concepts for computing, they've taken this in an interesting direction. They said, 'we're going to do this.' Then they produced an SDK and told their partners to have at it and that they'd love to see what they can make of it."
In fact, the concept of gesture-based computing is probably too big for any one vendor to develop on its own, said King.
"The possibilities are almost limitless," said King. "At the point where there are no physical limitations, it requires the imagination to draw the boundaries."