Someday, we will all control computers with our brain waves. You will think of a command, such as turning on the headlights in your car or closing a dialog box, and a computer will react instantly.
Until then, there is an incredibly precise way to control computer interfaces.
Using nothing more than your own hands and fingers, gesture control systems seek to remove the impediments to more immediate, ubiquitous computing: the stylus, the keyboard, and the mouse.
You wave your hand, and the computer starts playing a DVD. Or, you pick up a virtual object, turn it around, and throw it across the room.
Of course, Microsoft Kinect was the first runaway success in this space. Recent games like Disneyland Adventures put the gamer into a virtual world.
By raising an arm or pointing at an object, you control the interface without a controller and immerse yourself into a realistic environment. Yet, Kinect is not the only gesture system around.
Several companies are developing gesture systems that take the basic kinetic model of hand gestures and finger movements to the next level.
1. Omek Interactive Beckon
One of the most interesting gesture control systems is the Omek Interactive Beckon. Essentially a development suite for making games and interactive content, the Beckon is unique in that it works with just about any off-the-shelf 3D camera and sensor system.
Many of the gesture control interfaces around today work only with a proprietary camera system. By supporting any 3D camera, Omek is more flexible in terms of the applications you can use at home or in an office setting.
The system works by making a skeleton representation of your entire body. These mapped points of movement are precise enough that the Omek system can read multiple gestures in a row and interpret them as a series of commands, and can automatically recognize a second person. (The Kinect can also recognize a new participant, but they usually have to stand still for a second.) The development kit for Omek Interactive is also unique in that the gesture commands do not require extensive programming.
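To make the idea of reading multiple gestures in a row concrete, here is a minimal sketch of sequence-to-command interpretation. The gesture names and the command table are illustrative assumptions, not Omek's actual API; the point is that a recognizer can map runs of gestures, not just single ones, onto commands.

```python
# Hypothetical sketch of multi-gesture interpretation.
# Gesture names and the COMMANDS table are illustrative, not Omek's API.

COMMANDS = {
    ("wave",): "wake",
    ("swipe_left",): "previous_item",
    ("swipe_right",): "next_item",
    ("raise_arm", "point"): "select_target",  # two gestures read as one command
}

def interpret(gesture_stream):
    """Greedily match the longest known gesture sequence at each position."""
    commands = []
    i = 0
    while i < len(gesture_stream):
        matched = False
        for length in (2, 1):  # try longer sequences first
            seq = tuple(gesture_stream[i:i + length])
            if len(seq) == length and seq in COMMANDS:
                commands.append(COMMANDS[seq])
                i += length
                matched = True
                break
        if not matched:
            i += 1  # unrecognized gesture: skip it
    return commands

print(interpret(["wave", "raise_arm", "point", "swipe_right"]))
# -> ['wake', 'select_target', 'next_item']
```

A real system would also weigh timing and confidence scores, but the greedy longest-match idea is the same.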
2. Primesense Reach UX
The company responsible for the technology behind the Microsoft Kinect has its own product, called the Reach UX, which consists of a camera and a gesture control interface. The camera and UI are intended for controlling entertainment media like TV shows and movies.
For example, a CES concept showed how you can swipe through movie titles and select the one you want by making a "pick" gesture. There are two cameras: one uses an infrared sensor to detect light, and the other uses a CMOS sensor to detect movement. The readings from the two sensors are combined into a single map of movements. The main advantage of this media gesture control, other than not having to use a remote or a controller, is that you can search through a large collection and "pick" content faster.
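The swipe-and-pick interaction from the CES demo boils down to a small state machine: swipes move a cursor through the collection, and a pick gesture commits the current selection. A minimal sketch (the class and gesture names are assumptions for illustration, not PrimeSense's API):

```python
# Illustrative sketch of swipe/pick navigation through a title carousel.
# Class and gesture names are hypothetical, not PrimeSense's actual API.

class Carousel:
    def __init__(self, titles):
        self.titles = titles
        self.index = 0  # currently highlighted title

    def handle(self, gesture):
        """Move the highlight on swipes; return the title on a pick."""
        if gesture == "swipe_left" and self.index > 0:
            self.index -= 1
        elif gesture == "swipe_right" and self.index < len(self.titles) - 1:
            self.index += 1
        elif gesture == "pick":
            return self.titles[self.index]
        return None

c = Carousel(["Title A", "Title B", "Title C"])
for g in ["swipe_right", "swipe_right", "swipe_left", "pick"]:
    selected = c.handle(g)
print(selected)  # -> Title B
```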
3. SoftKinetic DepthSense

If there is any company that is poised to challenge Microsoft for gesture control interfaces, it is SoftKinetic. The reason: its system is designed for developers to create their own gesture interfaces.
The DepthSense camera works by sending out an infrared beam of light and measuring the time it takes for the beam to return. That helps determine the shape and size of the person in front of the camera and their movements.
The IISU middleware component then maps those readings into the software development kit, which programmers can use for creating applications. The middleware scans an entire body, so the end-user application can consist of a full-body avatar.
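The time-of-flight principle behind the DepthSense camera reduces to simple geometry: distance is the speed of light multiplied by the round-trip time, divided by two. A back-of-the-envelope sketch (a real sensor measures phase shift rather than raw nanosecond timestamps, and the sample time is illustrative):

```python
# Time-of-flight depth, as used by cameras like the DepthSense:
# distance = (speed of light * round-trip time) / 2.
# The example round-trip time below is illustrative.

C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_seconds):
    """Halve the round trip, since the beam travels out and back."""
    return C * round_trip_seconds / 2.0

# A return after roughly 13.3 nanoseconds puts the subject about 2 m away.
print(round(distance_m(13.34e-9), 2))  # -> 2.0
```

Doing this per pixel yields the depth map from which the person's shape, size, and movements are determined.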
4. Oblong Industries G-Speak
No overview of gesture control systems is complete without touching on Oblong Industries, the company whose system most experts say started the industry. Originally developed as a prototype for the movie Minority Report, the Oblong system eventually became a real product that is now used by major corporations like Boeing to visualize their own product development.
Oblong uses sensors installed along the ceiling and walls of a control room. You wear a glove that reads precise movements. One example: you can "reach" inside a map and zoom in and out, swipe to the side, and pan up and down.
5. Opening up Kinect itself
Of course, one of the ways to go beyond Kinect is to open it up and see what people develop for it. Microsoft recently released the Kinect for Windows development platform, a way to use the Kinect controller to make custom games for PC, interactive sales demos, and entire applications for industries like health and transportation.
The system uses the same Kinect camera that works with the Xbox 360, but the dev kit now supports near-mode sensing from as close as 40 cm away, skeletal tracking that can sense the bodies of two people, and support for up to four connected cameras.
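One practical consequence of near mode is that an application must know which depth readings are trustworthy for the active mode. The sketch below uses assumed range limits (40 cm is from the text; the other bounds are illustrative assumptions, not values verified against the SDK) to show the kind of validity check a Kinect for Windows app might perform:

```python
# Hedged sketch: checking whether a depth reading falls inside the
# active sensing mode's usable range. 0.4 m (40 cm) comes from the
# article; the remaining bounds are assumptions for illustration.

RANGES_M = {
    "near": (0.4, 3.0),     # assumed near-mode limits
    "default": (0.8, 4.0),  # assumed default-mode limits
}

def depth_is_usable(depth_m, mode="default"):
    """Return True if the reading lies within the mode's assumed range."""
    lo, hi = RANGES_M[mode]
    return lo <= depth_m <= hi

print(depth_is_usable(0.5, "near"))     # True: close subjects are fine
print(depth_is_usable(0.5, "default"))  # False: too close for default mode
```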