It’s true, people are getting very excited about the idea of Kinect coming to desktops and laptops (it’s available now): Imagine being able to control Windows 7 or 8 with a wave of your arm or finger. However, that seems like child’s play compared to the new level of interaction the Microsoft Kinect Fusion research project could bring to the wildly popular voice- and gesture-controlled gaming interface.
It’s called Fusion because the technology fuses 3D data on top of the room-mapping and gesture-aware sensors in the $150 Kinect device, and the result is kind of remarkable. Kinect Fusion not only sees you in the room, it sees your volume, and the volume of every other object in the room, to build a complete three-dimensional picture.
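The core idea behind this kind of volumetric fusion is that each depth frame from the sensor gets folded into a voxel grid holding a truncated signed distance function (TSDF), so that noisy readings average out into a smooth surface over many frames. Here’s a minimal sketch of that update rule in one dimension for clarity; the function names, grid setup, and truncation distance are our own illustrative assumptions, not Microsoft’s actual implementation.

```python
# Illustrative sketch of TSDF-style depth fusion (the principle behind
# Kinect Fusion's volumetric reconstruction), reduced to one dimension.
# All names and parameters here are assumptions for demonstration.

TRUNC = 0.1  # truncation distance in metres (assumed value)

def make_volume(n):
    """A voxel grid along one axis: [tsdf_value, weight] per voxel."""
    return [[1.0, 0.0] for _ in range(n)]

def integrate(volume, voxel_size, surface_depth):
    """Fold one depth measurement into the grid via a weighted average."""
    for i, cell in enumerate(volume):
        voxel_depth = i * voxel_size
        # Signed distance from this voxel to the observed surface,
        # truncated to [-TRUNC, TRUNC] and normalized to [-1, 1].
        sdf = max(-TRUNC, min(TRUNC, surface_depth - voxel_depth)) / TRUNC
        tsdf, w = cell
        # Weighted running average: repeated frames smooth sensor noise.
        cell[0] = (tsdf * w + sdf) / (w + 1.0)
        cell[1] = w + 1.0

vol = make_volume(10)
for frame_depth in (0.52, 0.48, 0.50):  # three noisy frames of one surface
    integrate(vol, voxel_size=0.1, surface_depth=frame_depth)
# The zero crossing of the TSDF marks the reconstructed surface (~0.5 m):
# voxels in front of it average toward +1, voxels behind it toward -1.
```

The payoff of this representation is exactly what the demo shows: because every frame refines the same grid, the model sharpens within seconds, and the implied surface can act as solid geometry for virtual objects to collide with.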
A picture is cool, but it’s what applications working with Kinect Fusion can do with the information that’s a mind-blower. Microsoft gave us a demo where the Kinect device was connected to a PC running special software that turns the Kinect camera into a 3D scanner. The resulting live, 3D video image can then interact with virtual objects. In the example we saw, thousands of yellow balls rolled over the surface of the video image (which actually has the 3D model beneath it). It did the same with a person: the model was built within seconds, and soon the yellow balls were affixing themselves to the Microsoftie’s body—at least on screen.
See the full demo, which we got a look at during CES 2012, in the video. What uses can you envision for Microsoft’s research project?