A quick news post: Microsoft Research has released a non-commercial beta of its Windows SDK for the Kinect, the motion-sensing controller for the Xbox.
The SDK is available here: http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/
There is perhaps a bigger discussion to be had about the role of next-generation interfaces: how form factor, input control, and haptic feedback have moved rapidly from nice idea to commercial mainstream, and how this will affect the role and function of technology in learning and teaching (as well as in life more generally). But this release is a significant step forward in the development of gestural interfaces, one of the key technology developments in MIT's Technology Review this year.
There have already been a number of interesting projects that have hacked the Kinect to run on Windows, and I'm looking forward to seeing what develops with a more robust, documented, and supported [perhaps?] SDK.
It's already been used to control games, AR drones at Dev8D, and interactive video conferencing, and to offer some basic forms of screen interaction (mouse-like and touch-screen-like).
It is also worth noting that one of the responses to this year's dev challenge at Open Repositories 11 was a repository interface controlled by Kinect. [I'll post a link or screencast if I find one.]