How to Hack the Microsoft Kinect Motion Sensor for Xbox 360


Microsoft launched the Kinect motion sensing controller for the Xbox 360 last year, and it entered the Guinness Book of World Records as the fastest-selling consumer device of all time, selling over 10 million units in just two months. Kinect isn’t just a console accessory; it has the potential to unlock fun new ways of human–machine interaction. While Nintendo’s Wii Remote and Sony’s PlayStation Move require you to hold a wand or pointing device, Microsoft’s Kinect requires no such thing, a point Microsoft strongly emphasizes in its “you are the controller” adverts. So how has this simple-looking device shaken the world of gaming, and the personal technology sector to some extent, to become a revolution of sorts? Well, much credit goes to Microsoft and plain old human ingenuity.


Today, we will describe a procedure for hacking the Kinect from the Xbox 360 to suit your own needs, as actually done by Harishankar Narayanan, a Chennai-based game developer who likes tinkering with gadgets. He wasted no time in recognizing the potential of the device and, after hacking it, developed a unique, personalized solution for his needs. Here’s his story, in brief and step by step –


Kinect sensor for Xbox 360


What is Kinect?

If you have used earlier versions of the Xbox 360, you are familiar with conventional gamepads and joysticks. Kinect replaces these with a camera that is much more capable than a normal webcam. A normal webcam scans an array of colors in front of the lens, with each pixel carrying color information about the object. Kinect, on the other hand, is designed to report the distance between the camera and each pixel’s target in addition to those color values. With this information, a computer can estimate how far each object is from the camera. Kinect also has four microphones that can detect the direction a sound comes from.
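To make the depth idea concrete: the sensor’s raw depth readings are 11-bit values that can be converted to approximate real-world distances. The sketch below uses a linear-disparity calibration popularized by the OpenKinect community; the constants are community-derived approximations, not an official Microsoft specification.

```python
import numpy as np

def raw_depth_to_metres(raw):
    """Convert Kinect's 11-bit raw depth values to approximate metres.

    Uses a community-derived calibration from the OpenKinect project;
    the constants are approximations, not an official specification.
    """
    raw = np.asarray(raw, dtype=np.float64)
    metres = 1.0 / (raw * -0.0030711016 + 3.3309495161)
    # Raw value 2047 means "no reading" (too close, too far, or IR shadow).
    metres[raw >= 2047] = np.nan
    return metres

# A hypothetical 2x2 patch of raw depth readings:
patch = [[500, 700], [900, 2047]]
print(raw_depth_to_metres(patch))
```

A raw value of 500 comes out at roughly half a metre, while 2047 is flagged as “no reading” – exactly the kind of per-pixel distance information a plain webcam cannot give you.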


Is Kinect Hack Friendly?

This surprises most readers: Microsoft intentionally designed Kinect to be open. Yes, believe it or not, it is not hard to read Kinect’s output from a computer or embedded device – and there are already plenty of tools out there for users. The reason for this openness is that most of the innovation and hard work sits in Kinect’s software – the system that detects skeleton positions from the depth map – and its training database; as things stand, it is very hard to reproduce a comparably reliable skeleton-detection system. The hardware itself works much like any USB camera, with some additional data. Moreover, Microsoft didn’t deploy any encryption or security layer to tie the Kinect exclusively to the Xbox. The software giant has decided to keep the device open and easily hackable – a great move for building custom gesture-enabled solutions – and Microsoft has now deemed hacking the Kinect completely legal. So honestly, what’s stopping you from tinkering with it?


Hack Procedure of Kinect Console

You need a Kinect – not the whole Xbox 360 – and any computer that can run a modern operating system (Windows or Mac OS X). In this case, we used a Mac Mini just to have a portable and compact solution. If you want to use Windows, download the Kinect SDK for Windows, which is now available free for non-commercial use. Here’s how Harishankar controlled a Tata Sky set-top box through a Kinect motion sensor attached to a Mac Mini. Kinect hacking doesn’t require a high-performance system; you can do it even on a Pentium 4 PC. It is important to choose the right operating system and programming language before you begin. The decision mostly depends on whether open source modules are already available for the system you are planning to build. Harishankar chose the hardest route, because on Mac OS you need to port code from its Windows Kinect SDK counterparts.


1) The first step in hacking the Kinect is reading its depth map and VGA video streams. The hacker community has made this easy: there are drivers and open source code available to read Kinect’s data into your program. You can also play around with changing the LED indicator colors and the tilt motor that rotates the Kinect on its axis.
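Once a depth frame is in your program, you can start asking simple questions of it. With the OpenKinect `freenect` Python bindings installed, a frame can be grabbed with `freenect.sync_get_depth()`; since that call needs the actual hardware and drivers (an assumption about your setup), the sketch below works on a synthetic 480×640 frame instead and finds the closest object in view.

```python
import numpy as np

# With the OpenKinect "freenect" Python bindings, a real frame comes from:
#   depth, _ = freenect.sync_get_depth()
# That call needs the hardware attached, so we use a synthetic frame here.

def nearest_pixel(depth_m):
    """Return (row, col, metres) of the closest valid depth reading."""
    depth_m = np.asarray(depth_m, dtype=np.float64)
    masked = np.where(np.isnan(depth_m), np.inf, depth_m)  # ignore holes
    row, col = np.unravel_index(np.argmin(masked), masked.shape)
    return int(row), int(col), float(masked[row, col])

frame = np.full((480, 640), 2.5)   # background wall at 2.5 m
frame[100, 200] = 0.8              # a hand reaching toward the sensor
print(nearest_pixel(frame))        # -> (100, 200, 0.8)
```

Finding the nearest blob like this is a common cheap trick for tracking a single outstretched hand before you have full skeleton detection running.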


2) Next, we need to do skeleton detection. This task can be a little tricky to set up, depending on the computing environment you’re trying it on. Writing the algorithms on your own would take ages unless the work is crowdsourced – ask us, we’ve been trying for the past six months. You can check PrimeSense’s SDKs, for instance. Once these techniques are in place, you should get the details of a person’s skeleton, i.e. the 3D position of each joint of the user’s body.
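Whatever SDK produces the skeleton, what you end up consuming is a set of named joints with 3D coordinates. The layout below – a dict of joint names to (x, y, z) tuples – is our own assumption for illustration; OpenNI/NITE and Microsoft’s SDK each define their own joint structures.

```python
# Minimal sketch of consuming skeleton data. The joint names and the
# dict-of-(x, y, z)-tuples layout are illustrative assumptions, not the
# actual data structures of any particular SDK.

def hand_raised(skeleton, hand="right_hand", head="head"):
    """True if the hand joint is above the head joint (y axis points up)."""
    return skeleton[hand][1] > skeleton[head][1]

pose = {
    "head":       (0.0, 1.60, 2.0),   # metres in camera space
    "right_hand": (0.3, 1.85, 2.0),
    "left_hand":  (-0.3, 1.10, 2.0),
}
print(hand_raised(pose))                       # right hand is above the head
print(hand_raised(pose, hand="left_hand"))     # left hand is not
```

Even a one-line comparison like this is already a usable trigger – “raise your hand to wake the system up”, for example.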


3) If you chose the recently released Windows Kinect SDK, you can jump straight to the next step: the SDK reads the input and detects the skeleton directly. The Microsoft Kinect SDK is quite sophisticated for hacking purposes, so do take a look around.


4) Now that you have the user’s positions in 3D space, i.e. x, y and z coordinates, you have all the information about the user’s right hand or head, for example. This is when you can start playing around with the data from Kinect. You can map the information onto a 3D puppet so it imitates the user’s movements. If you are building an interactive system, the best part is detecting the user’s gestures. Static postures aren’t enough for interaction; it usually has to be gestures like waving your hand or stretching, or complicated gestures like drawing shapes in the air. Though there are some open source tools to detect gestures from skeleton data, it was tough to find one that worked for our setup, so we decided to write one ourselves. We programmed our system to detect the user waving a hand to change TV channels, plus more complicated gestures like drawing a TV channel’s logo to jump to it directly – a U for UTV, a circle for SUN TV, and so on.
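The hand-wave gesture mentioned above can be sketched as a toy detector over the tracked hand’s x-coordinates: a wave is the x-position reversing direction several times, each swing covering enough distance. The thresholds here are illustrative guesses, not values from Harishankar’s actual implementation.

```python
# Toy wave detector over a sequence of tracked hand x-positions (metres).
# Thresholds are illustrative, not taken from any real implementation.

def is_wave(xs, min_swing=0.15, min_reversals=3):
    """Detect a wave: direction reverses >= min_reversals times,
    each swing covering at least min_swing metres."""
    reversals, direction, extreme = 0, 0, xs[0]
    for x in xs[1:]:
        delta = x - extreme
        if direction == 0:
            if abs(delta) >= min_swing:        # first decisive swing
                direction = 1 if delta > 0 else -1
                extreme = x
        elif delta * direction > 0:
            extreme = x                        # still moving the same way
        elif abs(delta) >= min_swing:
            reversals += 1                     # swing reversed far enough
            direction = -direction
            extreme = x
    return reversals >= min_reversals

wave  = [0.0, 0.2, 0.0, 0.2, 0.0, 0.2]        # hand swinging side to side
still = [0.0, 0.01, 0.02, 0.01, 0.0, 0.01]    # sensor jitter only
print(is_wave(wave), is_wave(still))          # -> True False
```

The minimum-swing threshold is what separates a deliberate wave from sensor jitter; tune it to your room and your users.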


5) Once you detect gestures successfully, it’s time to decide which devices you want your interactive system to control. We did this with a 32-inch HDTV, a Tata Sky set-top box and an air conditioner. You can combine the interactive system with any day-to-day equipment to manipulate it in ways you’ve never done before.


6) Since most of Harishankar’s devices have IR (infrared) remote controls, he decided to pass commands to them using an IR emitter. The best device on the market was the USB-UIRT. It connects to your computer via USB, and the manufacturer provides drivers to make it work with your program. In our case, there was no driver available for Mac computers, so we had to do our own hack again. The device can both transmit and receive IR signals.


7) Once you’ve decided which remote buttons to replace with gestures, you need to capture the actual data each button sends as an IR signal. For this, we wrote a simple program that displays the code being transmitted when each button is pressed on the remote, and stored the codes within our program. Similarly, we captured codes from the TV remote, the Tata Sky remote and the air conditioner remote. Almost every device has its own unique set of codes, so you don’t have to worry about multiple devices responding to the same signal.
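The mapping described in steps 6 and 7 boils down to two lookup tables: gesture → button, and button → captured IR code. The hex codes below are placeholders, not real captured Tata Sky or TV codes; you would fill them in with whatever your capture program prints.

```python
# Gesture-to-IR-code tables. All codes are placeholders, not real
# captured values; replace them with the output of your capture step.

IR_CODES = {
    ("tv", "power"):         "0x20DF10EF",
    ("settop", "chan_up"):   "0x61A0F00F",
    ("settop", "chan_down"): "0x61A0708F",
    ("ac", "power"):         "0x88C0051F",
}

GESTURE_TO_BUTTON = {
    "wave_right":  ("settop", "chan_up"),
    "wave_left":   ("settop", "chan_down"),
    "draw_circle": ("tv", "power"),
}

def code_for_gesture(gesture):
    """Map a recognized gesture name to the IR code to transmit."""
    button = GESTURE_TO_BUTTON.get(gesture)
    return IR_CODES.get(button) if button else None

print(code_for_gesture("wave_right"))   # placeholder set-top-box code
print(code_for_gesture("shrug"))        # unknown gesture -> None
```

Keeping the two tables separate means you can remap a gesture to a different button without touching the captured codes.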


8) As a quick test, send the same signal back out through the UIRT device to see whether the TV or set-top box responds. The UIRT device can stand in for just about any IR remote control, and it should work pretty well – it did in our case. Once everything is in place, integrate your gesture recognition code with your UIRT driver, so that your program sends a particular signal whenever it spots the corresponding gesture through the Kinect. We hope this works for you too.
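The final integration in step 8 is a simple loop gluing the pieces together: recognize a gesture, look up its code, transmit. In the sketch below, `recognize_gesture` and `send_ir` are hypothetical stand-ins for your own gesture code and the USB-UIRT driver call, so the dry run uses fake components.

```python
# Glue loop for the final integration step. `recognize_gesture` and
# `send_ir` are hypothetical stand-ins for your gesture code and the
# USB-UIRT driver call; this dry run swaps in fake components.

def run_controller(frames, recognize_gesture, lookup_code, send_ir):
    """Feed frames through the pipeline; return the codes transmitted."""
    sent = []
    for frame in frames:
        gesture = recognize_gesture(frame)
        if gesture is None:
            continue                   # nothing recognized this frame
        code = lookup_code(gesture)
        if code is not None:
            send_ir(code)              # real version drives the UIRT
            sent.append(code)
    return sent

# Dry run with fake components:
frames = ["idle", "wave", "idle", "wave"]
codes = run_controller(
    frames,
    recognize_gesture=lambda f: "wave_right" if f == "wave" else None,
    lookup_code={"wave_right": "0x61A0F00F"}.get,   # placeholder code
    send_ir=lambda code: None,
)
print(codes)
```

Structuring the loop around injected functions like this lets you test the whole pipeline on recorded frames before plugging in the live Kinect and the IR emitter.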


Other Kinect Hacks

1) Replacing your computer’s mouse and keyboard. This usually works best for a computer connected to a TV in your living room.


2) 3D scanning of human faces and other objects, and 3D holographic visuals using head tracking.


3) Integrating other electronic equipment to replace remote controls and switches.


4) Allowing your robot to see in 3D space, or making it obey your gesture commands – this one is for advanced robotics fans.


5) Virtual mirrors: you can overlay a funny 3D cartoon character on the user’s skeleton in the video so it imitates the user’s movements. Just imagine Spider-Man copying your moves live as you watch the mirror. Some webcam apps already do this to a degree.


6) Interactive projectors: imagine how convenient going through slides would be on a Kinect-controlled smart projector.