Hi,
I’m trying to perform calibration between the depth and RGB cameras on the Kinect using the CLNUI API. I’d like to access the raw near-IR image that the Kinect sees (the reflection of all the projected IR points off the scene) and *in which I can identify a chessboard pattern*. I need this to develop a correspondence between the RGB and depth cameras, which aren’t currently calibrated together.
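For what it’s worth, the correspondence step itself isn’t CLNUI-specific: once you have the raw IR frame, you detect the chessboard corners in both the IR and RGB images (e.g. with OpenCV’s `findChessboardCorners`) and fit a mapping between the matched corner sets. Below is a hypothetical, numpy-only sketch that fits a planar homography to matched corner points via the direct linear transform (DLT) — this only maps points on the chessboard plane, so a proper calibration would use something like OpenCV’s `stereoCalibrate` instead, but it illustrates the idea:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Fit a 3x3 homography H mapping src -> dst via DLT.

    src_pts, dst_pts: (N, 2) arrays of matched corner locations,
    e.g. chessboard corners found in the IR and RGB images, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, pt):
    """Apply homography H to a 2-D point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

With this you could take a chessboard corner seen in the IR image and predict where it lands in the RGB image (or vice versa), which is exactly the correspondence I’m after.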
The feeds I have managed to pull from the Kinect so far are all processed depth maps; they aren’t the raw IR image, but interpretations of how the IR light hits the depth sensor.
I know that the libfreenect API does provide access to the raw IR stream, but I need to use the CLNUI API for consistency. Any suggestions would be helpful.
Cheers!