From depth RGB to depth RAW

Posted: 25 November 2010 08:38 AM
New Member, Total Posts: 6, Joined 2010-11-25
Hi,
Regarding the CL NUI Platform Kinect driver: is there anyone (I guess AlexP would be the one) who could explain to me how the conversion from depth as RGB (the buffer returned by GetNUICameraDepthFrameRGB32) to depth as RAW (the buffer returned by GetNUICameraDepthFrameRAW) is done?
Clearly, it must be some bit operation, as the RGB buffer is a PWORD while the RAW one is a PUSHORT.
I would really like to know how the RGB bits are written into the RAW buffer. I am trying to do some manipulation on the RGB depth and then reconstruct the RAW buffer.
Thank you very much!
-Fabrizio

Posted: 26 November 2010 03:32 PM [ # 1 ]
Administrator, Total Posts: 585, Joined 2009-09-17
Fabrizio,
It is not that the depth RGB data is converted into RAW data; it is the other way around. The camera gives us the data in RAW format. As explained in the documentation, the data is 11 bits wide and is stored in the 16-bit format passed to you through the CLNUI API. The color RGB format should only be used for visualization, not for actual depth computation, so using the RAW data is the right way to get the depth data.
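In code, the layout described above (11 useful bits stored in each 16-bit word) suggests a simple mask; a minimal sketch, assuming the bits sit in the low end of each PUSHORT sample (raw_depth_value is a hypothetical helper, not part of the CLNUI API):

```c
#include <stdint.h>

/* Hypothetical helper, not a CLNUI call: keep only the 11 bits the
   sensor fills in each 16-bit RAW sample (range 0-2047), assuming they
   occupy the low bits of the word as described above. */
static inline uint16_t raw_depth_value(uint16_t sample)
{
    return sample & 0x07FF; /* bits 0-10 */
}
```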
AlexP

Posted: 26 November 2010 04:03 PM [ # 2 ]
New Member, Total Posts: 6, Joined 2010-11-25
Hi AlexP,
thanks for your answer.
I know that Z should only be used in its raw, 1-channel format, but the fact is that I need to stream this Z buffer over a network and see what happens after the quantization introduced by the transcoding.
Now, since I'd like to use VP8, I can't encode an 11-bit, 1-channel image directly, simply because VP8 has no clue what to do with it. Hence the need for the RGB version of the depth.
What I'd like to do is:
1) get the 11-bit RAW depth
2) also get the 16-bit RGB depth
3) encode, send, receive and decode the RGB depth
4) transform the 16-bit RGB back into an 11-bit RAW buffer
5) compare the two "raw" depth images on each side of the stream
Do you have any suggestion on how to do point 4?
If you could tell me how you obtain the 16-bit buffer from the 11-bit one, that would probably be enough, because then I could apply the inverse transformation, I guess.
Thanks a lot,
-Fabrizio
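For what it's worth, one invertible packing for steps 2 and 4 of the plan above could look like the sketch below. This is a hypothetical scheme, explicitly not what GetNUICameraDepthFrameRGB32 produces; note that a lossy codec like VP8 will still perturb the channel values, which is exactly the degradation being measured here:

```c
#include <stdint.h>

/* Sketch of a hypothetical, invertible packing: split an 11-bit depth
   value across two 8-bit channels so a video codec can carry it, and
   recombine on the receiving side. NOT the CLNUI RGB visualization. */
static void pack_depth(uint16_t d, uint8_t *lo, uint8_t *hi)
{
    *lo = (uint8_t)(d & 0xFF);        /* low 8 bits  */
    *hi = (uint8_t)((d >> 8) & 0x07); /* high 3 bits */
}

static uint16_t unpack_depth(uint8_t lo, uint8_t hi)
{
    return (uint16_t)lo | ((uint16_t)(hi & 0x07) << 8);
}
```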

Posted: 26 November 2010 04:32 PM [ # 3 ]
Administrator, Total Posts: 585, Joined 2009-09-17
Since VP8 is a lossy compression, you would lose accuracy in the depth data. If this is not an issue for you, you can do this.
To answer your point #4: there is no conversion from 11-bit to 16-bit. The data is simply stored in 16 bits, of which only 11 are used (range [0-2047]).
AlexP

Posted: 26 November 2010 05:32 PM [ # 4 ]
New Member, Total Posts: 4, Joined 2010-11-26
Hi,
I'm just wondering if there is not an error in the RGB depth map: the colors are
white, blue, cyan, yellow, green, yellow, orange, red.
The two yellow sections are quite surprising and make no sense to me.
However, the raw function is sufficient and allows reconstruction.
So thank you very much for this driver!
Lucas

Posted: 27 November 2010 01:38 AM [ # 5 ]
New Member, Total Posts: 6, Joined 2010-11-25
AlexP - 26 November 2010 04:32 PM: Since VP8 is a lossy compression, you would lose accuracy in the depth data. If this is not an issue for you, you can do this.
Yeah, in fact I want to see how much accuracy you lose and whether the transmitted result is useful or complete rubbish.
AlexP - 26 November 2010 04:32 PM: To answer your point #4: there is no conversion from 11-bit to 16-bit. The data is simply stored in 16 bits, of which only 11 are used (range [0-2047]).
Right, so to get back the raw Z I can simply take the first 11 bits of the RGB image… correct?
Thanks a lot,
-Fabrizio

Posted: 02 December 2010 12:55 PM [ # 6 ]
Administrator, Total Posts: 585, Joined 2009-09-17
@tommygun,
That is not correct. The RGB depth output transforms the 11-bit depth space into discrete segments of color shades. The reverse transform you are talking about means mapping the 24-bit color space back into the 11-bit depth space. You cannot just truncate the 24-bit color and expect to get the correct depth information.
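AlexP's point can be illustrated with a toy palette: even with an exhaustive lookup table, a many-to-one color mapping cannot recover the low bits. The depth_to_rgb below is a stand-in gray ramp for illustration only, not the actual CLNUI palette:

```c
#include <stdint.h>

#define DEPTH_LEVELS 2048

/* Stand-in palette (NOT the CLNUI one): gray ramp that collapses each
   group of 8 depth values onto one 24-bit color. */
static uint32_t depth_to_rgb(uint16_t d)
{
    uint8_t v = (uint8_t)(d >> 3); /* 0-2047 -> 0-255 */
    return ((uint32_t)v << 16) | ((uint32_t)v << 8) | v;
}

/* Invert by exhaustive search; returns the FIRST depth that produces
   the color, or 0xFFFF if the color is not in the palette. */
static uint16_t rgb_to_depth(uint32_t rgb)
{
    for (uint16_t d = 0; d < DEPTH_LEVELS; d++)
        if (depth_to_rgb(d) == rgb)
            return d;
    return 0xFFFF;
}
```

Because eight depths share each gray level, rgb_to_depth(depth_to_rgb(803)) comes back as 800, not 803: the low three bits are gone, which is the information loss AlexP describes.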
AlexP

Posted: 04 December 2010 05:36 PM [ # 7 ]
New Member, Total Posts: 1, Joined 2010-12-04
Hello AlexP, thanks for your Kinect drivers and library.
You mention some Kinect documentation in this thread; may I know where to find it? I'm looking for formal documentation on, for instance, the differences between GetUICameraColorFrameRAW, GetNUICameraColorFrameRGB24 and GetNUICameraColorFrameRGB32: how many bits, the size of the returned buffers, and so on. Thanks!

Posted: 05 December 2010 01:45 PM [ # 8 ]
Administrator, Total Posts: 585, Joined 2009-09-17
@living_dreams,
The bit depths are pretty self-explanatory, but here they are:
GetNUICameraColorFrameRGB32 - returns 32 bits per pixel; the R, G, B channels are 8 bits wide
GetNUICameraColorFrameRGB24 - returns 24 bits per pixel; the R, G, B channels are 8 bits wide
GetUICameraColorFrameRAW - returns 8 bits per pixel; the pixels are arranged in a standard Bayer pattern
Hope this helps,
AlexP
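The per-frame buffer sizes follow directly from those bits-per-pixel figures; a quick sanity check for a 640x480 frame (frame_bytes is plain arithmetic, not a CLNUI call):

```c
#include <stddef.h>

/* Bytes per 640x480 frame implied by the bits-per-pixel listed above.
   frame_bytes is a convenience helper, not part of the CLNUI API. */
enum { KINECT_W = 640, KINECT_H = 480 };

static size_t frame_bytes(int bits_per_pixel)
{
    return (size_t)KINECT_W * KINECT_H * (size_t)bits_per_pixel / 8;
}
```

So RGB32 needs 1,228,800 bytes, RGB24 needs 921,600, and the 8-bit Bayer RAW needs 307,200 per frame.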

Posted: 15 February 2011 12:06 AM [ # 9 ]
New Member, Total Posts: 3, Joined 2011-02-13
I'm extremely sorry for the long post.
I'm using C#. I just found this code on MSDN to convert the images from the driver to a System.Drawing.Bitmap:

int stride = 640 * 3;
Bitmap newBitmap = new Bitmap(640, 480, stride, System.Drawing.Imaging.PixelFormat.Format24bppRgb, colorImage.ImageData);
pictureBox1.Image = newBitmap;

int stride2 = 640 * 2;
Bitmap newBitmap2 = new Bitmap(640, 480, stride2, System.Drawing.Imaging.PixelFormat.Format16bppRgb565, depthImage.ImageData);
pictureBox2.Image = newBitmap2;

The colored image is perfect, and so are the corrected 8 and 32 depth images, but I was wondering if I can get the other depth images, so I tried:
GetCameraDepthFrameCorrected12
GetCameraDepthFrameRAW
The result is attached. If I understood correctly, RAW depth is represented with 2 bytes; I need the 8 bits from the first byte and 3 bits from the next byte, which gives me a value 0-2047. Just to make sure, I divided by 9 (to get a value 0-255) and put it in a 24-bit RGB image. I was expecting something more like the corrected 8 result. Is there anything I'm missing, or is there another way to get the actual values?
http://img829.imageshack.us/i/depthcorrected12.png/
http://img408.imageshack.us/i/depthraw.png/
http://img573.imageshack.us/i/depthrawvaluemod.png/
http://img577.imageshack.us/i/depthcorrected8.png/
thank you =)
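The byte layout described above can be checked in isolation; a sketch assuming little-endian samples (these are hypothetical helpers, not driver calls). Two details worth noting: the high byte's weight is 256 (a left shift by 8), not 255, and dividing by 8 rather than 9 maps the full 0-2047 range exactly onto 0-255:

```c
#include <stdint.h>

/* Hypothetical helpers, assuming the layout described above: combine the
   two bytes of one RAW sample (low byte first) into the 11-bit depth
   value, then scale it to one byte for display. */
static uint16_t depth_from_bytes(uint8_t lo, uint8_t hi)
{
    return (uint16_t)lo | ((uint16_t)(hi & 0x07) << 8); /* 0-2047 */
}

static uint8_t depth_to_gray(uint16_t d)
{
    return (uint8_t)(d >> 3); /* divide by 8, since 2048 / 256 = 8 */
}
```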

Posted: 17 February 2011 07:40 PM [ # 10 ]
New Member, Total Posts: 3, Joined 2011-02-13
OK, I was able to get the distance, but there was about an 8 cm error, so I added it manually. Here is the code to help C# users if they can't get the raw depth values. If there is a better way, I'm sorry; this is meant for beginners.
I'm using the CLNUIDevice.GetCameraDepthFrameRAW function; the parameters are the same as in the CLNUIDeviceTest source code (the one that comes with the driver).

// MSDN code to convert an IntPtr to an image if you know the
// width, height and bits per pixel
int strideIN = 640 * 2;
Bitmap b = new Bitmap(640, 480, strideIN, System.Drawing.Imaging.PixelFormat.Format16bppRgb565, depthImage.ImageData);

// unsafe code to read the bytes from the bitmap just created
BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), ImageLockMode.ReadWrite, PixelFormat.Format16bppRgb565);
int stride = bmData.Stride;
System.IntPtr Scan0 = bmData.Scan0;
unsafe
{
    byte* p = (byte*)(void*)Scan0;
    int nOffset = stride - b.Width * 2;
    int nWidth = b.Width * 2;
    for (int y = 0; y < b.Height; ++y)
    {
        for (int x = 0; x < nWidth; ++x)
        {
            if (p[1] == 7) // pixel is black (probably a transparent object, or one that reflects too much light, I think)
            {
                picArray[x][y] = 0;
            }
            else
            {
                int s = p[0] + p[1] * 255;
                picArray[x][y] = s; // picArray is a 640x480 array of integers; s is the disparity
            }
            ++p;
            ++p;
            ++x;
        }
        p += nOffset;
    }
}
b.UnlockBits(bmData);

// now convert to actual distance using the code I found here,
// but I added 8 cm because it worked for me
int disparity = picArray[x][y];
int calc = (int)(100 / (-0.00307 * disparity + 3.33)) + 8;

Note: you have to allow unsafe code for this to work, and don't get alarmed if you show this image in a picture box and it's only blue and green.
Hope this helps others. Thanks to Code Laboratories for providing this driver =)
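The conversion at the end of the post, as a stand-alone function; the -0.00307 and 3.33 coefficients and the +8 cm offset are copied verbatim from the post above, not from any official calibration:

```c
/* Disparity-to-distance formula from the post above. Coefficients and
   the empirical +8 cm offset are taken verbatim from the post; this is
   not an official calibration. */
static double disparity_to_cm(int disparity)
{
    return 100.0 / (-0.00307 * (double)disparity + 3.33) + 8.0;
}
```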

Posted: 23 February 2011 12:43 PM [ # 11 ]
New Member, Total Posts: 6, Joined 2011-02-14
hf.radwan89, thanks for the code! I've been scrambling to find some C# code to convert the raw depth data to a usable distance. You should post your project somewhere. I found a different distance calc somewhere else on the net: DIST_CM = tan(DEPTH_VAL / 1024 + 0.5) * 33.825 + 5.7

Posted: 24 February 2011 10:12 PM [ # 12 ]
New Member, Total Posts: 3, Joined 2011-02-13
Hey btshrewsbury, thanks for the comment. I just tried the distance calc, but it always gave me
"24.17868 cm, 0.2417868 m, disparity = 754". The disparity value would change but the distance didn't, so I'll stick with the distance formula I'm using for now, but thanks for your help. So far I'm trying to do background subtraction using depth; I'll upload what I've done as soon as I'm done reducing noise =)

Posted: 14 March 2011 03:45 PM [ # 13 ]
New Member, Total Posts: 1, Joined 2011-03-14
Thank you so much for sharing your code, hf.radwan89!
I spent quite some time trying to figure out how to do this, and your code finally did the trick.
I had to make two minor adjustments, though, to make it work for me, because the depth image I got with the original code was rotated 90° and was missing every second line.
I switched the for loops to get the image orientation straight:
for (int x = 0; x < nWidth; ++x) { for (int y = 0; y < b.Height; ++y) {
and I deleted the increment of x at the end of the inner for loop:
++x;

Posted: 22 December 2011 10:40 AM [ # 14 ]
New Member, Total Posts: 1, Joined 2011-12-22
hf.radwan89 - 17 February 2011 07:40 PM: ok i was able to get the distance but there was about 8 cm error so i added them manually, here is the code to help c# users if they cant get the raw depth values. […]
Hi, I'm experimenting with Kinect, and here is my safe image-to-array transformation:
void RAWToArray(NUIImage Source, int[,] DataArray)
{
int Width = (int)Source.BitmapSource.PixelWidth;
int Height = (int)Source.BitmapSource.PixelHeight;
int Stride = Width * 4;
int[] TmpArray = new int[Width * Height];
byte[] BytesFromInt = new byte[4];
Source.BitmapSource.CopyPixels(TmpArray, Stride, 0);
for (int j = 0; j < Height; j++)
{
for (int i = 0; i < Width/2; i++)
{
BytesFromInt = BitConverter.GetBytes(TmpArray[i + (j * Width)]);
DataArray[(i * 2), j] = BytesFromInt[0] + 256 * BytesFromInt[1];
DataArray[(i * 2) + 1, j] = BytesFromInt[2] + 256 * BytesFromInt[3];
}
}
}
I don't know whether my code causes a performance loss, but maybe somebody needs something like this for an application built with the safe option. Anyway, your code was helpful to me. Thank you, and Code Laboratories too.

Posted: 05 February 2012 05:27 AM [ # 15 ]
New Member, Total Posts: 2, Joined 2012-02-05
IMPORT(bool) GetNUICameraDepthFrameRAW(CLNUICamera cam, PUSHORT pData, int waitTimeout = 2000);
Does pData receive the depth values? If so, what are the maximum and the minimum?