CLNUI Depthmap right side black bar
Posted: 23 November 2010 10:05 AM
New Member
Total Posts:  23
Joined  2010-03-20

Has anyone noticed that the depthmap from CLNUI has a black vertical bar on the right side?
This can be seen in the original videos and in your experiments as well, so I’m guessing it’s something in the library itself.

AlexP, have you noticed that?

cheers,

Posted: 24 November 2010 12:40 PM   [ # 1 ]
Administrator
Total Posts:  585
Joined  2009-09-17

Hey pixelnerve,

As CLNUI is still in preview, I am working on adding more features as well as fixing issues that come up. Stay tuned for updates.
Thanks for your feedback.

AlexP

Posted: 25 November 2010 01:20 AM   [ # 2 ]
New Member
Total Posts:  6
Joined  2010-11-25

Hey, do you have any idea why the depth and colour images do not match?
I mean, when you overlay the depth on top of the colour, the two images seem to have different camera frustums (i.e. the depth has a different zoom than the colour).
Is this because the cameras need a calibration step?

Cheers!

Posted: 01 December 2010 09:40 AM   [ # 3 ]
New Member
Total Posts:  2
Joined  2010-12-01

Tommygun,

The RGB image and the depth image do not match for three reasons:

- there is a parallax between the two sensors (as they are located at different places)
- the focal lengths of the two sensors may not be the same, which leads to scale differences (and is the case for our Kinect)
- misalignment of the sensors: e.g. our RGB sensor is aiming slightly higher than the depth sensor
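
The parallax item above can be put in numbers: for two cameras separated by a horizontal baseline, a point at depth Z appears shifted between the two views by roughly f·b/Z pixels, so nearby objects misalign far more than distant ones. A minimal sketch, where the focal length and baseline are illustrative assumptions rather than measured Kinect values:

```python
# Sketch: how the parallax shift between the RGB and depth views varies
# with distance. f_px and baseline_m are assumed placeholder values,
# not real Kinect calibration numbers.
f_px = 580.0        # assumed focal length in pixels
baseline_m = 0.025  # assumed horizontal offset between the sensors (metres)

def disparity_px(depth_m):
    """Horizontal pixel shift between the two views for a point at depth_m."""
    return f_px * baseline_m / depth_m

for z in (0.5, 1.0, 2.0, 4.0):
    print(f"depth {z:.1f} m -> shift {disparity_px(z):.2f} px")
```

The 1/Z falloff is why a single fixed offset can never register the two images for a whole scene: each depth value needs its own correction.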

We’ll publish the results of our accuracy analysis a.s.a.p., but it’ll take some time.

In theory both images would match perfectly if both sensors had the same sensor dimensions, shared the same optical path (e.g. via a beam splitter), and captured the exact same scene (perfect alignment of the sensors required).

best regards and greetings from Berlin,

Daniel

Posted: 01 December 2010 09:59 AM   [ # 4 ]
New Member
Total Posts:  6
Joined  2010-11-25

Hi Daniel,

yeah, I had figured that out, and in fact the only way to get perfect alignment of the two images (depth and RGB) is to calibrate the two sensors and project the 3D points built from the depth into the RGB “space”. That way one could ideally build a textured point cloud as well.

OpenCV can help with that…
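
The projection step just described can be sketched as three operations: back-project each depth pixel to a 3D point with the depth camera’s intrinsics, transform it into the RGB camera’s frame with the extrinsics R and T, then project it with the RGB intrinsics. The matrices below are illustrative placeholders, not real Kinect calibration values:

```python
import numpy as np

# Sketch of depth-to-RGB registration, assuming calibration has already
# produced intrinsics K_depth, K_rgb and extrinsics R, T mapping
# depth-camera coordinates into RGB-camera coordinates. All numbers
# here are assumed placeholders, not measured Kinect values.
K_depth = np.array([[580.0,   0.0, 320.0],
                    [  0.0, 580.0, 240.0],
                    [  0.0,   0.0,   1.0]])
K_rgb = np.array([[525.0,   0.0, 320.0],
                  [  0.0, 525.0, 240.0],
                  [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # assumed rotation between the sensors
T = np.array([0.025, 0.0, 1.0 * 0.0])  # assumed 2.5 cm baseline (metres)

def depth_pixel_to_rgb(u, v, z):
    """Map a depth pixel (u, v) with depth z (metres) to RGB pixel coords."""
    # 1. back-project to a 3D point in the depth camera's frame
    p_depth = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # 2. move the point into the RGB camera's frame
    p_rgb = R @ p_depth + T
    # 3. project with the RGB intrinsics
    uvw = K_rgb @ p_rgb
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Note that the resulting RGB coordinates depend on z, which is exactly why a depth-dependent warp (rather than a single global shift) is needed to register the two streams.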

Cheers,

-Fabrizio

Posted: 01 December 2010 10:09 AM   [ # 5 ]
New Member
Total Posts:  2
Joined  2010-12-01

Yep,

a so-called projective transformation can be used for that particular problem. The simplest solution measures four homologous points (the same four points in both datasets) in order to determine a projection matrix, which is apparently what Oliver Kreylos did in his solution. I’m not sure if OpenCV has classes for your problem.
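
The four-homologous-points idea can be written out directly: fixing the matrix’s bottom-right entry to 1 leaves eight unknowns, and each point pair contributes two linear equations, so four pairs give an exactly determined 8×8 system. A minimal sketch (the sample points are made up for illustration):

```python
import numpy as np

# Sketch: estimating a planar projective transform (homography) from
# four homologous point pairs by solving an 8x8 linear system.
# A minimal illustration, not Oliver Kreylos's actual code.
def homography_from_4pts(src, dst):
    """src, dst: four (x, y) pairs each; returns the 3x3 matrix H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), and same for v,
        # rearranged into two linear equations in the eight unknowns
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply H to a 2D point and dehomogenise."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w

# Illustrative correspondences: unit square -> arbitrary quadrilateral
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 10), (20, 12), (22, 24), (8, 20)]
H = homography_from_4pts(src, dst)
```

Such a 2D warp is only exact for points on a common plane; for a full depth image the per-pixel reprojection with R and T is the more general solution.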

best regards,

Daniel

Posted: 01 December 2010 10:20 AM   [ # 6 ]
New Member
Total Posts:  6
Joined  2010-11-25

Hey,

yeah, it does.
It has functions that can extract both intrinsic and extrinsic parameters, together with the rotation and translation (R and T) needed to go from one camera to the other.
Should you need it, this page is a very good reference for this problem:

http://nicolas.burrus.name/index.php/Research/KinectCalibration

Cheers,

-Fabrizio
