CL-Eye-Platform-SDK / OpenCV Tutorials
Posted: 14 January 2010 05:41 PM
Jr. Member

Hi all,

I’ve been learning OpenCV and studying the CL-Eye Platform SDK API and its MultiCam C++ example. I am trying to modify the MultiCam example code so that the frames from the two cameras appear side by side in a single window instead of in two separate windows as in the example, but so far my attempts have been unsuccessful.

Can anyone help me? Does anyone have, or know where to find, good tutorials or sample code for working with the Multicam SDK and OpenCV?


Thanks in advance for any help/advice.

Regards,
Alan

PS: I usually code in ActionScript 3.0 / Flex 3, so I am slowly trying to recall my C++ knowledge!
    I could write my application in AS3, but then it would only work with a single PS3 Eye camera through DirectShow. Am I correct in saying this?

 Signature 

“Strive for perfection in everything you do. Take the best that exists and make it better. When it doesn’t exist, design it, build it and Open Source it!” http://www.alangunning.com

Posted: 14 January 2010 09:42 PM   [ # 1 ]
Administrator
Gunning - 14 January 2010 05:41 PM

I am trying to modify the MultiCam C++ example code so that the frames from the 2 cameras appear side by side in the same window instead of in two windows as in the example. […] Do you have or know where to get good tutorials/sample code for working with the Multicam SDK and OpenCV?

To get two images into one larger one, you would have to create a large image (say 1280x480) and then copy the smaller images into it one by one. A more advanced method would be to blend them in, feathering the edges where the images overlap.
I was thinking of making a few posts/tutorials in our Research area about the CL-Eye Multicam SDK and OpenCV.
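
For the simple copy-based approach, here is a minimal sketch (an illustration only, not code from the SDK samples) that tiles two equally sized frames side by side into one larger image using ROIs with the same OpenCV C API used elsewhere in this thread:

#include <cv.h>

// Tile two same-sized frames side by side into a pre-allocated destination
// image (e.g. 1280x480 for two 640x480 frames of the same depth/channels).
void TileSideBySide(IplImage *left, IplImage *right, IplImage *combined)
{
    // Copy the left frame into the left half of the destination
    cvSetImageROI(combined, cvRect(0, 0, left->width, left->height));
    cvCopy(left, combined);
    // Copy the right frame into the right half
    cvSetImageROI(combined, cvRect(left->width, 0, right->width, right->height));
    cvCopy(right, combined);
    cvResetImageROI(combined);
}

The combined image would be created once up front with cvCreateImage, using the same depth and channel count as the captured frames, and then shown with a single cvShowImage call each iteration.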

AlexP

Posted: 14 January 2010 11:54 PM   [ # 2 ]
New Member

Well, here is a place to start: http://opencv.willowgarage.com/wiki/DisplayManyImages
It copies multiple images (up to 12) into one bigger image.
The CLEyeMulticamTest sample uses threads to capture, so you need a bit more than that, but the idea is the same as in that sample.
(A bit busy at the moment, but I could try to explain a bit more later about threads, pointers and shared resources.)

Posted: 17 January 2010 02:59 AM   [ # 3 ]
Administrator
Tommy - 14 January 2010 11:54 PM

Well, here is a place to start: http://opencv.willowgarage.com/wiki/DisplayManyImages […] The CLEyeMulticamTest sample uses threads to capture, so you need a bit more, but the idea is the same as in that sample.

Yes, that’s a really simple example of how to do image tiling. So in order to do this with two cameras, you would rewrite CLEyeMulticamTest to open both cameras and capture their images from a single thread. Once both frames are captured, you would tile them as shown in the OpenCV example.
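
A rough sketch of the setup, following the same pattern as the MultiCam sample (the exact enum names and function signatures should be checked against CLEyeMulticam.h; they are written here from memory of the sample, not copied from it):

#include <cv.h>
#include "CLEyeMulticam.h"

CLEyeCameraInstance _cam[2];
IplImage *pCapImage[2];
PBYTE pCapBuffer[2];

// Open and start the first two connected PS3 Eye cameras (sketch only)
bool SetupCameras()
{
    if(CLEyeGetCameraCount() < 2)
        return false;
    for(int i = 0; i < 2; i++)
    {
        // Color mode, VGA resolution, 30 fps (enum names may differ by SDK version)
        _cam[i] = CLEyeCreateCamera(CLEyeGetCameraUUID(i), CLEYE_COLOR, CLEYE_VGA, 30);
        if(_cam[i] == NULL)
            return false;
        // 640x480, 4 channels for the color mode, as in the MultiCam sample
        pCapImage[i] = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 4);
        CLEyeCameraStart(_cam[i]);
    }
    return true;
}

After this, a single capture loop can pull frames from both cameras (see the loop posted further down in this thread) and tile them into one window.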

AlexP

Posted: 17 January 2010 10:08 AM   [ # 4 ]
Jr. Member

Thanks to both of you for your suggestions.
I am currently experimenting with the DisplayManyImages OpenCV example as we speak.

I just wasn’t sure how to implement it, as the CLEyeMulticamTest code uses multiple threads, so I am now going to try to recode it with a single thread as you suggested, AlexP.

Will using a single thread degrade overall performance/frame rate by much compared with using multiple threads, say when I eventually get around to capturing and tiling images from 4 cameras?

I think for now I will try the single thread, as it is the simplest to implement and learn; my code can then be modified to accommodate multiple threads if that improves performance and I can get some help.

Thanks once again guys,

Alan


Posted: 17 January 2010 11:23 AM   [ # 5 ]
New Member

Try playing with a single thread first. I’m working with stereo pictures at the moment, and the easiest way is to use a single thread. My laptop is slow (Core 2 Duo, 2.40 GHz), so it has a hard time capturing from 2 cameras at 640x480, 30 fps. Anyway, video in 3D looks cool (stereo picture pairs). Next I need to make that red/blue glasses (anaglyph) picture, so I need to combine the two streams into one… I need to see if this poor laptop will handle that (if not, I’ll need to drop the resolution or frame rate for testing).
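
The combining step can be sketched roughly like this with the OpenCV C API (just an illustration, assuming both frames are same-sized 8-bit, 3-channel BGR images): take the red channel from the left frame and the green/blue channels from the right frame.

#include <cv.h>

// Combine a left/right BGR pair into a simple red/cyan anaglyph:
// red channel from the left eye, green and blue from the right eye.
void MakeAnaglyph(IplImage *left, IplImage *right, IplImage *anaglyph)
{
    CvSize sz = cvGetSize(left);
    IplImage *lr = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    IplImage *rb = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    IplImage *rg = cvCreateImage(sz, IPL_DEPTH_8U, 1);

    // Channel order is B, G, R for these images
    cvSplit(left, NULL, NULL, lr, NULL);   // keep only the red channel of the left frame
    cvSplit(right, rb, rg, NULL, NULL);    // keep the blue and green channels of the right frame

    cvMerge(rb, rg, lr, NULL, anaglyph);   // rebuild a BGR anaglyph image

    cvReleaseImage(&lr);
    cvReleaseImage(&rb);
    cvReleaseImage(&rg);
}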

Posted: 09 February 2010 06:32 AM   [ # 6 ]
Jr. Member

Hi Guys,

Do you think using a single thread to capture from 4 cameras would be a lot less efficient and more processor-intensive than capturing with multiple threads as in the C++ example? Or would it still be capable of capturing from all 4 cameras together?

So far the program I’ve written is based on the multiple-threads example, and I was aiming to tile the 4 images together into one large image using something like the DisplayManyImages function you mentioned.

Take the following image-capturing loop section of the C++ sample code in capture(), where I’m doing a bit of image processing; currently each camera image appears in its own window:

// image capturing loop
while(_running)
{
    // Get Raw Image
    cvGetImageRawData(pCapImage, &pCapBuffer);
    CLEyeCameraGetFrame(_cam, pCapBuffer);

    // Clone captured image
    IplImage *t = cvCloneImage(pCapImage);

    // Show Raw image
    cvShowImage(_windowName, pCapImage);

    // Undistort captured image
    cvRemap(t, pCapImage, mapx, mapy);

    cvReleaseImage(&t);

    cvShowImage(_windowNameU, pCapImage);     // Show corrected image
}

I was wondering if there is an easy way I can access the pCapImage of each camera instance here,
so that I would have the 4 pCapImage images available to me at once?

Something like, say, cam[0]->pCapImage and cam[1]->pCapImage.

I know this is not correct syntax, but it’s just to try to give you an idea of what I’m thinking….

EDIT: If it’s easiest to use a single thread to capture from multiple cameras, is there any basic example of what I need to set up in order to capture from the cameras? I am having trouble getting it working correctly.


Thanks again,
Alan


Posted: 09 February 2010 01:57 PM   [ # 7 ]
Administrator

Alan,

That depends on how much work you need to do for each camera and how fast your machine is.
On one hand, single-threaded processing will give you some advantages such as minimal latency and code simplicity; on the other hand, using multiple threads will give you the ability to handle a heavier processing load (assuming you have a multi-core CPU).
Here is how you would handle multiple-camera capture in a single thread:

// image capturing loop
while(_running)
{
    // Get image from each camera
    for(int i = 0; i < 2; i++)
    {
        cvGetImageRawData(pCapImage[i], &pCapBuffer[i]);
        CLEyeCameraGetFrame(_cam[i], pCapBuffer[i], i==0 ? 2000 : 0);
    }

    // Do your processing here...

    // Show camera/processed images
    for(int i = 0; i < 2; i++)
    {
        cvShowImage(_windowName[i], pCapImage[i]);
    }
}


The multithreaded code is a bit more complicated, since after the processing you would need to synchronize the threads in order to merge the images together. You can create a separate thread that queries each camera thread for its processed image and combines them into the resulting processed image.
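
As a rough illustration of that pattern (not the actual sample code), each camera thread could signal an event when its frame is ready, and the combining thread could wait for all of them, e.g. with the Win32 API:

#include <windows.h>

#define NUM_CAMS 2

// One auto-reset event per camera thread; each camera thread calls
// SetEvent(frameReady[i]) after it has written its latest processed frame.
HANDLE frameReady[NUM_CAMS];
// IplImage *procImage[NUM_CAMS];  // frames produced by the camera threads

DWORD WINAPI CombinerThread(LPVOID)
{
    while(true)
    {
        // Block until every camera thread has signalled a fresh frame
        WaitForMultipleObjects(NUM_CAMS, frameReady, TRUE, INFINITE);

        // ... tile/merge procImage[0..NUM_CAMS-1] into one image here,
        // guarding access to the shared frames with a critical section ...
    }
    return 0;
}

// Setup (done once):
// for(int i = 0; i < NUM_CAMS; i++)
//     frameReady[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
// CreateThread(NULL, 0, CombinerThread, NULL, 0, NULL);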

AlexP
