Hi Guys,
Do you think using a single thread to capture from 4 cameras would be a lot less efficient and more processor intensive than capturing with multiple threads, as in the C++ example? Or would it still be capable of capturing from all 4 cameras together?
So far the program I've written is based on the multiple-threads example, and I was aiming to tile the 4 images together into one large image using something like the DisplayManyImages function you mentioned.
Take the following image-capturing loop from capture() in the C++ sample code, where I'm doing a bit of image processing. Currently each camera's image appears in its own window:
// image capturing loop
while(_running)
{
    // Get raw image
    cvGetImageRawData(pCapImage, &pCapBuffer);
    CLEyeCameraGetFrame(_cam, pCapBuffer);
    // Clone captured image
    IplImage *t = cvCloneImage(pCapImage);
    // Show raw image
    cvShowImage(_windowName, pCapImage);
    // Undistort captured image
    cvRemap(t, pCapImage, mapx, mapy);
    cvReleaseImage(&t);
    // Show corrected image
    cvShowImage(_windowNameU, pCapImage);
}
I was wondering: is there an easy way I can access the pCapImage of each camera instance here, so that I'd have all 4 pCapImage images available to me at once? Something like cam[0]->pCapImage and cam[1]->pCapImage. I know that's not correct syntax, but it's just to give you an idea of what I'm thinking.
EDIT: if it's easiest to use a single thread to capture from multiple cameras, is there any basic example of what I need to set up in order to capture from the cameras? I'm having trouble getting it working correctly.
Thanks again,
Alan