D.J.T.J - 06 February 2010 11:43 AM
I'm trying to simply use the camera's SDK without the OpenCV part, and the capture command keeps failing. Why is this? All the examples show that you simply pass in a PBYTE variable, but before this happens a cv command does something to the PBYTE with the cvGetRawImage() command. What is this function doing? And should capture work if I just pass it a PBYTE variable, or does it require this function?
**OK, so I have discovered what that function does, and it should not affect why the camera's capture-image function is failing, so any ideas as to why this is? In the new SDK examples PBYTE is never instantiated, but in the old ones it was. Does the capture-image function want a pointer to an area of memory big enough for the image, or does it want a null pointer that it will then point at the image it places in memory itself?
****Okay, so people don't seem to reply that often on this blog, but I am yet another step closer to getting this to work. In fact, technically I have the CLEyeCameraGetFrame function working, but the area of memory it needs is bigger than the resolution times 3 for 24-bit colour. Why is this? It shouldn't need more than RESOLUTION WIDTH * RESOLUTION HEIGHT * BITDEPTH / 8, unless it now stores extra data the old SDK didn't store. Any help, please?
As you saw in the cv examples, the pointer that you pass to the get frame function has to point to your preallocated memory area. The SDK has nothing to do with OpenCV, nor does it depend on it. The OpenCV framework is used in the code samples only for convenience and simplicity.
The get frame function will fill your memory with the camera image. As to the size, there are two color modes:
- Grayscale -> size = width * height
- Color (RGBA) -> size = width * height * 4
Hope this helps.
AlexP