640*480 @ 15fps: 6% CPU; latency ~130ms, or around 2 frames; rendering delay ~8ms; capture/transport delay ~122ms
640*480 @ 30fps: 11% CPU; latency ~66ms, or around 2 frames; rendering delay ~7ms; capture/transport delay ~59ms
640*480 @ 40fps: 16% CPU; latency ~52ms, or around 3 frames; rendering delay ~6.5ms; capture/transport delay ~45.5ms
640*480 @ 50fps: 19% CPU; latency ~60ms, or around 3 frames; rendering delay ~10ms; capture/transport delay ~50ms
640*480 @ 60fps: 30% CPU; latency ~96ms, or around 6 frames; rendering delay ~26ms; capture/transport delay ~70ms
640*480 @ 75fps: 50% CPU; latency ~96ms, or around 7 frames; rendering delay ~32ms; capture/transport delay ~68ms

There is somehow a huge jump in CPU usage from 60fps to 75fps, but I've already reported this in one of the previous threads about the CL SDK…
Also, the jumps in CPU usage at 60 and 75fps could be explained another way: as latency grows to 6-7 frames, my algorithm itself spends more time checking for the blank frame's arrival…
I assume you're running a dual-core processor. In that case, your last result (75fps) would mean that the core doing the processing was running at 100%. I am really curious to see your algorithm for checking blank frames; it seems to me that you are spending way too much time on that. Under such conditions I would totally reject your latency finding @ 75fps.
If you want to do this research, that's fine, but one of the basic things you have to know as a researcher is that you must objectively consider and examine all the facts rather than prematurely jumping to conclusions. It seems to me that you have a tendency to do just that.
Your result at 75fps was wrong because your CPU usage was at 100%, and under such conditions you cannot possibly expect to make accurate measurements, especially when you're going through layers and layers of API calls on a system that is not even soft real-time.
So, as I said, I'm not sure why you are performing these tests this way. For a more realistic average latency test, you would have to present a blank frame at a random time, capture it, and measure its time of arrival; you would then sum those delays and average them. Since your monitor is running at 75fps (it is redrawing at a periodic interval), the timing is already not random and your latency spread will be skewed. The best test, however (it involves custom hardware), would be to pull the vsync signal from the camera and measure the time difference between that signal and the moment the captured frame is delivered to your buffer.
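A minimal sketch of that randomized measurement, assuming hypothetical hooks (present_blank_frame, grab_frame, and is_blank are placeholders for whatever your display and SDK capture calls actually are):

```cpp
#include <chrono>
#include <cstdio>
#include <random>
#include <thread>

using Clock = std::chrono::steady_clock;

// Hypothetical hooks -- replace with your actual display and capture calls.
static void present_blank_frame() { /* e.g. draw a black quad and swap buffers */ }
static const unsigned char* grab_frame() {
    static unsigned char frame[640 * 480] = {0}; // stand-in for a captured frame
    return frame;
}
static bool is_blank(const unsigned char* frame) { return frame[0] < 16; }

int main() {
    std::mt19937 rng{std::random_device{}()};
    // A random delay between trials decouples the test from the 75Hz refresh.
    std::uniform_int_distribution<int> jitter_ms(0, 500);

    const int trials = 100;
    double total_ms = 0.0;

    for (int i = 0; i < trials; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(jitter_ms(rng)));

        const auto t0 = Clock::now();
        present_blank_frame();

        // Poll captured frames until the blank one arrives.
        while (!is_blank(grab_frame())) { /* keep polling */ }

        total_ms += std::chrono::duration<double, std::milli>(Clock::now() - t0).count();
    }

    std::printf("average latency: %.2f ms over %d trials\n", total_ms / trials, trials);
    return 0;
}
```

Even this software-only version only bounds the latency from above, which is why the vsync-tap with custom hardware remains the more accurate test.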
As I said before, I would like to see how you are checking for the blank frame (your algorithm), since obviously your CPU is not fast enough to do this in real time. Optimizing it will probably drop the CPU usage and give you more accurate results.
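For what it's worth, one way the check could be made cheap, assuming an 8-bit grayscale 640x480 buffer: sample a sparse pixel grid and bail out at the first bright pixel instead of scanning the whole frame every time. The threshold and stride below are assumptions to tune for your setup:

```cpp
#include <cstddef>
#include <cstdint>

// Sparse blank-frame test: visits roughly 1/289 of the pixels (a 17x17 grid)
// and returns as soon as any sampled pixel is brighter than the threshold.
bool is_blank_fast(const uint8_t* frame, int width, int height) {
    constexpr uint8_t kThreshold = 16; // assumed: brighter than this = content
    constexpr int kStride = 17;        // assumed sampling step in both axes

    for (int y = 0; y < height; y += kStride) {
        const uint8_t* row = frame + static_cast<size_t>(y) * width;
        for (int x = 0; x < width; x += kStride) {
            if (row[x] > kThreshold)
                return false; // early exit: frame has visible content
        }
    }
    return true; // every sampled pixel was dark enough
}
```

Since a blank frame fills the whole image, sparse sampling loses essentially nothing here while cutting the per-frame work by two orders of magnitude.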
AlexP