igor1960 - 03 February 2010 01:37 AM
Alex, I’m pretty experienced with DirectX/Direct3D, so I pretty much know all the inefficiencies, etc.
The purpose of a latency test using DirectX is exactly to check that and to help you, since from what I understood that is your main targeted client.
As to whether I need to avoid rendering or not: that is a program-specific matter, as some programs require the full latency calculation including rendering. In fact, most programs other than pure image processing require rendering, and latency matters most in such cases.
So, let me then be more specific: even with your proposed solution with LEDs and without DirectX, did you measure, and can you publish, some latency figures? What I mean here: assuming the target has changed, is exactly 1 frame required to fully reflect the change of the target image, or is it more?
I understand that theoretically it should be just 1 frame. But practically, there might be some kind of buffering on both sides of the USB transport, so maybe there is more than 1 frame in the buffer and therefore the target change will reach the host after more than 1 frame.
If there is some buffering, maybe the size of the buffer is a function of the frame rate, which would explain why at higher FPS I’m getting higher latency.
This was my question.
The reason I’m asking about the purpose of your test is that, based on your setup, you cannot conclude anything with any kind of confidence, because you are measuring the total latency of the capture-display loop. This might be useful for some real-time UI/game interaction via multitouch, for example, but again it doesn’t give you any hard numbers on the performance of each sub-system.
I would rather see isolated latency measurements of both capture and display, and then their combined latency. That way we can pinpoint exactly where the latency changes.
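Something along these lines would do it. This is just a rough sketch, not my actual test code; the captureFrame/renderFrame callbacks are placeholders for whichever capture API and renderer you want to test, and the timing simply uses QueryPerformanceCounter:

#include <windows.h>
#include <cstdio>
#include <functional>

// Returns seconds from the high-resolution performance counter.
static double Now()
{
    LARGE_INTEGER freq, t;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t);
    return double(t.QuadPart) / double(freq.QuadPart);
}

// Times the capture stage and the display stage separately, so each
// sub-system's contribution can be reported on its own.
static void MeasureLoop(int frames,
                        const std::function<void()>& captureFrame,  // placeholder: blocks until a frame arrives
                        const std::function<void()>& renderFrame)   // placeholder: upload + present the frame
{
    double captureTotal = 0.0, renderTotal = 0.0;
    for (int i = 0; i < frames; ++i)
    {
        double t0 = Now();
        captureFrame();
        double t1 = Now();
        renderFrame();
        double t2 = Now();
        captureTotal += t1 - t0;
        renderTotal  += t2 - t1;
    }
    printf("avg capture: %.2f ms, avg display: %.2f ms, combined: %.2f ms\n",
           1000.0 * captureTotal / frames,
           1000.0 * renderTotal  / frames,
           1000.0 * (captureTotal + renderTotal) / frames);
}

int main()
{
    // Dummy stages just to show the shape of the test; swap in real capture/render calls.
    MeasureLoop(100, [] { Sleep(16); }, [] { Sleep(2); });
    return 0;
}

With the two stages timed separately you can see at a glance whether a change in total latency comes from the capture side or the display side.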
The main client for this camera is not DirectShow; it is in fact the API provided by the SDK. The DirectShow component is mostly meant for end users who are not so much interested in high performance as in the ability to use this great camera with various web chat/video capture programs.
The internal CL-Eye SDK API does not introduce any time/fps-varying buffering mechanisms; it will always give you the last frame captured. Due to the nature of the camera design, there is no internal buffer on board: any data captured is transmitted immediately and passed to the user buffer upon complete receipt of the video frame. Exactly because of this, the camera might be very unstable under high CPU usage, since any data lost will result in the camera image being corrupted/dropped.
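To make that concrete, the hand-off behaves roughly like a single-slot "latest frame wins" exchange. The sketch below is only an illustration of that pattern as described above, not the actual SDK internals:

#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <vector>

// Sketch of a "latest frame only" hand-off: the producer (a completed USB
// frame) overwrites the single slot, and the consumer always receives the
// newest complete frame, never a backlog.
class LatestFrameSlot
{
public:
    void Publish(const std::vector<uint8_t>& frame)   // called when a full frame has been received
    {
        std::lock_guard<std::mutex> lock(m_);
        slot_ = frame;                                 // overwrite, never queue
        fresh_ = true;
        cv_.notify_one();
    }

    std::vector<uint8_t> WaitForFrame()                // called by the user thread
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return fresh_; });
        fresh_ = false;
        return slot_;                                  // always the most recent frame
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::vector<uint8_t> slot_;
    bool fresh_ = false;
};

With a single slot like this, nothing queues up behind the consumer, which is why there is no fps-dependent buffering to add latency.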
From my tests (with the CL-Eye SDK), the time it takes for frame data to propagate through the image conversion and lens distortion algorithms is about 2 ms (tested on an i7 920 CPU, Vista Ultimate x64). So in the worst case you will have a latency of the time to capture the image + 2 ms. Of course, on a slower CPU this time will vary, and if the CPU cannot keep up, you will see increased latency (something like what you showed in your results). With the CPU busy, it will take longer to do the other tasks involved (rendering, display, etc.), so this timing will not increase linearly with the frame rate but will be a more complex function of fps.
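As a back-of-the-envelope illustration of those numbers (the only inputs are the ~2 ms processing figure above and the frame period; the frame rates listed are just examples, and rendering/display or a busy CPU would add on top of this):

#include <cstdio>

int main()
{
    const double processingMs = 2.0;              // conversion + undistortion figure quoted above (i7 920)
    const int fpsList[] = { 30, 60, 75, 120 };    // example frame rates
    for (int fps : fpsList)
    {
        double frameMs = 1000.0 / fps;
        printf("%3d fps: %5.1f ms frame time + %.1f ms processing = %5.1f ms\n",
               fps, frameMs, processingMs, frameMs + processingMs);
    }
    return 0;
}

So at 120 fps the floor works out to roughly 10 ms, and at 30 fps roughly 35 ms, before any rendering or CPU contention is added.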
I would be really interested to see the results of the same test using the SDK for capture and your custom DX renderer for the display.
AlexP