Hey everybody,
So this is probably better posted on the OpenCV Yahoo group, but I find it a real pain to navigate over there, so I thought I would try here and see if anyone has experience with this.
What I’ve done so far is capture an image from a modified PS3 Eye (new lens and IR bandpass filter), then threshold and blur the output. The image below shows the result after that processing.
The image above was taken in a fully lit room, and the bright point is a modified IR LED.
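In case it's useful, the preprocessing is roughly along these lines. I'm on the C++ interface, and the threshold value and blur size here are just placeholders rather than my exact settings:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);               // the PS3 Eye shows up as an ordinary capture device here
    cv::Mat frame, gray, blurred, binary;

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);                // work on a single channel
        cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2.0);         // smooth out sensor noise
        cv::threshold(blurred, binary, 200, 255, cv::THRESH_BINARY);  // keep only the bright IR spot

        cv::imshow("binary", binary);
        if (cv::waitKey(1) == 27)                                     // Esc to quit
            break;
    }
    return 0;
}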
What I need to do now is locate the X and Y co-ordinates of that point on the screen.
I tried the Hough circle transform, but the tracking point is often not circular enough to be detected reliably (especially in motion), so I was considering some form of blob tracking instead.
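For what it's worth, the Hough attempt looked roughly like this, run on the blurred grey frame from the loop above. The parameter values are just ones I was experimenting with, nothing tuned:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// Rough Hough circle attempt; every number below is an experiment, not a tuned value.
void findPointWithHough(const cv::Mat &blurred)
{
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                     1,        // dp: accumulator resolution
                     20,       // minimum distance between detected centres
                     100, 15,  // Canny high threshold / accumulator threshold
                     2, 30);   // min / max radius in pixels

    // Fine when the spot is a clean disc, but it misses once the spot smears in motion.
    for (const cv::Vec3f &c : circles)
        std::cout << "centre: (" << c[0] << ", " << c[1] << "), radius " << c[2] << "\n";
}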
Continuous tracking is a higher priority for me than accuracy at the moment.
I have seen a number of tracking examples where the object is tracked, a bounding box is applied, and the central co-ordinate of the box is calculated. This is the sort of thing I would like to do (rough sketch below). I guess it's much like the face tracking example, but without the Haar classifier (blobs!).
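Something like this sketch is what I have in mind, though I'm not sure it's the right approach; the function choices and the area cutoff are guesses on my part:

#include <opencv2/opencv.hpp>
#include <vector>

// Find the white blobs in the thresholded image, put a bounding box around each,
// and take the box centre as that point's position. Because it loops over every
// blob, it should in principle cope with more than one LED at a time.
std::vector<cv::Point2f> findBlobCentres(const cv::Mat &binary)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary.clone(), contours,        // clone: older findContours modifies its input
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> centres;
    for (const auto &contour : contours)
    {
        if (cv::contourArea(contour) < 10)             // ignore specks of noise (cutoff is a guess)
            continue;

        cv::Rect box = cv::boundingRect(contour);      // bounding box around the blob
        centres.emplace_back(box.x + box.width * 0.5f,
                             box.y + box.height * 0.5f);
    }
    return centres;
}

I've also seen the centre computed from cv::moments (m10/m00 and m01/m00) instead of the bounding box, but I don't know whether one is noticeably more stable than the other.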
Also bear in mind that the final implementation may have two or more points to detect independently.
Hope someone with more experience can help guide me on this one; I have the OpenCV book, so pointers to specific functions would be perfect for me.
-K