CL-Eye Platform - Quad Pack Contest
Posted: 15 March 2010 11:30 AM   [ # 16 ]
New Member
Total Posts:  1
Joined  2010-03-15

Hi,
First post, and I can't start without a big thank you to the CL team. Good job!

These are research ideas; I will follow up this post with application ideas.

1- Building a 3D capture system for moving subjects. Imagine a system that can capture the 3D motion of a fish swimming in a jar, much like the work done here: http://www.dickinson.caltech.edu/Research/MultiTrack

2- Increasing the frame rate. Since the cameras can be synchronized, it should be possible to fire each camera at a different start time (a 0.25-frame shift), making the 4 cameras together provide a 300 FPS stream of the same subject.
This would require a prism or mirror mechanism so that all 4 cameras look from the very same point of view. Making videos of a bursting water-filled balloon is then one step closer. grin
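
A minimal sketch of the interleaving step (grabFrame below is a stub standing in for the real capture call, not a CL-Eye API, and the quarter-frame offsets are assumed to come from the hardware sync):

// Interleave four 75 FPS cameras, each triggered 1/300 s after the
// previous one, into a single 300 FPS frame sequence.
#include <cstdio>
#include <vector>

struct Frame { double timestamp; /* pixel data omitted */ };

// Stub standing in for the real per-camera capture call.
Frame grabFrame(int /*cam*/) { return Frame{0.0}; }

int main() {
    const int kCams = 4;
    const double kPeriod  = 1.0 / 75.0;        // per-camera frame period
    const double kStagger = kPeriod / kCams;   // 1/300 s shift between cameras

    std::vector<Frame> merged;
    for (int n = 0; n < 75; ++n) {             // one second of output
        for (int cam = 0; cam < kCams; ++cam) {
            Frame f = grabFrame(cam);
            // Re-stamp onto the ideal 300 Hz grid; in practice the offsets
            // would be verified against the hardware sync signals.
            f.timestamp = n * kPeriod + cam * kStagger;
            merged.push_back(f);
        }
    }
    std::printf("%zu frames in 1 s -> 300 FPS\n", merged.size());
    return 0;
}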

3- I would definitely use it as a vision system for my ATRV-Mini robot, providing fast image feeds from different directions while the vehicle is on the move.

4- Stacking the cameras to get a higher-resolution image with better colors and a much wider dynamic range, since each camera will adjust its exposure independently.
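
A rough sketch of the fusion step with OpenCV. This is my own simplification, a plain floating-point average, and it assumes the four views are already registered; OpenCV's cv::createMergeMertens offers a proper exposure-fusion alternative.

// Fuse four differently exposed frames of the same (aligned) scene into
// one image: pixels saturated in one exposure are filled in by the others.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat fuseExposures(const std::vector<cv::Mat>& frames) {  // CV_8UC3, same size
    CV_Assert(!frames.empty());
    cv::Mat acc = cv::Mat::zeros(frames[0].size(), CV_32FC3);
    for (const cv::Mat& f : frames) {
        cv::Mat f32;
        f.convertTo(f32, CV_32FC3, 1.0 / 255.0);   // work in floating point
        acc += f32;
    }
    acc *= (1.0 / frames.size());                  // average the exposures
    cv::Mat out;
    acc.convertTo(out, CV_8UC3, 255.0);
    return out;
}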

Wish me luck ... with respect!

Basem

Posted: 17 March 2010 07:01 AM   [ # 17 ]
New Member
Total Posts:  1
Joined  2010-03-17

Hello, and thank you for reading.

My idea is to bring human-computer interaction to the next level.

Take 4 screens (built from the standard acrylic, IR LEDs (or lasers), 4 projectors, and of course 4 PlayStation 3 Eyes), all facing away from each other.

Picture a cube with no top or bottom; each side is tilted away from where the user would be standing or sitting to allow a more comfortable experience.

All the screens run from either one computer or four separate ones connected through some sort of software. You can then let all four users play games together, share ideas for business faster and more productively, and ensure a more seamless interaction between the computer and the users.

Great for parties, and perfect for businesses big or small to share and collaborate on their ideas.

Good Luck to everyone else who enters!

-Will Qawasmi

Image Attachments
Diagram.jpg
Posted: 18 March 2010 08:55 PM   [ # 18 ]
New Member
Total Posts:  5
Joined  2010-01-07

The outer 2 cameras with filters, zoomed out and optimized for 3D; the other 2 cameras without filters, combined like an SLR, with one zoomed in and the other zoomed out. If we can add enhanced audio array processing, then we can tie sounds to blobs. Take the assembly and place it on a small servo-controlled gimbal, and presto: we have a 3D motion- and sound-tracking eye that can follow and track multiple objects based on their volume or motion.
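
A sketch of just the gimbal side, under my own assumptions: simple proportional control that nudges the pan/tilt rig until the tracked blob's centroid sits at frame center (setServo is a hypothetical placeholder for the servo interface, and the sign conventions depend on how the rig is mounted):

#include <algorithm>

void setServo(int /*channel*/, double /*angleDeg*/) { /* drive the servo here */ }

struct Gimbal {
    double pan = 90.0, tilt = 90.0;               // current angles, degrees

    void track(double blobX, double blobY, int frameW, int frameH) {
        const double kGain = 0.05;                // proportional gain, tune per rig
        pan  += kGain * (blobX - frameW / 2.0);   // assumed: positive error -> pan right
        tilt += kGain * (blobY - frameH / 2.0);
        pan  = std::clamp(pan,  0.0, 180.0);
        tilt = std::clamp(tilt, 0.0, 180.0);
        setServo(0, pan);
        setServo(1, tilt);
    }
};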

This can all be done with the USB bus and existing products; what we really need is a cheap USB host to run the program on (a small PC, or even better a mobile phone with micro-USB AB support and a port of the API to that OS's language, or Silverlight on the new Windows Phone).

Signature

Robert L. Sawyer III

SawyerIII @ Assorted

Posted: 23 March 2010 11:31 PM   [ # 19 ]
New Member
Total Posts:  1
Joined  2010-03-23

Hi,
Nice concepts. I have some ideas:

1. I would like to put 4 cameras on a car to create a system that reduces accidents and smooths driving.

The four cameras would be as follows:

a. The first will obviously be at the front, near the driver's end, to keep track of objects ahead (both near and a bit farther out). It will raise an alarm when an object suddenly appears, such as a child or other person trying to cross the road, and can be used to stop the car automatically if needed.
b. The second will also be at the front, but a bit lower, near the number plate. It will keep track of the road and of speed bumps or objects that could harm the tires or hamper speed; for example, on detecting a speed bump it should reduce the speed.
c. The third should be on the back, tracking objects approaching from behind; it can alert the driver if the vehicle behind is not maintaining a proper distance.
d. The fourth can be mounted below the car to check health and security; it keeps track of the tires and other unwanted things.

We can combine and process the inputs from those cameras to make an autopilot-like system, or an accident-proof car.

2. This one relates to the defence and aerospace domain, where we can place four cameras at different locations, or pointing in different directions around some point, to cover a 360-degree view and track flying objects or UAVs. This is generally done with video tracking and a moving gimbal system, but with four cameras we can create a cheap system.
Instead of demanding video tracking we can use simple object recognition, and from an object's position in each view and the delay between sightings we can calculate the flying object's speed, trajectory, etc.
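
A hedged sketch of the geometry behind this: recover the object's 3D position from two calibrated cameras by intersecting their viewing rays (midpoint of closest approach). Two such fixes a known time apart then give speed and heading.

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// c1,d1 and c2,d2: center and unit viewing direction of each camera's ray
// toward the target (directions come from calibration + pixel coordinates).
Vec3 triangulate(Vec3 c1, Vec3 d1, Vec3 c2, Vec3 d2) {
    Vec3 r = c1 - c2;
    double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    double d = dot(d1, r),  e = dot(d2, r);
    double denom = a * c - b * b;        // near zero means near-parallel rays
    double t1 = (b * e - c * d) / denom;
    double t2 = (a * e - b * d) / denom;
    Vec3 p1 = c1 + t1 * d1, p2 = c2 + t2 * d2;
    return 0.5 * (p1 + p2);              // midpoint of closest approach
}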

3. We can use multiple cameras in guided weapon systems, tracking the weapon's path with surface cameras plus a camera on the weapon itself, which gives better options for controlling the weapon against a long-range target.

Image Attachments
car.jpg
Posted: 31 March 2010 11:24 AM   [ # 20 ]
New Member
Total Posts:  1
Joined  2010-03-31

I would create a multi-touch (MT) table platform that allows for 3D detection of inputs, using multiple laser light planes (LLP) stacked on top of each other inside a table built like a craps table (edges sticking up some 8 inches or so, with 3D detection occurring within that box). With multiple cameras detecting the points where the planes are broken, the depth of those points can be determined. The table would still be a standard Surface-like table, except that I would project a 120 Hz image onto the glass base to allow for active 3D images (with the appropriate eyewear). Depending on the fidelity of depth detection with 2 cameras in the confined detection area, I may not need more than 2 as IR detectors; the remaining 2 would keep their original filters and be used for object detection, for example dice rolled on the surface.

If fiducials are not clear and distinct enough with this method to properly determine orientation/position, I would change over to the visible light spectrum, using visible laser light planes (all red lasers, or alternating laser colors to aid in depth detection) and white gloves. I should easily be able to pick out hands this way, and as I develop I could improve detection of other objects.
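
A sketch of the alternating-color variant, assuming red and green planes: decide which plane a blob broke by which color dominates inside its bounding box; depth is then the plane index times the plane spacing. The HSV thresholds are rough assumptions to be tuned.

#include <opencv2/opencv.hpp>

// Returns 0 for the red plane, 1 for the green plane.
int planeIndex(const cv::Mat& bgrFrame, const cv::Rect& blob) {
    cv::Mat hsv;
    cv::cvtColor(bgrFrame(blob), hsv, cv::COLOR_BGR2HSV);
    cv::Mat redLo, redHi, green;
    // Red hue wraps around 0 in OpenCV's 0..179 hue range.
    cv::inRange(hsv, cv::Scalar(0,   80, 80), cv::Scalar(10,  255, 255), redLo);
    cv::inRange(hsv, cv::Scalar(170, 80, 80), cv::Scalar(180, 255, 255), redHi);
    cv::inRange(hsv, cv::Scalar(45,  80, 80), cv::Scalar(75,  255, 255), green);
    int red = cv::countNonZero(redLo) + cv::countNonZero(redHi);
    return (red > cv::countNonZero(green)) ? 0 : 1;
}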

For starters I would attempt to replicate standard tabletop games. Something like ping-pong, for example, could be played, with the user's palm orientation/position within the 3D area acting as a paddle. Pool could be played using a person's outstretched finger as the cue, allowing english to be incorporated.

Thanks for listening!

Posted: 08 April 2010 08:02 PM   [ # 21 ]
New Member
Total Posts:  1
Joined  2010-04-08

Hi, I am doing research in brain-computer interfaces, and I would use the cameras to explore physical reactions alongside the EEG's mental ones, building programs and games that react to your mental three-dimensional world while capturing your three-dimensional self moving in space. It would essentially blend Project Natal with a brain-computer interface to take the next step in interactive games, for the first step has already been taken.

Posted: 18 April 2010 10:13 AM   [ # 22 ]
New Member
Total Posts:  3
Joined  2010-04-18

Hey folks, this looks like an awesome library! And I found this SDK right as this cool contest is going on. I'd love to get my hands on a free set of these cameras. Hope you like my idea:

Idea for a four-eye application

I would create a PC-based robotic consciousness platform with multi-sensor integration and multiple levels of visual and sound attention focus. This platform would enable exploration of pattern recognition and behavior algorithms.

Vision:
The first two cameras would be mounted high on a simple chassis as a fixed width stereo vision pair.  This pair of cameras would be fixed in the “56-degree field of view” mode for wide field imaging of the robot’s environment.  This imaging system would be mounted on a pan-tilt servo system capable of driving the weight of the stereo pair to cover a wide chunk of solid angles in the robot’s environment.  The speed requirements for motion on this servo system are low, so the weight is less of an issue (low speeds require less torque).

The second two cameras would be mounted on independent fast servo systems with minimal mass (just the single cameras) and their lenses would be fixed in the “75-degree field of view” mode for close-up inspection of an object.  These individual cameras could be driven to track objects moving quickly in the space with high precision and to provide 2D close-up texture inspection.  These cameras may even be mounted at the end of longer arm-like structures so that they could be moved into complex positions in space.  Basically, this will give the robot the capacity to have selective high resolution in its visual field (like the human eye where we have high resolution color vision at the targetable center and a broad low resolution peripheral vision with a relatively static field determined by head pose).

Audio:
The first pair of linked stereo vision cameras would be mounted so that they point in the same direction but are rotated 90 degrees from one another about the lens axis. That is, one camera would image the stereo field at 640x480 and the other at 480x640. This would orient the audio arrays orthogonally, so that digital beamforming could produce sound localization (or spatial sound filters) in two dimensions (left-right and up-down), giving the robot the capability of complex sonographic imaging of its environment over a broad range of solid angles. The difficulty is exact time synchronization of the 2 sets of 4 microphones, but it might be that the audio sample clock (16 kHz) on one board could be hacked so that its set of 4 mics is slaved to the other set. The physical separation of the two microphone arrays would also give increased spatial resolution on a sound source.
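
A minimal delay-and-sum sketch for one 4-mic linear array (the ~2 cm mic spacing is my assumption, not a measured figure for the Eye's array). Scanning the steering angle and picking the one with maximum output energy localizes the source in azimuth; the second, rotated array gives elevation the same way.

#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> delayAndSum(const std::vector<std::vector<float>>& mics,
                               double steerAngleRad) {
    const double kFs = 16000.0;      // audio sample rate, Hz
    const double kSpacing = 0.02;    // assumed mic spacing, meters
    const double kSpeed = 343.0;     // speed of sound, m/s

    // Per-mic delay (in samples) for a wavefront from the steering angle.
    std::vector<int> delay(mics.size());
    int minDelay = 0;
    for (size_t m = 0; m < mics.size(); ++m) {
        delay[m] = static_cast<int>(std::lround(
            m * kSpacing * std::sin(steerAngleRad) / kSpeed * kFs));
        minDelay = std::min(minDelay, delay[m]);
    }
    for (int& d : delay) d -= minDelay;    // shift so all delays are >= 0

    // Align the channels and average: sound from the steering direction
    // adds coherently, everything else partially cancels.
    size_t n = mics[0].size();
    std::vector<float> out(n, 0.0f);
    for (size_t m = 0; m < mics.size(); ++m)
        for (size_t i = 0; i + delay[m] < n; ++i)
            out[i] += mics[m][i + delay[m]] / static_cast<float>(mics.size());
    return out;
}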

A single speaker system would be mounted below the stereo pairs (i.e. act as a mouth) and could be used to generate pulses for reflected sound/sonar mapping or for speaking into its environment in response to a stimulus.  This speaker could merely be plugged into the associated PC speaker port.

Sensor Modifications:
The microphone sensor arrays on the independently mounted cameras might provide redundant information to the 2D phased array on the robot's stereo vision head. A microphone merely outputs a time-varying voltage encoding sound pressure, so I assume the microphones could be hacked off the internal board and replaced with a variety of signals providing a voltage level that encodes a given metric from the world. Essentially, the microphone inputs can be converted into general-purpose analog inputs to acquire from a variety of inexpensive sensors (a minimal read-out sketch follows the list below). One may have to bypass AC coupling filters on the input, and it may be that the PC "volume" control on that sound interface controls an onboard amplifier. Some examples of sensors that could replace the microphones:

  • Ultrasonic range finders: several available systems generate a simple voltage encoding distance instead of the high-frequency sound info. This would provide a high-resolution distance measure to complement the stereo data.
  • Temperature sensors: either remote-targeting IR beam systems outputting a voltage for the temperature at the camera's target, or a simple resistive temperature device powered off the USB 5V bus for measuring local ambient temperature.
  • Feedback from a variety of contact-based analog pressure sensors (for tactile feedback), or switch closures indicating some other manner of physical interaction with the robot's system.
  • Humidity sensors, air flow meters.
  • Ambient light sensors to determine illumination source location.
  • IR sensors (i.e. for remote controls or IR communications).
  • Capacitive proximity sensors: many canned systems generate a voltage encoding the "capacitance" of a system based on a couple of electrodes and a 120 kHz (or so) voltage. These are, for example, used to detect people or child seats for airbag deployment in cars.
  • Magnetic sensors could be used to detect the magnetic field of the earth, to measure currents flowing in wires, or for other applications. This could assist in navigation.
  • Tilt sensors or x/y/z accelerometers could provide analog signals encoding the orientation of the robot.
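
The read-out sketch promised above, under my own assumption that the hacked channel carries an amplitude-coded signal (e.g. a ranger encoding distance as amplitude): average the magnitude over a short window to get one "sensor reading".

#include <cmath>
#include <vector>

double sensorReading(const std::vector<float>& window) {
    double acc = 0.0;
    for (float s : window) acc += std::fabs(s);
    return acc / window.size();    // mean signal magnitude over the window
}
// At a 16 kHz sample rate, 160-sample windows yield 100 readings per second.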

Supplementary Systems:
I would add a small 8-bit Atmel AVR controller-based system in parallel with the cameras, communicating with the PC-based control application over a USB serial stream. This controller would provide the drive signals that steer the RC servo systems to target angles in space. In general, this system would allow the creation of complex responses to the signals acquired from the cameras and microphones.
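
A sketch of the PC side of such a link. The three-byte packet format (0xFF sync, channel, angle) is invented here purely for illustration, and sendByte stands in for the actual serial-port write:

#include <cstdint>

void sendByte(uint8_t /*b*/) { /* write the byte to the opened COM port */ }

void setServoTarget(uint8_t channel, uint8_t angleDeg) {
    sendByte(0xFF);        // sync byte; keep channel and angle below 0xFF
    sendByte(channel);
    sendByte(angleDeg);    // 0..180 degrees
}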

Software:
A C++ based application platform would be constructed to acquire from the cameras using the CL-Eye drivers and the Windows audio interface. OpenCV algorithms would be leveraged for stereo vision, and 3D kinematics calculations (based on the known geometry of the cameras and the RC servos) would drive the selective focus. Then, a wealth of high-bandwidth (i.e. 8 kHz) analog signals from audio and other sensors would be acquired in parallel with the video. All of these input data and output control modalities would be fed into a common space where modular, parallel analysis could be conducted.
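
A sketch of the OpenCV stereo step, assuming rectified grayscale frames from the fixed pair (StereoBM API as in modern OpenCV releases):

#include <opencv2/opencv.hpp>

cv::Mat disparityMap(const cv::Mat& leftGray, const cv::Mat& rightGray) {
    // 64 disparity levels, 15x15 matching block; tune both per rig.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
    cv::Mat disp16, disp;
    bm->compute(leftGray, rightGray, disp16);    // fixed-point, scaled by 16
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);
    return disp;  // depth = focalLength * baseline / disparity, per calibration
}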

Once such an application platform is in place, you have many multi-modal information streams (color vision, depth, localized sound, temperature, proximity, etc.) and the real fun can begin. At this point, I would have a real-world platform for the implementation of pattern recognition and machine learning algorithms, from which I could begin testing out AI algorithms. A simple multi-core PC system would drive this application, so that a broad range of data could be stored on disk and data structures could be created for search and recall of stored pattern information.

Then the real fun begins: I would start testing training methods for showing this critter how to play in the real world. Audio and visual cues could be derived from the environment to provide positive and negative evaluations of the robot's actions. More complex actuators could be added to the system, etc. Various levels of signal processing would feed data into the learning algorithm, and all of it could be implemented in parallel (much like the human brain). Language could be detected from the audio, etc.

Since it would be driven by a Wi-Fi enabled PC, such a system could have a wealth of information access on the back end. For example, it could focus in and identify an object, receive a label via audio (i.e. "that is a cat") and then search for more examples in Google Images or elsewhere to further solidify its model of what a cat is. A commercially available GPS unit could be connected by a serial port interface for robot navigation. Eventually you could ask it simple questions like "what's the weather like" … The possibilities extend endlessly, all enabled by these extraordinarily cheap multi-mode sensor and acquisition systems.

The system would probably have to be a desktop PC, to allow the installation of 4 independent USB controllers on the PCIe bus and handle the high data rates from the individual cameras simultaneously. A laptop probably wouldn't easily provide such bandwidth over USB. A single one of these cameras probably saturates the 480 Mb/s USB bus.
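A quick back-of-envelope check of that bandwidth claim (my numbers, not measurements): 640 x 480 pixels x 75 FPS ≈ 23 Mpixels/s, which is roughly 184 Mb/s at 8 bits per pixel (raw Bayer) and roughly 369 Mb/s at 16 bits per pixel (YUV422), before USB protocol overhead. Since USB 2.0's practical payload is well below its 480 Mb/s signaling rate, one camera at full rate can indeed occupy most of a bus, which supports the one-controller-per-camera layout.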

In summary:

  • 4 Eye cameras
  • Some hardware to mount the stereo vision pairs (aluminum/wood, clamps, and other cheap bits)
  • RC servo motors in pan-tilt configuration (price driven down by the RC helicopter/car market)
  • Optional extra sensor components (in general a few dollars each) plus associated PCBs and such; many times these sensor chips can be sampled for free from the vendor
  • A small, cheap dev board containing a controller and wires to drive the system motors (see Arduino)
  • A multi-core PC (laptop or otherwise) capable of driving the system and providing further back-end processing
  • All components are readily accessible on a hobbyist budget (i.e. under $500 total without the PC). This opens up endless possibilities for cool pattern recognition and behavioral systems work on a sensor-rich platform.

Posted: 26 April 2010 12:40 PM   [ # 23 ]
New Member
Total Posts:  20
Joined  2010-01-06

Because a motion capture project which I'm currently building was already mentioned in this thread by someone else, I'll propose my other idea:

High speed VGA camera with programmable processing pipeline

To build this we need:

- 4x PS3 Eye cameras
- Focusing screen
- Lens
- Custom casing
- Frame grabber/processor: a standard PC with gigabit Ethernet LAN and four USB 2.0 controllers

Software:

- Windows Embedded Standard 2009: an operating system without useless drivers and functionality (like the GUI), small and fast
- Code Laboratories Quad Pack
- Software to handle frame processing and I/O: custom processing using SIMD extensions and/or NVIDIA CUDA (or OpenCL)

Target: 640x480 @ 187 FPS - a cheap high-speed camera system with reasonable resolution; it can be expanded to 1280x960 using a bigger focusing screen and more cams + grabbers.
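
A sketch of the frame-assembly step, on my reading of the target: each camera images one quadrant of the focusing screen at 320x240 (the mode that reaches 187 FPS), and the grabber stitches the four into one 640x480 frame. Whether the optics split the screen exactly this way depends on the layout in the attached diagram; this is just the memory-copy step.

#include <opencv2/opencv.hpp>

cv::Mat stitchQuadrants(const cv::Mat quad[4]) {  // four 320x240 frames, same type
    cv::Mat full(480, 640, quad[0].type());
    quad[0].copyTo(full(cv::Rect(  0,   0, 320, 240)));  // top-left
    quad[1].copyTo(full(cv::Rect(320,   0, 320, 240)));  // top-right
    quad[2].copyTo(full(cv::Rect(  0, 240, 320, 240)));  // bottom-left
    quad[3].copyTo(full(cv::Rect(320, 240, 320, 240)));  // bottom-right
    return full;                                         // 640x480 composite
}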

The setup idea is presented in the attached image.

We also need to synchronize all cams by connecting the VSYNC signal of the first camera to the FSIN pin of the second one, and so forth in a daisy chain, the way we discovered and described in the thread: Multicam sync

Software on the frame grabber will be capable of processing captured frames in real time, for example blob tracking or mocap marker tracking, and of sending the results over gigabit LAN to a host computer.

Image Attachments
fourcamshighspeedcamerasystem.png
Posted: 02 May 2010 10:54 AM   [ # 24 ]
Sr. Member
Total Posts:  162
Joined  2009-09-17



The contest is now officially closed and we have begun judging all entries. Please be patient as we decide the results, at which time we will announce and contact the winner for his/her delivery information. We truly thank all participants and hope everyone involved had fun coming up with these great ideas!
