I read on a forum that someone hooked Audacity up to his PS3 Eye and was able to read four separate audio streams with your driver ( http://www.rrfx.net/2009/11/ps3-eye-4-channel-audio-tests-on-ubuntu.html ). It reminded me of the microphone array articles I've read from Microsoft, like http://www.microsoft.com/whdc/device/audio/MicArrays.mspx .
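Just to make that concrete, here's a rough sketch of pulling all four capsules at once with Python's sounddevice module. It's untested on my end, and the device name and the 16 kHz sample rate are assumptions on my part, so check what your system actually lists:

```python
# Rough sketch: record the PS3 Eye's four capsules as one 4-channel stream,
# assuming the driver exposes them as a single capture device.
# The device name and 16 kHz rate are assumptions -- run
# `python -m sounddevice` to see what your system actually reports.
import sounddevice as sd
import soundfile as sf

FS = 16000        # assumed per-channel rate of the PS3 Eye array
SECONDS = 5

recording = sd.rec(int(SECONDS * FS), samplerate=FS, channels=4,
                   device="USB Camera (PS3 Eye)",   # assumed device name
                   dtype="int16")
sd.wait()                                           # block until done
sf.write("ps3eye_4ch.wav", recording, FS)
print("Saved", recording.shape[0], "frames x", recording.shape[1], "channels")
```

Opening the resulting WAV in Audacity should show the four tracks the forum post was talking about.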
That forum post and those articles made me wonder whether the amazing things your driver has done for multi-touch will extend to the unique microphone array the PS3 Eye also has built in. Combining audio and video in the foundation API could be pushed with the same piece of hardware. Imagine a room with a couple of these in key locations: that should be all the hardware necessary to track blobs by the sound they are making, in a virtualized multi-user GUI where the entire wall is interactive and... whew, I'm getting carried away. It would be neat, though.
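To illustrate what I mean by "tracking blobs by their sound": with a multi-channel capture like the one above, you can estimate the time difference of arrival (TDOA) between pairs of capsules, and pair-wise delays plus the known capsule spacing give a rough bearing toward the source. Here's a purely illustrative GCC-PHAT sketch (my own code, nothing from the driver; the function name and parameters are mine):

```python
# Illustrative GCC-PHAT TDOA estimate between two microphone channels.
# Pair-wise delays across the four PS3 Eye capsules could be combined
# into a rough bearing toward whatever is making noise.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Return the estimated delay (seconds) of `sig` relative to `ref`."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12            # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:           # optionally limit search to plausible delays
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)
```

Given a delay tau and a capsule spacing d, the bearing would follow from something like arcsin(tau * c / d) with c the speed of sound, but that's the standard textbook approach, not anything I know the driver or foundation to be doing.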
Here are my questions…
Does the driver expose the array information correctly, so that the Windows 7 microphone array support works properly?
Have you run the microphone array validation tool ( http://www.microsoft.com/whdc/device/audio/MicArray_tool.mspx )?
Is this level of audio already considered in the foundation (which would make me just a ranting idiot)?
Anyway, thanks.