We did some signal analysis research a while ago and thought it would be cool to base a project on it.
What it does
Using an HTML5 page, devices listen for specific sounds and send the times and locations at which they were heard to a central database, from which the data is retrieved for processing.
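Each detection could be stored as a small record like the one below (field names and values are illustrative, not the actual schema):

```json
{
  "deviceId": "phone-3",
  "heardAt": 1456032000.124,
  "lat": 40.4433,
  "lon": -79.9436
}
```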
How we built it
We figured out how to use AudioContexts to capture audio data from device microphones, then did some math to determine when that data contained the sounds we were looking for. Each device sends its time and location to a Linode server. A separate web client retrieves the most recent data from each device, does some more math to estimate where the sound came from, then prints the source location to a canvas.
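The detection step can be sketched as a simple RMS-energy threshold over a buffer of time-domain samples, like the ones `AnalyserNode.getFloatTimeDomainData` fills in the browser (the threshold value here is a made-up placeholder, not our tuned one):

```javascript
// Returns true when the buffer's RMS energy exceeds a threshold.
// `samples` is a Float32Array of time-domain audio samples in [-1, 1].
function detectSound(samples, threshold = 0.1) {
  let sumSquares = 0;
  for (const s of samples) sumSquares += s * s;
  const rms = Math.sqrt(sumSquares / samples.length);
  return rms > threshold;
}

// Synthetic check: near-silence vs a loud sine burst.
const quiet = new Float32Array(1024).fill(0.001);
const loud = Float32Array.from({ length: 1024 }, (_, i) => 0.5 * Math.sin(i / 10));
console.log(detectSound(quiet), detectSound(loud)); // false true
```

In the browser the buffer would come from `getUserMedia` piped through an `AudioContext` analyser node; the detection math itself is plain JavaScript either way.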
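The localization step works on time differences of arrival (TDOA): a sound reaches closer devices earlier, so the arrival-time offsets between receivers constrain where the source can be. A brute-force grid search is a hypothetical stand-in for the actual math, but it shows the idea:

```javascript
const SPEED_OF_SOUND = 343; // m/s

// receivers: [{x, y, t}] with positions in metres and arrival time t in seconds.
// Searches a square grid for the point whose predicted arrival-time differences
// (relative to receiver 0) best match the observed ones.
function locateSource(receivers, extent = 100, step = 0.5) {
  const ref = receivers[0];
  let best = null;
  for (let x = -extent; x <= extent; x += step) {
    for (let y = -extent; y <= extent; y += step) {
      let err = 0;
      for (const r of receivers.slice(1)) {
        const predicted =
          (Math.hypot(x - r.x, y - r.y) - Math.hypot(x - ref.x, y - ref.y)) /
          SPEED_OF_SOUND;
        const observed = r.t - ref.t;
        err += (predicted - observed) ** 2;
      }
      if (!best || err < best.err) best = { x, y, err };
    }
  }
  return best;
}

// Synthetic check: a source at (10, 20) heard by three receivers.
const src = { x: 10, y: 20 };
const receivers = [{ x: 0, y: 0 }, { x: 50, y: 0 }, { x: 0, y: 50 }].map(r => ({
  ...r,
  t: Math.hypot(src.x - r.x, src.y - r.y) / SPEED_OF_SOUND,
}));
console.log(locateSource(receivers)); // should land near x: 10, y: 20
```

This is why clock synchronization mattered so much for testing: at 343 m/s, a few milliseconds of clock skew between devices shifts the estimate by a metre or more.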
Challenges we ran into
We were unable to test the final functionality because we lacked both a quiet space and a sufficient number of time-synced computers.
Accomplishments that we are proud of
It looks pretty sometimes
What we learned
Next time we need to learn more about our tools beforehand, to avoid wasting so much time.
What's next for PHONAR
Let's see what happens when we use a ton of perfectly synchronized computers in a huge sound booth