We were interested in the ways animals can obtain information about their environment using sound, and wanted to mimic that as cheaply as possible with our laptops, using only their integrated hardware.
What it does
It opens a visualization (built in pygame) that shows a dot indicating the angle the loudest sound came from.
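Mapping the estimated angle to the dot's position on screen is just a little trigonometry. A minimal sketch of the idea (the function name, window size, and angle convention here are illustrative, not the project's exact code):

```python
import math

def dot_position(angle_deg, center=(320, 240), radius=200):
    """Map a direction-of-arrival angle to screen coordinates for the dot.

    Convention (illustrative): 0 degrees is straight ahead (top of the
    window), positive angles are to the right.
    """
    theta = math.radians(angle_deg)
    x = center[0] + radius * math.sin(theta)  # left/right offset
    y = center[1] - radius * math.cos(theta)  # screen y grows downward
    return (round(x), round(y))
```

Each frame, the dot just gets redrawn at `dot_position(latest_angle)`.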
How we built it
We used pygame to build the visualization, pyaudio to capture the webcam microphone audio, and numpy and scipy for the math.
Challenges we ran into
The stereo webcam microphones on our laptops are only about 8cm apart, which at a 44100 Hz sample rate means the maximum delay between the two channels is only about 10 samples, so the angular resolution we can get is pretty coarse.
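The core of the approach is cross-correlating the two channels to find the delay between them, then converting that delay to an angle. A minimal sketch, where the constants and function name are illustrative (8cm spacing, 343 m/s speed of sound):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at room temperature
MIC_SPACING = 0.08      # ~8cm between the webcam microphones
SAMPLE_RATE = 44100     # Hz

def estimate_angle(left, right):
    """Estimate direction of arrival (degrees) from two mono channels.

    Positive lag means the left channel arrived later, i.e. the source
    is closer to the right microphone.
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # delay in samples
    delay = lag / SAMPLE_RATE                 # delay in seconds
    # Clamp so arcsin stays within its domain despite noise
    ratio = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))
```

With only ~10 samples of possible lag between the extremes, the output of `np.argmax` quantizes the angle into a small number of buckets, which is the resolution problem described above.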
Also, it turns out that the differences in webcam microphone quality are pretty significant. Even though Pete's were way older, Jesse's were less sensitive to sound and harder to use because of that.
Accomplishments that we're proud of
We worked out something that initially seemed conceptually difficult, and learned new math and a new library along the way.
What we learned
We learned about Fourier analysis (though we ended up not needing it), which neither of us knew anything about before the hackathon. We also learned about audio processing, which we had no prior experience with.
What's next for ( ͡° ͜ʖ ͡°)
We'd love to be able to pick out a specific sound to locate, but this is really hard. We have some ideas about using specific ultrasonic tones, but those probably wouldn't work within the time limits of the hackathon, or without specifically chosen speakers.
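If we ever try the tone idea, the plan would be to band-pass filter both channels around a known beacon frequency before cross-correlating, so only the beacon contributes to the delay estimate. A rough sketch with scipy (the 18 kHz beacon frequency and function name are made-up examples):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

SAMPLE_RATE = 44100  # Hz

def isolate_tone(samples, tone_hz=18000, bandwidth_hz=500):
    """Band-pass filter one channel around a hypothetical beacon tone.

    Uses sosfiltfilt (forward-backward filtering) so the filter itself is
    zero-phase and adds no delay of its own -- important, since the
    inter-channel delay is the whole signal we care about.
    """
    low = tone_hz - bandwidth_hz / 2
    high = tone_hz + bandwidth_hz / 2
    sos = butter(2, [low, high], btype="bandpass",
                 fs=SAMPLE_RATE, output="sos")
    return sosfiltfilt(sos, samples)
```

Both channels would be run through the same filter before the cross-correlation step, so anything outside the beacon's band is ignored.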
It would also be cool to get a third microphone up in the mix so we could potentially add an extra dimension to our output ( ͡☉ ͜ʖ ͡☉)
Maybe doing this with a Raspberry Pi to make it more portable?
To try it on your own laptop
Remember that you need stereo webcam microphones, which Macs (and some other computers) don't seem to have.
Also, you're gonna have to have:
- pygame, pyaudio, numpy, and scipy installed in whatever Python environment you're using
Then you can give it the old run-a-roo with