Inspiration

Drones are essential for firefighters, delivery services such as Amazon, and military operations. However, the GPS units, cameras, radar/lidar, and sensor arrays traditionally mounted on drones are heavy and significantly reduce flight time. Our idea is to locate the source of a sound in 3-D space using two microphones a known distance apart, computing the angle and direction the sound arrived from. Sounds produced at different angles should yield measurably different angle calculations from the two microphones' data. In addition, when a sound is played through a cylinder with a microphone inside it, diffraction produces sound patterns of different strengths; we would like to see whether we can invert that data and identify the shape of the cylinder from the recordings alone.
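The two-microphone angle idea above can be sketched with the standard far-field time-difference-of-arrival (TDOA) formula; the speed of sound and microphone spacing below are illustrative values, not measurements from our setup.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def arrival_angle(delta_t, mic_distance):
    """Estimate the arrival angle of a far-field sound source from the
    time difference of arrival (delta_t, seconds) between two
    microphones separated by mic_distance (meters).

    Returns the angle in degrees, measured from the microphone axis.
    """
    # Extra path length the sound travels to reach the farther microphone
    path_diff = SPEED_OF_SOUND * delta_t
    # Clamp to [-1, 1] to guard against measurement noise
    cos_theta = max(-1.0, min(1.0, path_diff / mic_distance))
    return math.degrees(math.acos(cos_theta))

# A source broadside to the pair reaches both microphones at once:
print(arrival_angle(0.0, 0.5))  # → 90.0
```

With only two microphones the angle is ambiguous about the microphone axis, which is why varying the source angle and comparing the resulting calculations matters.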

We would like to see whether machine learning techniques such as deep neural networks or convolutional neural networks (ConvNets) can identify the patterns in the data we receive and translate them into the shapes or angles that produced the recordings, using Google's TensorFlow.
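A minimal sketch of the kind of TensorFlow model we mean, assuming the input is a short two-channel audio snippet and the label is the arrival angle discretized into bins (all layer sizes and shapes below are illustrative, not our actual architecture):

```python
import tensorflow as tf

# Illustrative shapes: 1024 samples per microphone, two microphones,
# and the arrival angle discretized into 36 ten-degree bins.
NUM_SAMPLES, NUM_MICS, NUM_ANGLE_BINS = 1024, 2, 36

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_SAMPLES, NUM_MICS)),
    # 1-D convolutions can pick up inter-microphone timing patterns
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_ANGLE_BINS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Framing the angle as a classification problem lets the softmax output double as a confidence estimate over directions.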

What it does

Locates a drone (microphone) in 3-D space using sound.

How we built it

Python, TensorFlow modeling, and Google Colab.

Challenges we ran into

Understanding the data, which was provided in MATLAB's .mat format and included 4-D arrays.
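.mat files like the ones we received can be read from Python with SciPy; the variable name and 4-D array shape below are hypothetical stand-ins for the provided dataset.

```python
import numpy as np
from scipy.io import loadmat, savemat

# Create a stand-in .mat file so this snippet is self-contained;
# in practice we loaded the dataset we were given instead.
savemat("recordings.mat", {"recordings": np.zeros((10, 2, 1024, 3))})

data = loadmat("recordings.mat")          # dict of variable name -> array
recordings = data["recordings"]           # MATLAB arrays come back as NumPy arrays
print(recordings.shape)                   # 4-D, e.g. (trial, mic, sample, axis)
```

`loadmat` also returns MATLAB metadata keys such as `__header__`, so indexing by the variable name is the reliable way to get the array out.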

Accomplishments that we're proud of

Working with machine learning tools and achieving 96% accuracy on the dataset.

What we learned

Using ML techniques, we were able to accurately predict the drone's location from the dataset. This technology could increase a drone's flight time by reducing the weight of the equipment it must carry.

What's next for Locating Drones with Sound

Identifying objects from sound data alone, perhaps using datasets recorded with cylinders and other objects of different sizes.

Built With

Python, TensorFlow, Google Colab