The data points are in blue, and the relative pitch shifts from the base tone are in orange.
Chris needs to read trends in data all the time, but reading each data point's value individually is time-consuming and hard to interpret. We wanted to use audio to give him a more intuitive way to detect trends and outliers in his plots.
What it does
Our tool reads x and y coordinate values from a .csv file. We group the y coordinates into discrete bins based on how far they are from the mean of the data. We then play one tone per data point, shifting its pitch according to the point's distance from the mean: higher pitches for values above the mean, lower pitches for values below it. When the audio plays, trends emerge as patterns in the relative pitch changes. This is a good preliminary way to detect trends in the data, which Chris can then investigate further.
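The binning step described above can be sketched in plain Python. The bin width (half a standard deviation) and the two-semitones-per-bin mapping here are illustrative assumptions, not necessarily the project's exact values:

```python
def pitch_shifts(ys, semitones_per_bin=2, bin_width=0.5):
    """Map each y value to a pitch shift (in semitones) relative to the mean.

    Values above the mean get positive shifts, values below get negative
    shifts; the size of the shift grows with distance from the mean.
    """
    n = len(ys)
    mean = sum(ys) / n
    std = (sum((y - mean) ** 2 for y in ys) / n) ** 0.5 or 1.0
    shifts = []
    for y in ys:
        # Number of bin_width-sized standard-deviation steps from the mean.
        bin_index = int((y - mean) / (bin_width * std))
        shifts.append(bin_index * semitones_per_bin)
    return shifts
```

With this mapping, an outlier lands several semitones away from the base tone while typical points stay near it, which is what makes outliers easy to hear.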
How we built it
We built our tool entirely in Python. We used a library called aupyom (GitHub repo: https://github.com/pierre-rouanet/aupyom) to manipulate and play the audio.
Challenges we ran into
One of the biggest challenges we ran into was figuring out how to represent data points as audio intuitively. We understand that many people, including Chris, cannot discern small changes in notes without a lot of training or practice, so we could not rely on something like different piano keys to represent different values. We landed on relative pitch because it requires less training and makes outliers in the data very obvious.
We also ran into issues with grabbing user input from the keyboard to make the tool more interactive. We settled on a simple arrow-forward option that Chris could use to navigate point by point linearly through the data. However, there are many more functions that could be incorporated in future iterations of this project given more time.
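The forward-stepping navigation could look something like the sketch below. The class and method names are ours, not the project's; in the real tool each step would be triggered by an arrow-key press and followed by playing the shifted tone:

```python
class Navigator:
    """Step linearly through a list of per-point pitch shifts.

    Calling forward() advances to the next data point and returns its
    shift; at the end of the data it stays on the last point.
    """

    def __init__(self, shifts):
        self.shifts = shifts
        self.index = -1  # positioned before the first point

    def forward(self):
        # Advance unless we are already on the last point.
        if self.index < len(self.shifts) - 1:
            self.index += 1
        return self.shifts[self.index]
```

Keeping navigation separate from playback like this would also make it easier to add back-stepping or jump-to-point commands later.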
Accomplishments that we're proud of
Chris does not think that the sounds are annoying, and he thinks that this is a good first step to an interesting tool.
What we learned
We all learned more about programming with audio and handling keyboard input in Python. Some members of the team used Python for the first time in this project and were introduced to key libraries like pandas and numpy. We also learned a little about audio signal processing.
What's next for Data non-visualization
We would love to incorporate more keyboard input so that Chris could step back and forth through the data and jump to specific points to listen to them. We would also like to incorporate multi-channel audio, which would probably require building on the audio mixing library we used, or creating our own using PyAudio. Finally, we would like to print a set of diagnostic information for Chris, such as the number of data points in the set, the mean and standard deviation of the data, and the location of outliers. Paired with the general audio trends, this could provide a fuller description of the data.
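The diagnostic printout described above could be as simple as the sketch below. The two-standard-deviation outlier cutoff is an assumption for illustration:

```python
def diagnostics(ys, outlier_z=2.0):
    """Summarize a data series: count, mean, standard deviation, and the
    indices of points more than outlier_z standard deviations from the mean.
    """
    n = len(ys)
    mean = sum(ys) / n
    std = (sum((y - mean) ** 2 for y in ys) / n) ** 0.5
    outliers = [i for i, y in enumerate(ys)
                if std and abs(y - mean) > outlier_z * std]
    return {"count": n, "mean": mean, "std": std, "outliers": outliers}
```

Announcing these numbers before playback would give a listener a frame of reference for the pitch changes that follow.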