We have all seen individuals in public places covering their ears, avoiding situations, or having meltdowns. Typical individuals have a hard time understanding why someone acts this way, and often avoid them or judge the parents. Giving them a taste of what it is like inside those minds can help them empathize and then interact well. In the end, those with special needs just want to be loved, and I hope this skill does a little to make the world a more understanding and friendly place.
What it does
It delivers a rich media experience that simulates what a person with hypersensitivity to audio hears. First, it emulates being in a restaurant as a typical person; then it demonstrates what that same experience is like for someone with auditory hypersensitivity by playing the same sounds and narration, but at the levels and mix they actually hear.
How I built it
I built it starting from Alexa demo code on GitHub, then recorded and mixed custom audio.
Challenges I ran into
The two main challenges I ran into were accessing the hosted audio files and getting the mixdown syntax right. I would have liked to use more than 4 tracks at a time in APL for Audio, so I had to do some of the mixing within the audio file itself.
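To illustrate the 4-track constraint described above, here is a rough sketch of an APL for Audio document that mixes four sources with the standard Mixer component; the audio URLs and volume amounts are hypothetical placeholders, not the skill's actual assets:

```json
{
  "type": "APLA",
  "version": "0.91",
  "mainTemplate": {
    "item": {
      "type": "Mixer",
      "items": [
        { "type": "Audio", "source": "https://example.com/narration.mp3" },
        {
          "type": "Audio",
          "source": "https://example.com/dish-clatter.mp3",
          "filters": [{ "type": "Volume", "amount": 1.5 }]
        },
        {
          "type": "Audio",
          "source": "https://example.com/crowd-chatter.mp3",
          "filters": [{ "type": "Volume", "amount": 1.5 }]
        },
        {
          "type": "Audio",
          "source": "https://example.com/hvac-hum.mp3",
          "filters": [{ "type": "FadeIn", "duration": 1000 }]
        }
      ]
    }
  }
}
```

Anything beyond tracks like these would have to be pre-mixed into a single audio file before upload.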
Accomplishments that I'm proud of
I am proud of the fact it is a new way to use and experience Alexa.
What I learned
I learned how to use APL for Audio to deliver a rich audio experience.
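As a minimal sketch of how a skill hands an APL for Audio document to the device, the handler returns an `Alexa.Presentation.APLA.RenderDocument` directive. The document body and URL below are stubs standing in for the real restaurant mix:

```javascript
// Stub APLA document; in the real skill this would hold the
// Mixer/Sequencer tree with the restaurant narration and sounds.
const aplaDocument = {
  type: "APLA",
  version: "0.91",
  mainTemplate: {
    // Hypothetical placeholder URL, not the skill's hosted asset.
    item: { type: "Audio", source: "https://example.com/restaurant-mix.mp3" }
  }
};

// Directive asking Alexa to render the APL for Audio document.
const directive = {
  type: "Alexa.Presentation.APLA.RenderDocument",
  token: "hypersensitivity-sim",
  document: aplaDocument
};

console.log(directive.type);
```

In an ASK SDK v2 handler this directive would be attached with `handlerInput.responseBuilder.addDirective(directive).getResponse()`.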
What's next for Hypersensitivity to Audio Simulation
- Additional scenarios to choose from. Right now it is a single scenario, a restaurant, but I would like to have a user select from several.
- Stereo playback on stereo pairs and screen-based Alexa devices, like a Kindle Fire or Fire TV. This will give the audio experience greater depth and better positioning of sounds.