We, Andres and Sunjana, both have siblings with autism, and we know that children with autism want to communicate. The problem is that a distinctive aspect of their communication is difficult for parents and other caretakers to understand. Echolalia is a verbal behavior in which a child repeats chunks of language they have heard elsewhere, often from pop culture and their everyday environment. These utterances can seem nonsensical, but we know through personal experience that a child's choice and use of a phrase always serves an intent, whether for pleasure or to convey meaning.
What it does
Using speech recognition and tone analysis to capture and contextualize speech, we hope to bring these chunks of language to life. Our prototype, a journal where parents log and analyze instances of echolalia, models an ideal tool to facilitate comprehension.
How we built it
We used HTML and CSS for the initial website design; React, Python, and Django ran the tone-analysis pipeline. We drew the outline for the website in Figma and experimented with React and Vokaturi for the audio analysis.
Challenges we ran into
The most difficult task was converting the API, written solely in Python, into something our HTML front end could use. To solve this, we downloaded the API's entire SDK and fitted the Python file with specific references to our sources, then linked the API to the web app using Django. Another challenge was designing a logical front end for the user; our less experienced coders took the opportunity to learn HTML and create something useful for the target community.
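The bridge described above can be sketched roughly as follows. This is a hedged, minimal stand-in, not our actual code: `analyze_tone` is a hypothetical placeholder for the Vokaturi call, and in the real project the endpoint was wired up as a Django view returning JSON to the HTML page.

```python
import json

def analyze_tone(audio_bytes: bytes) -> dict:
    """Hypothetical stand-in for the Vokaturi tone analysis.

    Returns placeholder emotion probabilities; in the real app these
    come from the downloaded SDK.
    """
    return {"neutral": 0.10, "happy": 0.55, "sad": 0.15, "angry": 0.10, "fear": 0.10}

def tone_endpoint(request_body: bytes) -> str:
    """Shape of the JSON payload the front end renders (sketch of the Django view)."""
    emotions = analyze_tone(request_body)
    dominant = max(emotions, key=emotions.get)
    return json.dumps({"emotions": emotions, "dominant": dominant})
```

The key design point is that the analysis stays in Python and the front end only ever sees a small JSON payload, which is what linking through Django gave us.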
Accomplishments that we're proud of
Andres: Developing and pitching an idea to several people and generating passion for it.
Sunjana: Discovering an appropriate API for our project, and using my experiences with my brother to enrich the purpose of the site.
Albi: Learning and using HTML to create a front end.
Muhammed: Implementing API/Django
Noor: Working in a team of entirely new people. First time using Git effectively.
What we learned
Andres: How to work together given our distinct skill sets, strengths, and weaknesses; Slack, GitHub, Figma.
Albi: HTML, CSS, Bootstrap, GitHub, autism, echolalia, and what an API is.
Noor: A lot about autism and echolalia, Bootstrap, and using the Git terminal.
Mohammed: React JS
What's next for Echo Journal
Given that children with autism often draw their echolalic speech from specific source material, we would like to implement an API that determines where exactly an utterance came from. For example, Andres' brother Diego is obsessed with Disney: his echolalic utterances often come from Disney movies and TV shows, even though we often don't notice it. If a parent can understand the context from which a child is drawing language, they will be better able to piece together what their child is trying to say.
We would also like to implement an API that calculates how frequently a phrase occurs in a recording, so that Echo Journal remains practical for longer recordings and therefore everyday use. We would also like to integrate technology that can analyze speech from children with verbal difficulties in order to produce an accurate tone reading. Lastly, implementing a database that saves analyzed recordings with their corresponding emotion on a page would help parents understand their child's echolalia over time.
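The phrase-frequency idea could be sketched like this, assuming a speech-to-text transcript of the recording is already available (e.g. from an ASR service). The function names here are ours for illustration and are not part of the current prototype.

```python
from collections import Counter

def phrase_frequency(transcript: str, phrase: str) -> int:
    """Count case-insensitive, non-overlapping occurrences of a logged phrase."""
    return transcript.lower().count(phrase.lower())

def top_phrases(transcript: str, phrases: list[str]) -> list[tuple[str, int]]:
    """Rank the parent's logged phrases by how often each occurs in the transcript."""
    counts = Counter({p: phrase_frequency(transcript, p) for p in phrases})
    return counts.most_common()
```

For example, given a long transcript and the phrases a parent has already journaled, `top_phrases` would surface which echolalic utterances dominated the recording, which is the summary a parent needs for everyday-length audio.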