Inspiration

Ping Lau told us about the need across the justice system for accurate, easily accessible audio data for translation and archival purposes. Capturing speech this way should increase productivity across the court system by reducing the time spent in court on translation services. It also helps clients in the court system, who can speak naturally instead of pausing repeatedly while translation is completed.

What it does

The iOS app automatically saves audio to the device, storing each speaker or speech session as a discrete file. The files are displayed in a list view, and once the user has finished transcribing or translating the speech sessions, they can erase the data from the device.
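As a rough illustration of that flow, here is a minimal Swift sketch of the session-file bookkeeping. It assumes recordings are stored as .m4a files in the app's Documents directory; the actual storage location and format in the app may differ.

```swift
import Foundation

// Minimal sketch of the session-file bookkeeping described above. It assumes
// recordings are stored as .m4a files in the app's Documents directory; the
// real app's storage location and format may differ.
struct SessionStore {
    private let documents = FileManager.default.urls(
        for: .documentDirectory, in: .userDomainMask)[0]

    /// List every saved speech-session file, for display in the list view.
    func sessionFiles() -> [URL] {
        let all = (try? FileManager.default.contentsOfDirectory(
            at: documents, includingPropertiesForKeys: nil)) ?? []
        return all.filter { $0.pathExtension == "m4a" }
    }

    /// Erase every session once the user is done transcribing/translating.
    func eraseAll() {
        for url in sessionFiles() {
            try? FileManager.default.removeItem(at: url)
        }
    }
}
```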

How we built it

We designed a framework that extracts discrete audio files without special equipment; it currently runs on any iOS 9 device. We also built a Java framework so the app can run on laptop and desktop machines.
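The writeup doesn't spell out the segmentation details, but one way this kind of splitting can work on iOS is with AVAudioRecorder metering: close the current file when the level drops, and start a fresh file for the next speech burst. The sketch below is an illustration under that assumption; the file naming, threshold, and recorder settings are not taken from the project's actual code.

```swift
import AVFoundation

// Illustrative sketch of silence-based segmentation with AVAudioRecorder
// metering: when the level drops below a threshold, the current file is
// closed so the next speech burst lands in a fresh one. The file naming,
// threshold, and settings here are assumptions, not the project's actual code.
final class SessionRecorder {
    private var recorder: AVAudioRecorder?
    private var sessionIndex = 0
    private let silenceThreshold: Float = -40.0   // dBFS; assumed cutoff

    /// Begin recording a new discrete session file in Documents.
    /// (An AVAudioSession recording category must also be configured; omitted.)
    func startNewSession() throws {
        sessionIndex += 1
        let url = FileManager.default.urls(for: .documentDirectory,
                                           in: .userDomainMask)[0]
            .appendingPathComponent("session\(sessionIndex).m4a")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44_100.0,
            AVNumberOfChannelsKey: 1
        ]
        recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder?.isMeteringEnabled = true
        recorder?.record()
    }

    /// Poll periodically (e.g. from a timer); closes the current file once
    /// the metered level falls below the silence threshold.
    func checkForSilence() {
        guard let recorder = recorder else { return }
        recorder.updateMeters()
        if recorder.averagePower(forChannel: 0) < silenceThreshold {
            recorder.stop()
            self.recorder = nil
        }
    }
}
```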

Challenges we ran into

  • Properly identifying blank audio spots, a.k.a. what counts as quiet? (see the sketch after this list)
  • Some difficulty extracting the data from the audio file when using the simulator.
  • Using a water bottle as nunchucks.
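
On the "what is quiet?" question from the first bullet: a single fixed cutoff tends to misfire in a noisy room, so one plausible refinement (not necessarily what we shipped) is to compare readings against a calibrated noise floor and require several consecutive quiet frames before declaring silence. The numbers below are illustrative assumptions.

```swift
// Sketch of one way to answer "what is quiet?": instead of a single hard-coded
// cutoff, compare each metered power reading (dBFS) to a calibrated noise
// floor and only declare silence after several consecutive quiet readings.
// The margin and frame count are illustrative assumptions.
struct SilenceDetector {
    var noiseFloor: Float = -55.0     // measured during a calibration pass
    let margin: Float = 10.0          // readings within 10 dB of the floor count as quiet
    let requiredQuietFrames = 20      // e.g. ~1 s at a 50 ms polling interval
    private var quietFrames = 0

    /// Feed one averagePower(forChannel:) reading; returns true once the
    /// signal has stayed near the noise floor for long enough.
    mutating func register(power: Float) -> Bool {
        if power < noiseFloor + margin {
            quietFrames += 1
        } else {
            quietFrames = 0
        }
        return quietFrames >= requiredQuietFrames
    }
}
```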

Accomplishments that we're proud of

  • Hard work done as a team.

What we learned

  • Multiple discrepancies between different audio format preferences.
  • Noise is hard.
  • Sound channels have value.

What's next for UpdatedAlgorithm

  • Completing the Java desktop version for Linux/Windows/OS X machines
  • Porting the logic to Android