‘Sound is the vocabulary of our world’: acoustic features are among the physical properties of the material world.

What it does

It sonifies the physical world, using spatial sound techniques to let those who cannot see the world FEEL it.

How we built it

  1. Scene and object description, e.g.:

Scene: {"Scene1": ["road", "busy", "urban"]}

Objects: [{"Image1": ["car", "van"]}, {"Image2": ["houses"]}, {"Image3": ["people", "children"]}]

  2. Image information retrieval: {object, object attribute, size, distance, position}

  3. Sound source retrieval from open-source databases (Creative Commons, Universal Music Group API)

  4. Map each sound source into the soundscape using its localisation information
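The localisation step above can be sketched in a few lines. This is a minimal illustration rather than our actual pipeline: the `DetectedObject` record and the field-of-view mapping are assumptions standing in for the real computer-vision output.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the retrieved image information:
# {object, object attribute, size, distance, position}
@dataclass
class DetectedObject:
    label: str
    attribute: str
    size: float        # apparent size in the frame, 0..1
    distance: float    # estimated metres from the listener
    x_position: float  # horizontal frame position, 0 (left) .. 1 (right)

def to_spatial_params(obj: DetectedObject, field_of_view_deg: float = 90.0):
    """Map an object's image position and distance to spatial audio parameters.

    Azimuth: linear mapping of horizontal frame position onto an assumed
    camera field of view, centred at 0 degrees.
    Gain: inverse-distance attenuation, clamped so nearby objects don't clip.
    """
    azimuth = (obj.x_position - 0.5) * field_of_view_deg
    gain = 1.0 / max(obj.distance, 1.0)
    return azimuth, gain

car = DetectedObject("car", "moving", size=0.3, distance=4.0, x_position=0.25)
azimuth, gain = to_spatial_params(car)
# A car a quarter of the way across the frame pans left of centre,
# with gain falling off as distance grows.
```

The same azimuth/gain pair can then drive whichever spatial renderer is available, from simple stereo panning up to an immersive playback system.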

Challenges we ran into

  • the type and reliability of the information we can obtain from computer vision
  • I/O communication between components
  • synchronisation across code written for different operating systems
  • the quality of sound sources retrieved automatically online

Accomplishments that we're proud of

  • Combining multiple systems to solve a real-world problem for people with visual impairment

What we learned

  • Interfacing between multiple systems to solve a problem
  • The L-ISA immersive sound playback system

What's next for Test

  • fuzzy word search
  • sound source selection
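A fuzzy word search for sound source selection could start from approximate string matching, so that a detected label still finds a sound file whose name is not an exact match. A minimal sketch using Python's standard-library difflib, with a made-up local sound index standing in for the online databases:

```python
import difflib

# Hypothetical local index of sound files keyed by descriptive name;
# in practice these names would come from the retrieved online sources.
SOUND_LIBRARY = ["car_engine", "van_horn", "children_playing",
                 "house_ambience", "busy_street"]

def find_sound(query: str, cutoff: float = 0.4):
    """Return the library entry whose name best matches the query.

    difflib.get_close_matches does approximate matching, so a detected
    label like "cars" still resolves to "car_engine". Returns None when
    nothing scores above the cutoff.
    """
    matches = difflib.get_close_matches(query, SOUND_LIBRARY, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(find_sound("cars"))      # car_engine
print(find_sound("children"))  # children_playing
```

Tuning the cutoff trades recall against the risk of mapping an object to an unrelated sound, which is exactly the quality problem we hit with automatic retrieval.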

Built With
