Inspiration
Large audio/video calls (e.g. classrooms, job fairs, exhibitions) currently rely on breakout rooms to manage smaller conversations. This doesn't fully emulate the physical experience, as you lose the context of the larger event.
What it does
Multiple users can join a virtual space and navigate it using the W, A, S and D keys. Moving around the space alters audio volume based on each user's proximity to others. By moving to specific parts of the room, users can hold smaller, more focused conversations without being completely cut off from the main event.
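The proximity mechanic can be sketched as a pure function mapping distance to a gain in [0, 1]. This is a minimal illustration with an assumed linear falloff; the radii and falloff curve are assumptions, not the project's actual values.

```javascript
// Hypothetical helper: Euclidean distance between two users on the map.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Assumed tuning constants (in map units), not taken from the project.
const FULL_VOLUME_RADIUS = 50;  // full volume within this distance
const MUTE_RADIUS = 300;        // silent beyond this distance

// Map distance to a gain: 1 when close, 0 when far, linear in between.
function volumeFor(listener, speaker) {
  const d = distance(listener, speaker);
  if (d <= FULL_VOLUME_RADIUS) return 1;
  if (d >= MUTE_RADIUS) return 0;
  return 1 - (d - FULL_VOLUME_RADIUS) / (MUTE_RADIUS - FULL_VOLUME_RADIUS);
}
```

The resulting gain would then be applied to each remote peer's audio element, e.g. `audioEl.volume = volumeFor(me, peer)`, whenever a position update arrives.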
How we built it
The project is a web application. The frontend is a React application styled with Bootstrap, with a canvas element used for the map. The backend is an Express application. The apps communicate via different protocols depending on the data:
- The map data is communicated via WebSockets.
- The video and audio feeds use WebRTC peer-to-peer, with WebSockets acting as the signalling channel.

The prototype is deployed on Heroku with CI from GitHub.
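The signalling role of the WebSocket server can be sketched as a simple relay that forwards offers, answers and ICE candidates between peers. This is a minimal sketch under assumed message fields (`type`, `from`, `to`); the project's actual protocol may differ.

```javascript
// In-memory registry of connected peers: peer id -> send function.
const peers = new Map();

// Called when a peer's WebSocket connects (id scheme is assumed).
function register(id, send) {
  peers.set(id, send);
}

// Relay a signalling message to its addressee; returns true if delivered.
// The server never inspects the SDP/ICE payload, it just forwards it.
function relay(rawMessage) {
  const msg = JSON.parse(rawMessage);
  const target = peers.get(msg.to);
  if (!target) return false; // addressee not connected
  target(JSON.stringify(msg));
  return true;
}
```

With a real WebSocket server (e.g. the `ws` package alongside Express), each connection would call `register(id, ws.send.bind(ws))` and pass incoming messages to `relay()`; the peers then open a direct WebRTC connection for media.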
Challenges we ran into
Our team had no WebRTC experience, so it took a considerable amount of time for us to understand its implementation. Careless bugs could have been avoided if we had used TypeScript instead of JavaScript. Our team also spans two different time zones, and it was a challenge to work remotely while making sure everyone stayed in sync.
Accomplishments that we're proud of
The completed prototype achieved the features we set out to build and wanted to test. It started a good conversation about what the project's roadmap could include.
What we learned
- WebRTC
- Canvas
- WebSockets
What's next for soundscape
The current implementation uses peer-to-peer WebRTC. Moving forward, we'd like to redesign the architecture to make the application more scalable.