Move 'n Groove
In the age of virtual work and learning, many social situations have changed because of the mediums we interact over. Often in meetings and social events, people split up and hold conversations in smaller groups. With conventional audio/video chat applications, only one person can speak at a time without interruption. Our idea was to create a 2D space where users can move their avatar near other users to simulate an in-person social situation, where you hear those nearest to you best.
How we built it
To build the project, we first set up a server and client in Python using PodSixNet, a lightweight multi-user networking library. We then created a user interface with Pygame: we drew avatars for the users, let each user move their avatar with the arrow keys, displayed user nicknames, and assigned each user an audio file. As more users join the server, every existing user is notified of the new user's presence and sees the new user's position on the screen in real time. Each user hears the other users' music as they move their avatars around, with the volume of each stream proportional to the proximity of the two users. To do this, we calculated the distance between two avatars and normalized it, so that the closer two users are, the louder the volume. If two users are far enough apart, they will not hear each other's audio at all. This works for any number of users in the arena: you hear everyone's audio simultaneously, each at a volume determined by your current distance to them.
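The distance-to-volume mapping above can be sketched as a small Python function. This is a minimal sketch, not the project's actual code: the function name, the linear falloff, and the `MAX_HEARING_DISTANCE` cutoff are assumptions for illustration.

```python
import math

# Hypothetical cutoff: avatars farther apart than this hear nothing.
MAX_HEARING_DISTANCE = 400  # pixels

def volume_for(my_pos, other_pos, max_dist=MAX_HEARING_DISTANCE):
    """Map the distance between two avatars to a volume in [0.0, 1.0]."""
    dx = my_pos[0] - other_pos[0]
    dy = my_pos[1] - other_pos[1]
    dist = math.hypot(dx, dy)
    # Normalize: volume falls off linearly with distance and reaches
    # zero at max_dist, so distant users are fully silent.
    return max(0.0, 1.0 - dist / max_dist)
```

The result can be fed straight into a call like Pygame's `Sound.set_volume`, which also expects a value between 0.0 and 1.0.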
What we learned
Since we used a multiplayer framework, we learned about the UDP protocol. We also learned how to collaborate well under pressure in a virtual environment.
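The UDP half of that lesson can be illustrated with a toy loopback exchange using Python's standard `socket` module. This is not Move 'n Groove's actual networking code (PodSixNet handles that for us), and the JSON-ish payload is just a made-up stand-in for a position update.

```python
import socket

# Toy UDP sender/receiver on localhost. UDP is connectionless:
# no connect/accept handshake, each sendto() is one datagram.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))        # let the OS pick a free port
recv_sock.settimeout(2.0)               # don't block forever if a datagram is lost
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b'{"action": "move", "x": 42, "y": 7}', addr)

data, sender = recv_sock.recvfrom(1024)  # one datagram, delivered whole
print(data.decode())

send_sock.close()
recv_sock.close()
```

Unlike TCP, there is no delivery guarantee or ordering, which is why UDP suits frequent, disposable updates like avatar positions: a lost packet is simply superseded by the next one.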
The biggest challenge we faced was handling audio input from each user. Our choice of tools limited us to playing audio files that had already been saved, which made a continuous stream of live audio input difficult. To get around this, we had each user play a music file to stand in for their voice, since we were unable to implement real-time audio streaming given the short time constraints of the event.