By now, you must have played, or at least heard of, the game Among Us; few games rival its deception and ruthlessness. It's one of our favorite games, but there were a few features missing for us on mobile:
- Localized voice chat: nothing would beat hearing perfectly cut-off screams when someone you trusted stabs you in Electrical
- A live stream of each player's position: it's always unfortunate when you have too many friends to fit inside a single Among Us room. With a live view, everybody can join in on the fun!
What It Does
Our project has two components: localized voice chat and a public live-stream map of a game. The localized voice chat lets players in close proximity talk to each other, with a dynamic algorithm that scales volume as players move closer together or farther apart. The live stream is a website where anyone can enter a room code and watch, in real time, where all players are on the full map.
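The volume-scaling idea can be sketched in a few lines. This is a minimal illustration, not our exact tuning: the linear falloff and the `max_hear_dist` cutoff are assumptions for the example.

```python
def scaled_volume(player_xy, listener_xy, max_hear_dist=300.0):
    """Map the on-screen distance between two players to a volume in [0, 1].

    Coordinates are in screen pixels. max_hear_dist is the distance beyond
    which a player is fully muted; both it and the linear falloff are
    illustrative choices, not the project's exact parameters.
    """
    dx = player_xy[0] - listener_xy[0]
    dy = player_xy[1] - listener_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5
    # Full volume at distance 0, silence at max_hear_dist and beyond.
    return max(0.0, 1.0 - dist / max_hear_dist)
```

The resulting value can then be applied per-user through the voice SDK's local volume controls.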
You might ask, "How much setup is required?" Very little: you just run a single file. Our program handles the rest, including automatically connecting you to the voice channel for your specific Among Us game by using OCR to detect the room code.
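Once the OCR engine has read the join screen, pulling out the room code is straightforward, since Among Us codes are six uppercase letters. A hedged sketch (the regex and cleanup are assumptions about how noisy the OCR text is):

```python
import re

def extract_room_code(ocr_text):
    """Pull a six-letter Among Us room code out of raw OCR output.

    Scans for the first standalone run of exactly six letters after
    uppercasing; returns None if no candidate is found. This is an
    illustrative heuristic, not our exact parsing logic.
    """
    match = re.search(r"\b[A-Z]{6}\b", ocr_text.upper())
    return match.group(0) if match else None
```

The matched code can then be used as the key for the corresponding Discord voice channel.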
How We Built It
We primarily used Python and React to build our project, and leveraged Discord's SDK to handle the intricacies of a complicated voice channel. For localized voice chat, we ran image analysis with OpenCV on the iPhone's screen captures to find nearby players, calculated the distances between them, and scaled each nearby player's local volume accordingly. For the live player map, we used image recognition to superimpose the iPhone's screen captures onto the full map, calculated each player's position on that map, and sent the relevant information to Firebase's Realtime Database for our server to access easily. Automatically placing players into voice channels based on their Among Us room code was done with Tesseract OCR.
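Superimposing a screen capture onto the full map boils down to finding where the visible region sits inside the larger image. Below is a simplified NumPy stand-in using an exhaustive sum-of-squared-differences search; in practice one would use something like OpenCV's `cv2.matchTemplate` on downscaled grayscale frames, so the function name and the brute-force loop here are illustrative only.

```python
import numpy as np

def locate_on_map(full_map, screen_crop):
    """Return the (row, col) offset where screen_crop best matches full_map.

    Both inputs are 2-D grayscale arrays. The best offset is the one that
    minimizes the sum of squared differences over the overlapping window.
    A brute-force sketch; real template matching is far faster.
    """
    H, W = full_map.shape
    h, w = screen_crop.shape
    best, best_score = None, float("inf")
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = full_map[r:r + h, c:c + w]
            score = float(np.sum((window - screen_crop) ** 2))
            if score < best_score:
                best_score, best = score, (r, c)
    return best
```

With the offset known, a player's on-screen coordinates can be translated into full-map coordinates by simple addition before being written to the database.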
Challenges We Ran Into
We ran into several challenges, primarily revolving around image recognition. We initially planned to use OCR to find nearby players on-screen for the localized voice chat, but ran into issues trying to run it quickly enough on our personal computers while also handling the image processing for the live-stream map. We ended up using image recognition of Among Us characters on the screen, and mapped their body colors to Discord user IDs.
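The color-to-player mapping can be sketched as a nearest-color lookup. The RGB values below are rough approximations of Among Us body colors, and the Discord IDs are placeholders; this is an illustration of the idea, not our exact table or matching logic.

```python
# Approximate Among Us body colors (RGB) mapped to placeholder Discord IDs.
BODY_COLORS = {
    (197, 17, 17): "discord_user_red",
    (19, 46, 209): "discord_user_blue",
    (17, 127, 45): "discord_user_green",
}

def match_player(pixel_rgb):
    """Return the Discord ID whose registered body color is closest
    (in squared RGB distance) to the sampled pixel color."""
    def dist2(color):
        return sum((a - b) ** 2 for a, b in zip(color, pixel_rgb))
    return BODY_COLORS[min(BODY_COLORS, key=dist2)]
```

Sampling a pixel from each detected character sprite and running it through this lookup gives the Discord user whose volume should be adjusted.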
In addition, none of us had worked with image processing or analysis before this hackathon, so it was a bit of a struggle initially to understand its core concepts and how best to approach the functionality we needed; plenty of cv2.imshow() calls were made over the past 36 hours.
Accomplishments That We're Proud Of
Our biggest accomplishment was finishing everything we had set out to do within the hackathon's 36 hours. We aimed to build both features from the start, but knew we would have to cut one if things didn't go as planned. Fortunately, we persisted and finished with a project we are all proud of!
What We Learned
This was everyone's first time working with text and image recognition, which was super interesting. It was also fascinating to learn about video streaming and to optimize our code so that multiple processes (two fairly large image-recognition tasks, plus updating the relevant models and databases) could run at the same time. Using a Python wrapper package for Discord's SDK was another highlight; it had very little documentation, which forced us to read the official SDK documentation (written for C/C++/C#) and translate it to Python.
What's Next For Among Us Expansion
Looking forward, we hope to optimize our current processes and eventually run all of our code natively on the iPhone. We also hope to show actual images of the players moving (their legs swinging, or someone getting attacked) instead of circular representations of them.