We saw a problem we wanted to solve, so we built something anyone could use. People who can no longer travel the way they once could sometimes regret the places they never got to visit. ROAM lets them go anyway.
What it does
Our project gives the user a chance to use Google Cardboard without ever having the annoyance of removing their phone to change their virtual environment. This is accomplished through speech recognition: the user simply says where they want to go next, and ROAM takes them there. Once at the destination, ROAM gives users facts about the places they are visiting and a Bitmoji friend to help guide them.
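A minimal sketch of how such a voice-command flow could work, assuming the browser's Web Speech API for recognition and an A-Frame `<a-sky>` element for the 360° environment. The destination names, image paths, and function names here are illustrative, not the actual project code:

```javascript
// Hypothetical destination table: spoken keyword -> 360-degree panorama.
const DESTINATIONS = {
  rome: "images/rome-360.jpg",
  paris: "images/paris-360.jpg",
  tokyo: "images/tokyo-360.jpg",
};

// Match a raw transcript like "take me to Rome" against known destinations.
function matchDestination(transcript) {
  const words = transcript.toLowerCase();
  for (const name of Object.keys(DESTINATIONS)) {
    if (words.includes(name)) return name;
  }
  return null;
}

// In the browser, the Web Speech API feeds transcripts to the matcher,
// and A-Frame's <a-sky> element is re-pointed at the new panorama.
function wireUpSpeech(sky) {
  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognizer = new Recognition();
  recognizer.continuous = true;
  recognizer.onresult = (event) => {
    const latest = event.results[event.results.length - 1][0].transcript;
    const dest = matchDestination(latest);
    if (dest) sky.setAttribute("src", DESTINATIONS[dest]);
  };
  recognizer.start();
}
```

Keeping the keyword matching separate from the recognizer wiring makes it easy to test the command logic without a microphone or a headset.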
How we built it
ROAM was built using A-Frame/WebVR to render the virtual environments, plus APIs for speech recognition, fact generation, and Bitmoji integration.
Challenges we ran into
Rome wasn't built in a day, but ROAM was. We ended up scrapping several different APIs and changing the project's interface up until about 6 pm on Saturday before finally landing on A-Frame. We also did not fully understand the Snapchat API when we first began using it: the initial problem was getting the login API to redirect properly, and we later realized we could not have every user sign in the way we had wanted.
Accomplishments that we're proud of
- Accessing the Bitmoji API
- Getting 2D objects to render in VR
- Getting accurate voice commands
What we learned
What's next for ROAM
Adding more places to visit and more facts to learn! We would also like to connect ROAM to a better speech recognition library, e.g. Google Assistant, to further streamline the user experience, and potentially have the facts read aloud to users.