Inspiration

Virtual reality is a platform that can provide an immersive and interactive experience with virtual objects. VR controllers give users the sensation and the means to manipulate and control those objects. These experiences currently range from drawing with a paintbrush in Tilt Brush to firing arrows to defend your castle in Valve's virtual archery game, to name a few.

However, certain tasks and actions performed in VR can be extremely repetitive and tiring. Traditional GUI interfaces that organize actions into toolbars and menus are not only cumbersome for beginners to navigate, but are also unusable in many VR experiences. This hackathon project demonstrates the use of voice commands to control and initiate actions in VR, directly mapping the user's intent to the actions they want to perform.

What it does

The player is surrounded by a generated universe with swirling galaxies of planets, space debris, and stars. Navigate through the universe and take a closer look at the galactic world by speaking voice commands to Amazon Alexa. Simple commands like "Engage thrusters" and "Disengage thrusters" propel the player through the space debris. Tell Alexa to "Zoom in" or "Zoom out" to navigate with finer precision. Finally, call out "Select" and "Identify" to highlight planets and stars of interest and inspect them more closely.
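To give a rough sense of how these spoken commands can map onto player actions in Unity, here is a minimal C# sketch; the field names, speeds, and the gaze-selection placeholders are illustrative assumptions, not the project's actual code.

    using System;
    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical sketch: maps recognized voice commands to player actions in Unity.
    // The command phrases mirror the ones described above; the movement logic is illustrative only.
    public class VoiceCommandDispatcher : MonoBehaviour
    {
        [SerializeField] private Transform playerRig;      // the player camera rig to move
        [SerializeField] private float thrustSpeed = 5f;   // units per second while thrusters are engaged
        [SerializeField] private float zoomStep = 10f;     // field-of-view change per zoom command

        private readonly Dictionary<string, Action> commands = new Dictionary<string, Action>();
        private bool thrustersEngaged;

        private void Awake()
        {
            commands["engage thrusters"]    = () => thrustersEngaged = true;
            commands["disengage thrusters"] = () => thrustersEngaged = false;
            commands["zoom in"]             = () => Camera.main.fieldOfView -= zoomStep;
            commands["zoom out"]            = () => Camera.main.fieldOfView += zoomStep;
            commands["select"]              = () => Debug.Log("Select: highlight the object under gaze");
            commands["identify"]            = () => Debug.Log("Identify: show details for the selection");
        }

        private void Update()
        {
            if (thrustersEngaged)
                playerRig.Translate(Vector3.forward * thrustSpeed * Time.deltaTime);
        }

        // Called whenever a new command arrives from the Alexa pipeline (see "How I built it").
        public void Dispatch(string spokenCommand)
        {
            if (commands.TryGetValue(spokenCommand.Trim().ToLowerInvariant(), out var action))
                action();
            else
                Debug.LogWarning("Unrecognized voice command: " + spokenCommand);
        }
    }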

How I built it

Amazon Alexa/Echo's natural language processing parses the spoken commands, which are then sent to Unity using a suite of Amazon Web Services tools including SQS messaging, AWS Lambda functions, and Cognito. An Amazon Alexa Unity client polls the SQS message queue and is notified of incoming commands spoken by the player. These commands are then processed in Unity to manage the navigation of the player camera and the interaction between the player and virtual galactic objects.
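To make the Unity side of this pipeline concrete, here is a minimal sketch of an SQS-polling client using the AWS SDK for .NET with Cognito credentials; the queue URL, identity pool ID, region, message format, and the hook into the dispatcher sketched above are assumptions for illustration, not the project's exact code.

    using System.Threading.Tasks;
    using Amazon;
    using Amazon.CognitoIdentity;
    using Amazon.SQS;
    using Amazon.SQS.Model;
    using UnityEngine;

    // Hypothetical sketch of the Unity-side Alexa client: long-polls an SQS queue for
    // command messages produced by the Alexa skill's Lambda function, then hands each
    // command string to the dispatcher. Queue URL and identity pool ID are placeholders.
    public class AlexaSqsListener : MonoBehaviour
    {
        [SerializeField] private VoiceCommandDispatcher dispatcher;

        private const string QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/alexa-commands"; // placeholder
        private const string IdentityPoolId = "us-east-1:00000000-0000-0000-0000-000000000000";            // placeholder

        private IAmazonSQS sqs;
        private bool running;

        private async void Start()
        {
            // Unauthenticated Cognito identity pool credentials, as in the project's pipeline.
            var credentials = new CognitoAWSCredentials(IdentityPoolId, RegionEndpoint.USEast1);
            sqs = new AmazonSQSClient(credentials, RegionEndpoint.USEast1);

            running = true;
            while (running)
            {
                var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
                {
                    QueueUrl = QueueUrl,
                    MaxNumberOfMessages = 5,
                    WaitTimeSeconds = 10   // long polling to avoid hammering the queue
                });

                foreach (Message message in response.Messages)
                {
                    // The Lambda function is assumed to place the spoken command in the message body.
                    dispatcher.Dispatch(message.Body);
                    await sqs.DeleteMessageAsync(QueueUrl, message.ReceiptHandle);
                }
            }
        }

        private void OnDestroy()
        {
            running = false;
            sqs?.Dispose();
        }
    }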

Challenges I ran into

It was my first experience using Node.JS and Amazon Web Services, so the initial setup and programming of Alexa's Lambda functions were a challenge.

Accomplishments that I'm proud of

Created a platform and pipeline for issuing voice commands that control Unity. An added benefit of performing natural language processing on separate hardware (Alexa) dedicated to semantic analysis is that it removes the audio-recording and language-processing load from the virtual reality application side.

What I learned

Learned the basic workflow of setting up a "verbal user interface" for Alexa and using Amazon Web Services tools to create a data pipeline connected to Unity.

What's next for Intergalactic Exploration

Using the voice command system as a base for VR navigation and data manipulation in exploratory data analysis applications. Additional voice commands such as "Filter by" will add more advanced filtering and selection capabilities for visualized data or objects.
