Artificial Intelligence Virtual Reality Experience

The Project

A.I.V.R.E allows a human to visualize, interact with, and experience the world as an Artificial Intelligence - a self-driving car. You will literally see what the car sees, which is radically different from what a human sees.

I believe this is the first attempt to allow a human to exist in VR as an A.I.

Self-driving cars are on the verge of being available to the public. They are at the forefront of A.I. technology. Now you get to be one.

Being an A.I.

Life as an A.I. is not so easy: Every time the vehicle moves or senses the environment, there is uncertainty. Weather, lighting, slippery roads, noisy sensors, and conflicting data all combine for a complicated cumulative existence. An A.I. knows that there is no such thing as certainty, only probabilities. There is no spoon...


This is not an easy project; the most difficult part will be visualizing the laser tracking system with particles. It generates millions of data points a second. This data is merged with all other sensor data to create a representation of reality.

The other challenge is representing A.I. knowledge in a way that a human can understand and interact with. It will require novel user interface design.

Who am I?

I just finished A.I. Robotics at GeorgiaTech, taught by Dr. Thrun, where we studied the six key algorithms required to enable self-driving cars. I intend to visualize all six if possible. I am pursuing a Masters in A.I.

I am doing this project to show the world what an A.I. really is.

Should be fun! - Geoff

Update #1 : Second Milestone

Project A.I.V.R.E will be pushing the limits of the Note 4. Above all, we must maintain FPS.

Note 4

The great thing about the Gear VR's mobile device, the Samsung Note 4, is that it is effectively an XBOX 360: you get similar GFLOP performance, but it only uses 4 watts. It also comes with significant video memory for textures, which means the simulation will look amazing, even with a low poly count. The low poly count is key to maintaining the frame rate.

Unity 5

Unity 5 provides multithreading of the physics. This is critical, as large sets of particles are going to be needed to simulate the sensor systems of the AI. We need Unity 5.

One of the unforeseen complications is that Unity is in a state of transition. Even though Unity 5 is released and is the recommended tool for the project, many of the 3rd party libraries have not been upgraded. Upgrading is an automated process, and it's not always working. :)

I am testing various libraries with Unity 5, but the key ones in this project are: NGUI, NGUI Maps, Particle Playground, AStar, Playmaker, Colourful, Graphing and Eddy's Car Physics. The car itself will likely be taken from standard assets.

AI Robotics

The key AI algorithms are written in Python and use optimized libraries like numpy. Fortunately, Unity offers Boo, a Python-like language. The algorithms are:

Monte Carlo, Kalman Filters, Particle Filters, A* & Dynamic Programming, PID, SLAM

Kalman Filters are used to track moving objects around the AI's vehicle. Not only can an AI see everything around itself (using measurements to get positional data), but it also knows each object's velocity while ONLY measuring position. This is the trick: it can derive another vehicle's speed and likely future paths.

Imagine a kid running out into the street from behind a parked car: Kalman filters allow the AI to see the child's legs and predict the action. Today's self-driving cars can update 70 times per second.
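To make that trick concrete, here is a toy one-dimensional Kalman filter in Python (numpy), tracking a target moving at 2 m/s. All the matrices and noise values are made-up illustration numbers, not the tuning a real car would use - but note the filter is only ever fed positions, yet its state ends up containing the velocity:

```python
import numpy as np

def kalman_step(x, P, z, F, H, R, Q):
    # Predict: push the state and its uncertainty forward one time step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fold in the new position measurement z.
    y = z - H @ x                    # innovation (the "surprise")
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
H = np.array([[1.0, 0.0]])             # we only ever MEASURE position
R = np.array([[0.1]])                  # measurement noise (made up)
Q = 0.01 * np.eye(2)                   # process noise (made up)

x = np.array([[0.0], [0.0]])           # initial guess: at 0, not moving
P = 1000.0 * np.eye(2)                 # start out very uncertain

# A target moving at 2 m/s; the filter sees positions only.
for t in range(1, 11):
    z = np.array([[2.0 * t]])
    x, P = kalman_step(x, P, z, F, H, R, Q)

print(round(float(x[1, 0]), 2))  # velocity estimate, never directly measured
```

The velocity estimate converges to 2 m/s even though velocity never appears in any measurement - it falls out of the correlation between successive positions.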

A Particle Filter is used by the AI to find out where it is on a map. Imagine you're lost in a maze, but you have a map of the maze; you just don't know where you are in it. The way an AI solves this problem is to make thousands of virtual copies of itself. Each copy looks at its surroundings and checks it against the map. Then they vote, and the answer comes back in probabilities. In A.I.V.R.E I will illustrate this with thousands of virtual cars - particles - trying to figure out where they are.
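A minimal sketch of that vote in Python, with a 10-cell looping "maze" standing in for a real map (the corridor layout and the sensor weights are invented purely for illustration):

```python
import random
from collections import Counter

random.seed(0)

# A known "map" of a 10-cell looping corridor: what the sensor reads
# at each cell. The layout is invented for this example.
world = ['wall', 'door', 'wall', 'wall', 'door',
         'door', 'wall', 'wall', 'wall', 'wall']
N = 1000                      # thousands of virtual copies of the car
particles = [random.randrange(len(world)) for _ in range(N)]

true_pos = 1  # where the robot actually is - it does not know this

for _ in range(4):
    # Move: the real robot and every particle advance one cell.
    true_pos = (true_pos + 1) % len(world)
    particles = [(p + 1) % len(world) for p in particles]

    # Sense: weight each copy by how well its map cell matches the reading.
    reading = world[true_pos]
    weights = [1.0 if world[p] == reading else 0.001 for p in particles]

    # Resample: heavily weighted copies survive and multiply ("the vote").
    particles = random.choices(particles, weights=weights, k=N)

# The surviving particles now cluster on the robot's true position.
best, count = Counter(particles).most_common(1)[0]
print(best, count / N)
```

After a few move-sense-resample cycles, only the copies whose history matches the sequence of sensor readings survive, so the cloud collapses onto the true cell.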

In the image for this week the arrows are a type of path planning. The graph on the left shows both a correlation and a gaussian distribution. As the AI's confidence in the accuracy of something increases, the gaussian will become tall and thin. If it's wide or shallow, that means it's guessing. These graphs will update in real time, and they change as you look at different active items within the VR.

Until next week - Cheers Geoff

Update #2 : Third Milestone

The semester is done and marks are coming in: Class AI Robotics - 98%, Machine Learning - Markov Decision Process project - 100% :)

Now I can finally focus on the VRJam. It's due in 10 days; it's going to be tight!

Gear VR

Further testing of the Gear VR puts the upper limit for rendering a scene at about 150K polygons. I uploaded half a dozen projects onto the Gear for performance analysis - and great news, the high-res textures look awesome. The wireless debugging is working. Unfortunately, when it crashes it's not so easy to find out why.

Unity 5

The differences in third party libraries from V4 to V5 are still causing issues and using up time. Every problem is solvable; it's just a function of how much time it will take - e.g. fixing the video capture command because it had the wrong file permissions to run :)

Music Score

I have whittled the musical score for the experience down to two candidates: Moonlight Sonata and Mozart's Requiem.

Key Algorithms

I have uploaded a video trailer and included more screen captures. The other key algorithms to be added:

AStar (A*) is how most computers find their way from A to B. You use it every time you ask Google Maps for directions. It's an efficient way for a computer to find its way through a maze, guided by a heuristic estimate of the remaining distance. It's been around for years, and it works really well.
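Here is a compact A* sketch in Python on a small made-up grid maze, using the Manhattan distance to the goal as the heuristic:

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a grid where 0 = free cell and 1 = wall."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: never overestimates, so A* stays optimal.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Each frontier entry: (g + h, g, cell, path so far).
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

maze = [[0, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0]]
path = a_star(maze, (0, 0), (3, 3))
print(len(path) - 1)  # moves in the shortest path
```

The heuristic is what makes A* efficient: the priority queue always expands the cell that looks closest to the goal first, instead of flooding outward in every direction.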

Dynamic Programming is a way that computers determine, in advance, the best choice to make at each step in a grid world or maze. It's like solving for every move in chess or checkers in advance. Dynamic Programming is also how a Markov Decision Process (MDP) is solved.
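A quick sketch of that idea in Python - deterministic value iteration on a made-up grid world. Every cell ends up knowing its best distance to the goal before the robot takes a single step, so the best move anywhere is just "step toward the neighbor with the lowest value":

```python
# Deterministic value iteration on a 3x4 grid world (layout invented).
goal = (0, 3)
rows, cols = 3, 4
walls = {(1, 1)}
step_cost = 1

# V[cell] = cost of the best path from that cell to the goal.
V = {(r, c): float('inf') for r in range(rows) for c in range(cols)}
V[goal] = 0

changed = True
while changed:                      # sweep until no value improves
    changed = False
    for (r, c) in V:
        if (r, c) == goal or (r, c) in walls:
            continue
        best = min(
            (V[(nr, nc)]
             for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
             if (nr, nc) in V and (nr, nc) not in walls),
            default=float('inf'))
        if step_cost + best < V[(r, c)]:
            V[(r, c)] = step_cost + best
            changed = True

print(V[(2, 0)])  # optimal number of steps from the far corner
```

Once the table is filled in, the policy for every cell is precomputed - exactly the "every decision made in advance" idea, and adding transition probabilities turns this same loop into a full MDP solver.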

S.L.A.M. (Simultaneous Localization and Mapping) is how a robot explores and learns a maze. Let's say you wanted to explore an old coal mine: a robot would use SLAM to learn the mine and map it out. It does this by comparing its position against known landmarks (i.e. the front door).
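Here is a tiny one-dimensional Graph-SLAM-style sketch in Python (the odometry and landmark numbers are invented and noise-free, purely for illustration). Every motion and every landmark sighting becomes a constraint in an information matrix, and solving the resulting linear system recovers all the robot's poses AND the landmark position at once:

```python
import numpy as np

# State vector: [x0, x1, x2, L] - three robot poses and one landmark.
n = 4
Omega = np.zeros((n, n))  # information matrix
Xi = np.zeros(n)          # information vector

def add_constraint(i, j, d):
    """Encode the relation 'state_j - state_i = d' into Omega / Xi."""
    Omega[i, i] += 1; Omega[j, j] += 1
    Omega[i, j] -= 1; Omega[j, i] -= 1
    Xi[i] -= d; Xi[j] += d

# Anchor the first pose at 0, otherwise only relative positions exist.
Omega[0, 0] += 1

add_constraint(0, 1, 5.0)   # odometry: robot drove +5
add_constraint(1, 2, 3.0)   # odometry: robot drove +3
add_constraint(0, 3, 10.0)  # sensed the landmark 10 ahead of pose 0
add_constraint(1, 3, 5.0)   # sensed the SAME landmark 5 ahead of pose 1

mu = np.linalg.solve(Omega, Xi)  # best estimate of every pose + landmark
print(mu.round(2))               # [x0, x1, x2, landmark]
```

The landmark comes out at 10 and the final pose at 8, consistent with both the motion and the sightings - localization and mapping solved simultaneously, which is the whole point of the name.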

The other two algorithms I will not directly demonstrate in this release. PID is used to smooth out A* paths, and Monte Carlo helps a robot find where things are, just like Kalman Filters and Particle Filters. They are somewhat redundant, and time is tight.
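For completeness, a minimal PID sketch in Python. The gains and the toy "plant" (where the control output directly moves the state) are made up for illustration - in a car this same loop would turn crosstrack error into smooth steering:

```python
def pid_run(kp, ki, kd, setpoint=1.0, steps=50):
    """Drive a toy state x toward setpoint with a PID control loop."""
    x, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - x
        integral += err              # I term: accumulates leftover bias
        deriv = err - prev_err       # D term: damps the approach
        prev_err = err
        u = kp * err + ki * integral + kd * deriv  # the PID law
        x += u                       # toy plant: control moves the state
    return x

# Made-up gains; the D term keeps the P term from overshooting.
print(round(pid_run(kp=0.5, ki=0.0, kd=0.1), 3))
```

With these gains the state settles smoothly onto the setpoint instead of zig-zagging across it - which is exactly the effect wanted when smoothing a planned path.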

Video Trailer

I created a pretty awesome video trailer showing what I am trying to build. It was created with iMovie. Enjoy!

Now Sprint to the Finish. - Cheers - Geoff
