Inspiration: Our team is enthralled by the ubiquity of new technology, especially AI. We have been hard at work researching new AI models and training them in TensorFlow, making the computer smarter with every iteration.
What it does: We used a convolutional neural network that takes images from our simulation as input, along with human steering input recorded during the training session. You can think of it as a front-facing camera on a real car, using that video feed to train a smart driver through human demonstration. We also used a MYO armband to provide gesture-control input, just to have fun while we play the game.
How I built it: We use the Udacity car simulator's training mode to collect training data (images and human control inputs), then use TensorFlow and Keras in Python to build a convolutional neural network graph. We train the network (updating its weights and biases) on the collected data, then test how it performs in the simulator's autonomous mode. After several trials we found the best-performing solution. Finally, we used the MYO gesture sensor to provide input to the simulator for fun.
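The pipeline above can be sketched as a small Keras model: a stack of convolutional layers feeding dense layers that regress a single steering angle from a camera frame. This is a minimal illustration, not our exact architecture; the layer sizes and the 160x320 input shape are assumptions based on the simulator's typical camera output.

```python
# Minimal behavioral-cloning sketch in Keras: camera frame in, steering angle out.
# Layer widths and input shape are illustrative, not the exact network we trained.
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(160, 320, 3)):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Rescaling(1.0 / 127.5, offset=-1.0),   # normalize pixels to [-1, 1]
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(1),                               # predicted steering angle
    ])
    # Mean squared error: training is a regression against the human's steering.
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
# Training would then be: model.fit(images, steering_angles, epochs=...)
```

In autonomous mode, each incoming simulator frame would be passed through `model.predict` and the resulting angle sent back as the steering command.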
Challenges I ran into: We hit challenges with throttle and steering in the car simulator. One big issue turned out to be really simple: we were providing steering input in degrees, while the simulator expected radians. It was a one-line fix. Another challenge was integrating MYO input with the simulation. MYO's SDK is written in C++, while our code was in Python, so we had to find an open-source Python SDK for MYO to get started.
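The degrees-vs-radians fix amounts to a single unit conversion before sending the steering command. A minimal sketch (the function name is illustrative, not the simulator's actual interface):

```python
import math

def steering_command(angle_degrees):
    """Convert a steering angle from degrees (what we were producing)
    to radians (what the simulator expected).
    The function name is hypothetical, for illustration only."""
    return math.radians(angle_degrees)

# A 90-degree turn becomes pi/2 radians:
print(steering_command(90.0))
```

Without this conversion, a modest 15-degree correction was being interpreted as 15 radians, which explains the wildly oversteered behavior we saw before the fix.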
Accomplishments that I'm proud of: There are many, but to name a few: we worked with MYO, something we had wanted to do for a long time but weren't ready to spend $200 on. Thanks to MLH, we got to use one for free. Training the AI was also really fun, because it involves a lot of "playing" car games.
What I learned: We learned that the possibilities for computers are endless, and computers are only going to get smarter.
What's next for AutoForever: Since we didn't have a drone to work with, we want to apply this AI model to drones so they can provide emergency help to places hit by tsunamis or other natural disasters. All we have to do is just play with the drones, and they will learn with each iteration what we expect them to do. In simpler terms, we will run drills with the drones, and unlike a human, a drone won't get bored of regular drills; it will only get better.