Inspiration

- Neural nets are often retrained, but their construction (activation functions, loss function, number of hidden layers, etc.) is seldom altered, since doing so by hand can take large amounts of time.

- It is very easy to overfit or underfit a neural net, and maintaining one can take large amounts of time.

- This automatic approach allows for a "deploy and forget" style of development.

What it does

It uses an evolutionary algorithm as a backbone, with gradient descent handling the smaller optimization tasks, to evolve neural nets based on how well they predict future data (scored by RMSE).
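
As a rough illustration of that loop (a minimal sketch, not the project's actual code), the toy below evolves hyperparameter "genomes" for a simple linear autoregressive model instead of an LSTM: gradient descent fits each candidate's weights, and RMSE on held-out future points serves as the fitness. All names, knobs, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_series(n=400):
    # Synthetic stand-in data: a noisy sine wave.
    t = np.arange(n)
    return np.sin(0.1 * t) + 0.1 * rng.standard_normal(n)

def windows(series, width):
    # Turn the series into (lag window -> next value) supervised pairs.
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X, y

def fitness(genome, train, test):
    # Inner loop: plain gradient descent fits a linear autoregressive
    # model; the score is RMSE on held-out "future" data.
    X, y = windows(train, genome["width"])
    w = np.zeros(genome["width"])
    for _ in range(genome["steps"]):
        w -= genome["lr"] * 2 * X.T @ (X @ w - y) / len(y)
    Xt, yt = windows(test, genome["width"])
    return float(np.sqrt(np.mean((Xt @ w - yt) ** 2)))

def mutate(genome):
    # Outer loop: random perturbations of the hyperparameter "genome".
    child = dict(genome)
    child["width"] = int(np.clip(genome["width"] + rng.integers(-2, 3), 2, 32))
    child["lr"] = float(np.clip(genome["lr"] * rng.lognormal(0.0, 0.3), 1e-4, 0.05))
    return child

series = make_series()
train, test = series[:300], series[300:]
population = [mutate({"width": 8, "lr": 0.01, "steps": 200}) for _ in range(6)]
for gen in range(10):
    # Keep the fittest genomes and refill the population with their mutants.
    scored = sorted(population, key=lambda g: fitness(g, train, test))
    population = scored[:3] + [mutate(scored[int(rng.integers(0, 3))]) for _ in range(3)]
    print(f"gen {gen}: best RMSE {fitness(scored[0], train, test):.4f}")
```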

It spawns different versions of the neural nets on separate threads, which compete against one another.
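
In code, that competition can be as simple as mapping candidate configurations over a thread pool. The sketch below is a hypothetical stand-in (`train_and_score` is a placeholder, not a function from the project):

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def train_and_score(config):
    # Placeholder for training one candidate net; the real system would
    # build an LSTM from `config` and return its validation RMSE.
    time.sleep(0.1)  # simulate training work
    return random.uniform(0.1, 1.0) * config["layers"]

# Hypothetical candidate configurations competing in one round.
configs = [{"layers": n, "lr": 10 ** -random.uniform(2, 4)} for n in range(1, 7)]

# Each candidate trains on its own thread; the lowest RMSE wins the round.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(train_and_score, configs))

best = min(range(len(configs)), key=lambda i: scores[i])
print("winner:", configs[best], "RMSE:", scores[best])
```

Threads pay off here when the training backend releases the GIL during its numeric work, as numerical libraries typically do; a pure-Python training loop would want processes instead.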

Challenges we ran into

We were unable to finish debugging the project in time to test some aspects of it, although the code is in place for everything we planned.

Accomplishments that we're proud of

We were able to build a framework that, once debugged and polished, can be used to generate AI for many types of time-series data with multivariate inputs.
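
To make "multivariate inputs" concrete, here is one common way to window a multi-channel time series into supervised samples an LSTM can train on (the function and its arguments are our illustration, not the framework's exact API):

```python
import numpy as np

def make_windows(data, width, horizon=1, target_col=0):
    # Slice a (time, features) array into supervised pairs:
    # X has shape (samples, width, features); y is the target feature
    # `horizon` steps past the end of each window.
    X, y = [], []
    for i in range(len(data) - width - horizon + 1):
        X.append(data[i:i + width])
        y.append(data[i + width + horizon - 1, target_col])
    return np.array(X), np.array(y)

# Three toy input channels; an evolved LSTM would consume these windows.
t = np.linspace(0, 20, 500)
data = np.column_stack([np.sin(t), np.cos(t), (0.5 * t) % 1.0])
X, y = make_windows(data, width=16)
print(X.shape, y.shape)  # (484, 16, 3) (484,)
```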

This project taught us a lot about LSTM neural nets and AI optimization; it was also our first time implementing gradient descent and an evolutionary algorithm.

What's next

We are going to finish debugging the code and then keep adding methods of injecting entropy into the neural net setups, so that the search does not get stuck in a local minimum too easily.
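
One hypothetical entropy-injection operator (the names and knobs here are illustrative, not from the project) periodically overwrites part of the population with strongly mutated copies of survivors so the search keeps exploring:

```python
import random

def inject_entropy(population, fraction=0.25, jolt=1.0):
    # Replace a slice of the population with heavily perturbed copies of
    # random survivors; `fraction` and `jolt` are illustrative knobs.
    n_new = max(1, int(len(population) * fraction))
    for i in range(1, n_new + 1):
        child = dict(random.choice(population))
        # Large multiplicative noise on the learning rate...
        child["lr"] *= 10 ** random.uniform(-jolt, jolt)
        # ...plus a structural jolt to the layer count.
        child["layers"] = max(1, child["layers"] + random.choice([-2, -1, 1, 2]))
        population[-i] = child
    return population

pop = [{"lr": 0.01, "layers": 2} for _ in range(8)]
print(inject_entropy(pop)[-2:])
```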
