Inspiration

Cryptocurrency is a new asset class that aims to change how finance works today. Bitcoin was the first cryptocurrency and was created as a peer-to-peer currency. However, with the recent rise in Bitcoin's price and the nature of its blockchain network, the fees for sending Bitcoin have skyrocketed, and it has turned into a store of value, similar to gold. Litecoin was created to solve the problem of high network fees: with low fees, sending Litecoin is cheaper and faster than sending Bitcoin, which makes Litecoin closer to a digital currency than a store of value. Ethereum is another cryptocurrency, created on the premise of programmable money; it introduced smart contracts, which define strict rules for how tokens and services can be provided over the Ethereum network. All three of these coins are traded daily, so we built a machine learning model to predict each coin's price over the next 90 days.

What it does

Our website lets you pick a date for one of the three coins, and it displays the prediction made by the Long Short-Term Memory (LSTM) recurrent neural network model we trained for that cryptocurrency. You can also view a chart of the coin's price since its listing on Gemini. While none of this is financial advice, it is still interesting to see what the model can learn from patterns in past prices that humans may not be able to spot.
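
As a rough sketch of how the front end wires this up in Anvil (component and function names like `coin_dropdown` and `get_prediction` are illustrative, not our exact code):

```python
from ._anvil_designer import PredictorFormTemplate
from anvil import *
import anvil.server

class PredictorForm(PredictorFormTemplate):
    def __init__(self, **properties):
        self.init_components(**properties)

    def predict_button_click(self, **event_args):
        coin = self.coin_dropdown.selected_value  # "BTC", "LTC", or "ETH"
        date = self.date_picker.date              # date chosen by the user
        # The server function runs in our Colab notebook via the Anvil Uplink
        price = anvil.server.call('get_prediction', coin, date)
        self.result_label.text = f"Predicted {coin} price on {date}: ${price:,.2f}"
```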

How we built it

In a previous hackathon, we had built an Ethereum price predictor that lived entirely in Google Colab. However, the model we used in that project was incorrect and there was no user interface, so we kept only the data input and visualization from that project and wrote everything else during this hackathon. We used Python, Keras, and Google Colab to create and train a model for each coin using market data from the Gemini exchange. Once a model was trained, we exported its predictions as a CSV and stored it on GitHub. We built the site on Anvil and used the Anvil API to send requests to our Colab notebook; each prediction method reads the CSV for its coin and retrieves the prediction for the requested date.
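
The training step is a fairly standard Keras LSTM over sliding windows of daily closing prices. The sketch below shows the general shape of the pipeline; the window size, hyperparameters, and file/column names are illustrative rather than our exact values:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 60  # days of history fed into each prediction

# Daily candles exported from the Gemini exchange (file/column names illustrative)
prices = pd.read_csv('gemini_ETHUSD_daily.csv')['Close'].values.reshape(-1, 1)

# LSTMs train much better on scaled inputs
scaler = MinMaxScaler()
scaled = scaler.fit_transform(prices)

# Build (samples, timesteps, features) windows: 60 days in, next day out
X = np.array([scaled[i - WINDOW:i] for i in range(WINDOW, len(scaled))])
y = scaled[WINDOW:]

model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(WINDOW, 1)),
    LSTM(50),
    Dense(1),
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X, y, epochs=25, batch_size=32)

# Roll the model forward 90 days, feeding each prediction back in
window = scaled[-WINDOW:]
future = []
for _ in range(90):
    nxt = model.predict(window.reshape(1, WINDOW, 1), verbose=0)
    future.append(nxt[0, 0])
    window = np.vstack([window[1:], nxt])

# Undo the scaling and export the 90-day forecast as a CSV
forecast = scaler.inverse_transform(np.array(future).reshape(-1, 1))
dates = pd.date_range(start=pd.Timestamp.today().normalize() + pd.Timedelta(days=1),
                      periods=90)
pd.DataFrame({'date': dates, 'predicted_close': forecast.ravel()}) \
  .to_csv('eth_predictions.csv', index=False)
```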

Challenges we ran into

One challenge, once we were able to train our model, was finding a way to deploy it as a web app. We came across the Anvil platform, which none of us had used before, and had to learn how to connect it to our model on Colab so we could give the project a UI. Another challenge with Anvil was that we were unable to use it collaboratively, so we took a pair-programming approach to implementing the UI. We also struggled with forecasting for the specific dates a user requests. Originally we retrained the model on past trends prior to the user's date, but we simplified our approach: instead of creating a new prediction for each user input, we stored all future predictions up front and simply looked up and returned the previously calculated value.
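
In code, that simplification amounts to an Anvil Uplink server function that just looks up the precomputed row for the requested date, roughly like this (the uplink key and file names are placeholders; the CSVs could equally be raw GitHub URLs):

```python
import anvil.server
import pandas as pd

anvil.server.connect("YOUR-UPLINK-KEY")  # key from the Anvil app's Uplink settings

# One precomputed 90-day forecast CSV per coin (placeholder paths)
PREDICTION_FILES = {
    'BTC': 'btc_predictions.csv',
    'ETH': 'eth_predictions.csv',
    'LTC': 'ltc_predictions.csv',
}

@anvil.server.callable
def get_prediction(coin, date):
    # Load the coin's forecast and find the row for the requested date
    df = pd.read_csv(PREDICTION_FILES[coin], parse_dates=['date'])
    row = df.loc[df['date'] == pd.Timestamp(date)]
    if row.empty:
        raise ValueError(f"No prediction stored for {date}")
    # Return the already-computed value; no retraining per request
    return float(row['predicted_close'].iloc[0])

anvil.server.wait_forever()  # keep the Colab notebook serving requests
```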

Accomplishments that we're proud of

We are proud of Nikitha because it was her very first hackathon! We are also proud of the final product, which turned out to be well organized and easy to use while performing powerful calculations in the background.

What we learned

In terms of the UI, this was the first time any of us had used the Anvil platform, so we learned a lot about how to use it, especially in conjunction with Google Colab. And while we had some experience with machine learning models, this was the first time any of us had deployed a model and used it beyond research purposes.

What's next for To the Moon!

For this project we used preexisting data to create our machine learning models, but market data is constantly changing, and we think this is something that could be incorporated into the future of To the Moon!. Instead of using fixed data, we could continuously train our model on a live stream of market data so that it stays aligned with current prices.
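
A minimal sketch of that direction, assuming Gemini's public candles endpoint (`https://api.gemini.com/v2/candles/{symbol}/{timeframe}`) and a previously saved Keras model (the file name `eth_lstm.h5` is hypothetical), might look like:

```python
import numpy as np
import requests
from tensorflow.keras.models import load_model

WINDOW = 60

def fetch_daily_closes(symbol='ethusd'):
    # Gemini's public candles endpoint; each candle is
    # [time_ms, open, high, low, close, volume]
    url = f"https://api.gemini.com/v2/candles/{symbol}/1day"
    candles = requests.get(url, timeout=10).json()
    candles.sort(key=lambda c: c[0])          # oldest first
    return np.array([c[4] for c in candles])  # closing prices

def fine_tune(model_path='eth_lstm.h5'):
    model = load_model(model_path)
    closes = fetch_daily_closes().reshape(-1, 1)
    # NOTE: in practice, reuse the scaler the model was trained with;
    # rescaling from scratch here is a simplification.
    scaled = (closes - closes.min()) / (closes.max() - closes.min())
    X = np.array([scaled[i - WINDOW:i] for i in range(WINDOW, len(scaled))])
    y = scaled[WINDOW:]
    # A few extra epochs on the freshest data, then save for the next run
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    model.save(model_path)
```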

Built With

Python, Keras, Google Colab, and Anvil.
