Machine learning, from preprocessing to deployment, requires technical knowledge. It offers powerful tools for projects ranging from the medical field all the way to simple tasks like facial detection. Those with fewer resources often face barriers to tools that could help their small businesses or the people around them. We wanted to develop an easy-to-use drag-and-drop web app so that anyone can learn to develop their own machine learning model, making ML training more inclusive.

What it does

Our project currently allows users to train image-based classifiers in a two-step approach:

  1. The first step is a web page that takes image data for user-specified labels. The user selects multiple neural networks they are interested in training (e.g. EfficientNet, MobileNet, ResNet). We distribute the selected models to DCP clients/workers and train them in parallel, which lets us rapidly evaluate each model for its preliminary accuracy and inference time. This preliminary step trains for 5-10 epochs.

  2. The second step is a second web page where the user selects a neural network based on the results from the first step, letting them make an informed decision about the trade-off between inference time and accuracy. They then choose the training hyperparameters they want to evaluate for that model (e.g. learning rate, optimizer, batch size). This step also runs in parallel through DCP and trains fully, with early stopping implemented.
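The step-1 fan-out is, in essence, one short training run per candidate architecture, scored and ranked. The sketch below uses `Promise.all` as a local stand-in for the DCP workers; `trainPreliminary` and its scores are made-up placeholders, not our actual work function or results.

```javascript
// Sketch of the step-1 fan-out: each candidate architecture gets a brief
// training run and a score, in parallel. On DCP this would be a distributed
// job; Promise.all stands in for the workers here.
async function trainPreliminary(modelName) {
  // Placeholder: a real worker would train the model for 5-10 epochs on the
  // user's labeled images and measure accuracy and inference time.
  // These numbers are invented purely for illustration.
  const fakeScores = {
    MobileNet:    { accuracy: 0.88, inferenceMs: 12 },
    EfficientNet: { accuracy: 0.91, inferenceMs: 25 },
    ResNet50:     { accuracy: 0.90, inferenceMs: 40 },
  };
  return { model: modelName, ...fakeScores[modelName] };
}

async function evaluateCandidates(modelNames) {
  // Fan out one run per architecture, then sort by accuracy so the user can
  // weigh accuracy against inference time in step 2.
  const results = await Promise.all(modelNames.map(trainPreliminary));
  return results.sort((a, b) => b.accuracy - a.accuracy);
}
```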

The user then has access to the best-performing training hyperparameters for their selected model, along with the corresponding accuracy and inference time.
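Step 2's sweep amounts to expanding the user's hyperparameter choices into one work item per combination, each of which can be dispatched to a DCP worker. A minimal sketch (the parameter values are illustrative, not our defaults):

```javascript
// Expand user-selected hyperparameter choices into one work item per
// combination; each item would be handed to a DCP worker in step 2.
function expandGrid(space) {
  // space: { learningRate: [...], optimizer: [...], batchSize: [...] }
  return Object.entries(space).reduce(
    (combos, [key, values]) =>
      combos.flatMap((combo) => values.map((v) => ({ ...combo, [key]: v }))),
    [{}]
  );
}

// Example search space; these values are examples only.
const space = {
  learningRate: [1e-3, 1e-4],
  optimizer: ['adam', 'sgd'],
  batchSize: [16, 32],
};
const workItems = expandGrid(space); // 2 * 2 * 2 = 8 combinations
```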

Future work includes a third step where we convert the trained model into a personalized, user-friendly API or web app, giving users a meaningful way to use their model.

How we built it

We developed the frontend in React and the backend in Node.js, connecting them with Express and ZeroRPC. We used DCP clients to perform hyperparameter and model selection in parallel based on the user's specifications. The ML portion was built on TensorFlow.js.

Challenges we ran into

DCP is a new and interesting technology that lets us run parallel jobs, but its newness meant there were few examples or online tutorials and resources to follow, and for some specifics we relied heavily on the DCP team to help us work things through. We were not familiar with TensorFlow.js, so that took a bit of time to pick up. Additionally, we were unable to publish our heavier models (ResNet152, ResNet50, VGG16, etc.). Lastly, the time crunch of this 24-hour hackathon got to us, and we ran into some time issues.

Accomplishments that we're proud of

DCP being a new technology made it inherently difficult to pick up, but we were able to work through the syntax quite easily. We are proud of our project idea and of our progress picking up TensorFlow.js. We also have a functional, locally hosted web app with the drag-and-drop and model/hyperparameter selection flows complete!

What we learned

We learned to limit our scope slightly to leave more time to polish the final product. We also learned how to use DCP and TensorFlow.js.

What's next for Easy and Personalized ML through DCP

We hope to further explore the potential of turning this into a fully fledged demo and product. There are many steps that could be polished; in particular, if DCP is compatible with Python/TensorFlow, there are many techniques we could easily use to improve our training method. We look forward to continuing this project because we believe it has the potential to make ML more accessible.

Limitations and what we would have liked to finish

We had all the parts working and published two models to the DCP server, but we ran into issues with the server in the final couple of hours, so we were not able to demo our model training on it. We also had issues loading large models (anything over ~30 MB) and would love to get that working. Our current prototype runs on our local server because of the connection issues, but it could easily run on the DCP workers since the implementation is complete. We also were not able to incorporate data augmentation or callbacks like early stopping because we were not yet familiar enough with TensorFlow.js. We believe this project has great meaning and look forward to continuing it.
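The early-stopping behavior we wanted (TensorFlow.js ships a version of it as a built-in callback) reduces to a patience counter over validation loss. A minimal sketch of that rule, detached from any framework:

```javascript
// Minimal early-stopping rule: stop when validation loss has not improved
// for `patience` consecutive epochs. This sketch only shows the underlying
// logic, not a TensorFlow.js callback.
function makeEarlyStopper(patience) {
  let best = Infinity;
  let badEpochs = 0;
  return function shouldStop(valLoss) {
    if (valLoss < best) {
      best = valLoss;
      badEpochs = 0;
    } else {
      badEpochs += 1;
    }
    return badEpochs >= patience;
  };
}
```

With a patience of 2, a run whose validation losses go 0.9, 0.8, 0.85, 0.82 stops after the fourth epoch, since the loss has not beaten 0.8 for two epochs in a row.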

Built With

React, Node.js, Express, ZeroRPC, TensorFlow.js, DCP
