Inspiration

As software engineers, we all know what it's like to start from square one, especially with technologies like artificial intelligence that are constantly evolving around us. With concepts such as neural networks and machine learning becoming increasingly intimidating to tackle, we took inspiration from simple applications like Scratch, which taught us to code with ordinary drag-and-drop mechanics! Optml provides a friendly, easy-to-use interface for building neural networks while introducing the statistical concepts that make this technology possible.

What it does

Optml allows users to graphically design and tune their own machine learning model, learning what important terms mean, and more importantly, what they do in the process! Users can drag and drop to connect different types of nodes, each representing a layer that can be used in a sequential model. From a simple perceptron, to AlexNet, to 3D convolutions, Optml already supports a wide range of network types. Once users are satisfied with their design, they can feed in their training data and observe the results: metrics appear documenting the model's accuracy and loss as it trains for a set number of epochs. From there, they can adjust their layers and hyperparameters to get the best-performing model they can.

For those developing a real use case, a download button provides the full .h5 model with its trained weights. .h5 files can be loaded directly by Keras-based frameworks such as TensorFlow, and can also be converted for use in other ecosystems like Hugging Face.
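As a rough sketch of what that download enables, here is how an exported model might be reloaded in Keras. The layer shapes below are made up for illustration; they are not what Optml actually exports.

```python
# Reloading an Optml-exported .h5 model in Keras.
# The architecture here is a hypothetical stand-in.
import numpy as np
from tensorflow import keras

# Build and save a stand-in model the way Optml's backend would export one.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.save("model.h5")  # the same HDF5 format the download button produces

# Anyone can now reload the trained weights and run inference.
restored = keras.models.load_model("model.h5")
preds = restored.predict(np.zeros((1, 4)))  # one probability per output class
```

Because the weights travel with the file, the restored model predicts identically to the one trained in the browser, with no retraining needed.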

Challenges we ran into

The main issue we ran into was that training on large datasets clogged the backend's main thread. Our metrics logger was originally asynchronous, but since we were training on CPU at 100% utilization, the server's (a laptop's) scheduler kept pushing the metrics thread to the back of the queue, so metrics didn't update until the model had completely finished training. On a real server, or when training on a GPU, this shouldn't be an issue; in the meantime, the metrics are logged synchronously.
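The synchronous workaround can be sketched as a logger invoked directly from the training loop, so each epoch's metrics are recorded before the next epoch begins. This is a minimal stand-in, not Optml's actual backend code; the names are hypothetical.

```python
# Minimal sketch of synchronous metrics logging (names are hypothetical).
# Because log() runs on the training thread itself, the scheduler cannot
# starve it the way it starved our original background logging thread.
class MetricsLogger:
    """Collects per-epoch metrics for the frontend's live graph."""

    def __init__(self):
        self.history = []

    def log(self, epoch, loss, accuracy):
        # Synchronous: the record is stored before training continues.
        self.history.append({"epoch": epoch, "loss": loss, "accuracy": accuracy})


def train(run_epoch, epochs, logger):
    """run_epoch(epoch) performs one epoch and returns (loss, accuracy)."""
    for epoch in range(epochs):
        loss, acc = run_epoch(epoch)
        logger.log(epoch, loss, acc)  # blocks the loop, but never falls behind
    return logger.history
```

The trade-off is that logging adds a little latency to every epoch, but the frontend graph is guaranteed to stay one epoch behind at most.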

Accomplishments that we're proud of

Our graph detailing the training data updates in real time as the model trains. The UI's smooth experience, working seamlessly with the processing carried out in the backend, is something we're incredibly proud of: it clearly shows how changing the model affects the end result of training.

What we learned

Even though we were aware that different model architectures could produce varying results, we were truly surprised by the variance we observed while evaluating the training process. Seemingly negligible changes to the dataset size or the number of neurons in a layer produced wildly different training-time estimates, from milliseconds to hours!
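In hindsight, this sensitivity has a simple back-of-the-envelope explanation: a dense layer's parameter count grows with the product of its input and output sizes, so widening one layer multiplies the work per training step. A quick sketch (the layer sizes are made up for illustration, not from our benchmarks):

```python
def dense_params(n_in, n_out):
    # A fully connected layer stores one weight per input-output pair,
    # plus one bias per output unit.
    return n_in * n_out + n_out

# A "small" tweak: widening one hidden layer from 64 to 1024 units
# in a hypothetical 784-input, 10-class network.
small = dense_params(784, 64) + dense_params(64, 10)      # 50,890 params
large = dense_params(784, 1024) + dense_params(1024, 10)  # 814,090 params
```

That one slider change yields roughly a 16x jump in parameters, and a comparable jump in per-epoch compute, which is exactly the kind of blow-up a beginner can now discover interactively instead of the hard way.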

What's next for optml

Optml is not yet capable of fully utilizing the powerful Keras API, but we want to expand our reach to more of its capabilities. Visualization is a key tool in learning, and we want to give users as much data as possible and let them experiment further!

Built With
