One of our teammates has a friend who is blind, and that friend told him that many tasks sighted people treat as everyday routine can be very difficult for people who are blind. After this conversation, he did more research, contacting organizations for the blind as well as people who are blind, and found that one thing most people who are blind struggle with is matching socks and, more generally, maintaining a style in clothing. Our app is a first step toward what is possible to help with this issue.
What it does
Our app helps people who are blind or visually impaired match socks so that they don't end up wearing two different-colored socks.
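At its simplest, "do these socks match?" can be framed as a color-distance check. The following is a minimal illustrative sketch (not our actual CNN-based model): it averages the RGB pixels of each sock and compares the two averages against a distance threshold. The function names and the threshold value are assumptions for illustration.

```python
import math

def avg_color(pixels):
    """Average the (R, G, B) values of a list of pixel tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def socks_match(pixels_a, pixels_b, threshold=60.0):
    """Return True if the two socks' average colors are within a
    Euclidean distance threshold in RGB space (threshold is illustrative)."""
    a, b = avg_color(pixels_a), avg_color(pixels_b)
    return math.dist(a, b) <= threshold

# Two near-white socks match; a navy sock and a black sock do not.
white_a = [(250, 250, 250), (248, 249, 247)]
white_b = [(245, 248, 246)]
navy = [(10, 10, 120), (12, 14, 118)]
black = [(5, 5, 5), (8, 6, 7)]
```

In practice a learned model handles lighting, patterns, and texture far better than raw pixel averages, which is why we moved to a CNN.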
How we built it
We started out by delegating tasks, i.e., front-end dev, back-end dev, and ML specialist. After that, we each went off to do our specific tasks and conferred every once in a while when something came up.
Challenges we ran into
We ran into many challenges, starting with where to get a large number of sock pictures for training data and which neural network to use. The biggest challenge was the neural network: we started with a linear neural net, which wasn't sufficient, so we had to completely redo our code and learn about CNNs from scratch, which are not only more challenging to code but also more complex from a theory point of view. We also looked into Faster R-CNNs, but ended up using a pre-trained VGG16 conv-net model, which we further trained using an Nvidia Tesla V100 on Compute Engine. Our model and training data/code can all be found on GitHub. We also tried AutoML Object Detection for part of our project, but its accuracy ended up being too low for what we needed. This is likely due to our limited dataset, but there may be other factors as well.
Accomplishments that we're proud of
We are all really proud of how much new learning we packed into this hackathon. We each decided to take on something we had never done before, which was extremely ambitious, but it all paid off in the end.
What we learned
We learned many new skills throughout this hackathon. On the front end, two of the three of us had never used React Native before. On the back end, none of us had ever used PyTorch, and we had never created, or even understood, a CNN. Now, not only did we create a functioning one, but we also integrated it with a Flask server so that it can communicate with the front-end React Native app.
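The Flask integration we mention can be sketched as a single endpoint that receives sock photos from the app and returns a prediction. This is a hedged sketch, not our actual server: the route name, form-field names, and the `run_model` stub (which stands in for real CNN inference) are all assumptions for illustration.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_model(img_a: bytes, img_b: bytes) -> bool:
    """Stand-in for the trained CNN's match prediction; here we just
    compare raw bytes. Replace with real model inference."""
    return img_a == img_b

@app.route("/match", methods=["POST"])
def match():
    # The React Native app uploads two sock photos as multipart form data
    # (field names are illustrative).
    sock_a = request.files["sock_a"].read()
    sock_b = request.files["sock_b"].read()
    return jsonify({"match": run_model(sock_a, sock_b)})
```

Returning JSON keeps the contract with the React Native front end simple: the app just POSTs the photos and reads a boolean out of the response.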
What's next for Sock Match
What we plan on doing next is adding a full style-genre picker. One thing that inspired us is Facebook's picture-to-genre converter, i.e., it takes a picture and makes it look like a cartoon. We thought we could do the same, except instead of making the picture look like a cartoon, we would have the clothes match a certain style.