When we decided to take on Home Depot's challenge, we brainstormed and dug deep into the average consumer's shopping experience. When something breaks and a person wants to fix it, they often don't know which parts or services are needed to resolve the issue, which in turn makes describing the problem to an employee more difficult. We want to make this process painlessly simple, not only saving time but also raising the standard of customer service.
What it does
Our app bridges the gap between the customer and Home Depot, supporting prompt communication during idea generation and problem solving. While Home Depot already has solid search functionality and the intelligence to process simple requests, it is still learning to offer creative, complex suggestions. A customer trying to fix a malfunctioning appliance can use the app to request a diagnosis from an image and a description of the general symptoms, then efficiently pick up the necessary replacement part, saving lead time. Additionally, if someone wants to tackle a project, like building a table, our trained app can detect the topic from a picture and suggest materials, processes, and services accordingly.
How we built it
On the front end, we used Android Studio to build Java classes and XML UI layouts for a multi-page application, and Adobe Photoshop to create custom-designed buttons for a minimalistic feel. On the back end, we implemented several APIs, including Azure's Computer Vision and Functions APIs, and AWS's API ____
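As a rough illustration of how the Android client could talk to Azure's Computer Vision service, here is a minimal sketch of the Image Analysis (v3.2) REST call. The class name, endpoint value, and helper names are our own illustrative choices, not the project's actual code; the URL path, query parameter, and `Ocp-Apim-Subscription-Key` header follow Azure's published REST API.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class VisionClient {
    // Build the Image Analysis URL; visualFeatures=Tags,Description asks
    // Azure for both topic tags and a natural-language caption.
    static String buildAnalyzeUrl(String endpoint) {
        return endpoint + "/vision/v3.2/analyze?visualFeatures=Tags,Description";
    }

    // POST raw image bytes to the analyze endpoint and return the JSON reply.
    // The caller supplies the Cognitive Services subscription key.
    static String analyze(String endpoint, String key, byte[] imageBytes) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(buildAnalyzeUrl(endpoint)).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", key);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(imageBytes);
        }
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            return s.useDelimiter("\\A").hasNext() ? s.next() : "";
        }
    }
}
```

On Android this call would run off the main thread (for example in a background executor), since network I/O on the UI thread is disallowed.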
Challenges we ran into
An early challenge was getting specific tags for images from Microsoft Azure's Computer Vision API, which is trained only on a general dataset. Since the API wasn't returning specific tags or descriptions for the images we fed it, we considered two options: train the Azure Vision API on our own dataset, or integrate the Google Cloud Vision API instead. However, we learned that Google Cloud Vision was not compatible with our Microsoft Azure Function applications, so we stuck with Azure Vision, which integrates natively with Azure Functions, and worked around its generalized output. On top of that, Firebase authentication with Google logins proved challenging across multiple computers and mobile devices, both real and virtual.
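One way to work around the generalized output, sketched below under our own assumptions (the class and method names are illustrative, not the project's actual code), is to keep only the tags Azure returns with high confidence, so the app surfaces its strongest guesses. The sample string mirrors the `tags` shape of a real `/analyze` response; a production app would use a proper JSON parser rather than a regex.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagFilter {
    // Extract tag names whose confidence meets the threshold from an
    // Azure-style JSON response, e.g. {"tags":[{"name":...,"confidence":...}]}.
    static List<String> confidentTags(String json, double threshold) {
        List<String> out = new ArrayList<>();
        Matcher m = Pattern
                .compile("\\{\"name\":\"([^\"]+)\",\"confidence\":([0-9.]+)\\}")
                .matcher(json);
        while (m.find()) {
            if (Double.parseDouble(m.group(2)) >= threshold) {
                out.add(m.group(1));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Sample response: broad tags of varying confidence.
        String sample = "{\"tags\":["
                + "{\"name\":\"indoor\",\"confidence\":0.99},"
                + "{\"name\":\"appliance\",\"confidence\":0.93},"
                + "{\"name\":\"metal\",\"confidence\":0.41}]}";
        System.out.println(confidentTags(sample, 0.9)); // [indoor, appliance]
    }
}
```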
Accomplishments that we're proud of
Our accomplishments go hand in hand with the obstacles we faced during the hackathon. Getting the APIs to work for us was definitely an accomplishment, but not simply because we got them working: rather than just dropping the APIs into our code, we dove into the source code and studied it to understand exactly what they do and how they do it.
What we learned
This hackathon gave us a great opportunity to learn about a wide range of topics, but the ones we learned the most about were APIs and dependencies. In terms of technical collaboration, our team learned the importance of verbal communication and of being careful with version control tools like GitHub. We also grasped the value of mentorship and of taking the initiative to discuss pressing ideas and ask experts for help.
What's next for ProjectZero
We are looking to expand the applications of our project beyond home improvement using imaging and collaboration. After building a solid dataset with proven accuracy and breadth, we'd like to take our app into other, more ambitious areas such as interior design, vehicle care, fashion, and makeup.