✨ Inspiration
People spend days, months, or even years at a time in their rooms. However, one overlooked detail about these rooms is that they might be a cause of the everyday anxiety and stress we experience. Unfortunately, because we are so accustomed to our rooms, we often don't notice. According to Ohio State University, the interior design and layout of our rooms can affect our mental health.
🚀 What it does
ZenDen uses Google’s Cloud Vision AI to analyze factors like dominant colors, light levels, and detected objects. These factors are fed into an algorithm that outputs a score ranging from 0 to 100. Along with recommendations, the user receives an analysis aimed at supporting their mental health.
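As an illustration of the idea (the factor names and weights below are hypothetical stand-ins, not the exact values we used):

```python
# Hypothetical sketch of the scoring step: combine a few room factors
# (each normalized to the 0..1 range) into a single 0-100 score.
# The factor names and weights are illustrative, not our shipped values.

WEIGHTS = {
    "color_calmness": 0.40,  # how calming the dominant colors are
    "lighting": 0.35,        # brightness / natural-light level
    "tidiness": 0.25,        # fewer detected clutter objects -> higher
}

def score_room(factors):
    """Weighted average of normalized factors, scaled to 0-100."""
    total = sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)
    return round(100 * total)

# e.g. score_room({"color_calmness": 0.8, "lighting": 0.9, "tidiness": 0.6})
# gives a score in the high 70s.
```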
🔧 How we built it
For the frontend, we used Flutter to create a cross-platform (iOS + Android) app. We wrote a middleware in Python (Flask) to interface with the Google Cloud Vision API and handle the scoring calculations and lighting analysis.
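A minimal sketch of what that middleware might look like (simplified; the endpoint name and the brightness heuristic are illustrative, not our exact code):

```python
# Simplified Flask middleware: accept a room photo from the Flutter app,
# ask Cloud Vision for its dominant colors, and return raw factors.
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS

@app.route("/analyze", methods=["POST"])
def analyze():
    # The Flutter app uploads the room photo as multipart form data.
    content = request.files["image"].read()
    image = vision.Image(content=content)

    # Dominant colors feed the color and lighting parts of the score.
    props = client.image_properties(image=image).image_properties_annotation
    dominant = props.dominant_colors.colors

    # Rough lighting estimate: pixel-fraction-weighted average brightness.
    brightness = sum(
        (c.color.red + c.color.green + c.color.blue) / 3 * c.pixel_fraction
        for c in dominant
    )

    return jsonify({
        "brightness": brightness,
        "colors": [[int(c.color.red), int(c.color.green), int(c.color.blue)]
                   for c in dominant],
    })
```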
💥 Challenges we ran into
It was our first time using the Google Cloud Vision API. As beginners at using AI in our apps, we found it difficult to incorporate successfully. We had issues getting the correct room colors from the API and had to adjust some of the score weights to account for detection issues.
Another major challenge was using a local database as opposed to a cloud database like Firebase. Using JSON to store room data locally was a minor hurdle, but storing images proved a major one. Because Flutter caches images to a temporary file, the images we stored weren't available later. We circumvented this by copying each image to a permanent location and retrieving it from there.
In the last few hours of the hackathon, we noticed that the score calculations were far off: images we judged to be near-perfect rooms were scoring as low as 35. We fixed this by combining the image labels with the object detection results from the Google API, as sketched below.
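Conceptually, the fix looked something like this (the label and object sets below are illustrative stand-ins for our actual heuristics):

```python
# Sketch: cross-check the base score using both label detection and
# object localization, so one unreliable signal can't tank the score.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

CALMING_LABELS = {"plant", "houseplant", "window", "daylight"}  # illustrative
CLUTTER_OBJECTS = {"Television", "Luggage & bags"}              # illustrative

def label_adjustment(content: bytes) -> float:
    image = vision.Image(content=content)

    labels = client.label_detection(image=image).label_annotations
    objects = client.object_localization(image=image).localized_object_annotations

    # Confidence-weighted bonus for calming labels, penalty for clutter.
    bonus = sum(l.score for l in labels if l.description.lower() in CALMING_LABELS)
    penalty = sum(o.score for o in objects if o.name in CLUTTER_OBJECTS)
    return bonus - penalty  # added to the base score, clamped to 0-100 elsewhere
```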
🌌 Accomplishments that we're proud of
We are proud of steadily learning how to leverage the power of AI in our hackathon projects. Our last hackathon project was also based on AI, and we plan to keep learning. What started out seeming very ambitious ended up working very well.
🎓 What we learned
We learned how to use various Google Cloud resources in a hackathon project, and gained experience writing an API in Python and interfacing with it from Flutter.
🔮 What's next for ZenDen
As with all projects using AI, the model itself can be further fine-tuned and better trained. Google offers the option to train custom object detection, which would definitely help us calculate more accurate scores. We could also improve the UI by adding page animations and displaying more detail about how the score was calculated in a user-friendly way.
Getting the correct color of the room still poses problems: the dominant color from Google’s API sometimes reflects major foreground elements (a TV, darker bed covers, etc.) instead of the wall color. This could be improved by computing the color and lighting ourselves, excluding areas of the image that were detected as objects.
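A rough sketch of that idea, assuming Pillow and NumPy plus the normalized bounding boxes that Vision's object localization returns (this is a proposal, not yet part of ZenDen):

```python
# Hypothetical improvement: mask out regions Vision detected as objects,
# then average the remaining pixels to estimate the wall color.
import numpy as np
from PIL import Image

def wall_color(image_path, detected_objects):
    img = np.asarray(Image.open(image_path).convert("RGB"))
    h, w, _ = img.shape
    keep = np.ones((h, w), dtype=bool)

    # Each detection carries a normalized bounding polygon (0..1 coords).
    for obj in detected_objects:
        xs = [v.x for v in obj.bounding_poly.normalized_vertices]
        ys = [v.y for v in obj.bounding_poly.normalized_vertices]
        x0, x1 = int(min(xs) * w), int(max(xs) * w)
        y0, y1 = int(min(ys) * h), int(max(ys) * h)
        keep[y0:y1, x0:x1] = False  # exclude the TV, bed covers, etc.

    # Average color of everything outside detected objects ≈ wall color.
    return tuple(int(c) for c in img[keep].mean(axis=0))
```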
We also generalized many factors due to time constraints. With more time, tools like properly executed edge detection and better room-layout detection could be incorporated for more accurate measurements.
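For example, an edge-density measure via OpenCV's Canny detector could serve as a rough clutter proxy (purely illustrative, not in the current app):

```python
# Illustrative only: one way edge detection could feed layout analysis.
import cv2

def edge_density(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)
    # Fraction of edge pixels: a rough proxy for visual busyness/clutter.
    return float((edges > 0).mean())
```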