Inspiration

I am about to become a father, and while that is exciting, it also made me more aware of how unsafe everyday spaces can be for a baby. My wife and I currently live in a basement during a cold winter, and as we prepared the space for our newborn, I realized how many objects around us could become potential hazards. BabyNest came from imagining what it would be like to have a baby safety consultant walk through our home and point out what needs to be removed, secured, or purchased to make the space safer.

What it does

Imagine bringing a baby safety consultant into your house: as you walk through your living space, the consultant advises you on what you need and what you should remove, writes down the hazards it spots, and finally tells you which items to buy, such as socket covers or stair gates.

How we built it

This was built around a baby's view. From 0-3 months, a baby can only see in black and white, so we wanted the user to see what the baby sees: when the user selects 0-3 months as the baby's age, the home screen turns black and white. As the baby gets older, it begins to see red, green, and blue, but not with full clarity, so we use a blue-to-red transition to suggest that limited clarity. At 7-12 months, the baby can recognize objects and has depth perception, and at 1-2 years the toddler may be climbing the stairs and opening doors and cupboards, so the app was built with the intention of being a helping hand through each stage of the baby's transformation.
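The age-based view described above can be sketched as a small mapping from the selected age range to a CSS filter applied to the home screen. This is a minimal hypothetical sketch, not the app's actual code; the `filterForAge` helper, the range labels, and the specific filter values are all my assumptions.

```typescript
// Hypothetical age ranges matching the stages described above.
type AgeRange = "0-3m" | "4-6m" | "7-12m" | "1-2y";

// Return a CSS filter string approximating what a baby of that age sees.
// The exact filter values are illustrative assumptions.
function filterForAge(range: AgeRange): string {
  switch (range) {
    case "0-3m":
      // Newborns see mostly in black and white.
      return "grayscale(100%)";
    case "4-6m":
      // Some color emerges, but without full clarity: shift and soften.
      return "hue-rotate(180deg) saturate(60%) blur(2px)";
    case "7-12m":
      // Objects and depth come into focus; only slight softening remains.
      return "blur(0.5px)";
    case "1-2y":
      // Toddler vision: no filter needed.
      return "none";
  }
}
```

The returned string could then be assigned to the home-screen container's `style.filter`, so switching the baby's age immediately re-renders the whole view.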

Challenges we ran into

There were many: the live scan and interactive walkthrough took too long to respond and did not always identify the right hazards, and iterating on the tool burned through a lot of API credits. I also had to keep iterating as I learned more about what a baby sees and learns from its surroundings; babies are smart, so they catch on and learn quickly. Because this is not a full computer vision API, it detects the same hazard multiple times, and the duplicates pile up rapidly. Google AI Studio kept redoing the entire code from scratch every time I prompted it, so I had to double-check all the previous features to make sure they still worked. I did lose all of my work with one prompt, but luckily I had downloaded the file beforehand; after that I decided to use the checkpoint feature to roll back.
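One way to mitigate the duplicate detections is a simple de-duplication pass over the model's output. The sketch below is hypothetical: the `Hazard` shape, the normalized coordinates, and the distance threshold are my assumptions, not BabyNest's actual data model. It keeps a detection only if no already-kept hazard has the same label nearby in the frame.

```typescript
// Hypothetical detection record; the real app's data model may differ.
interface Hazard {
  label: string; // e.g. "exposed socket"
  x: number;     // normalized horizontal position in the frame (0..1)
  y: number;     // normalized vertical position in the frame (0..1)
}

// Drop detections whose label matches an already-kept hazard
// sitting within `minDist` of it in the frame.
function dedupeHazards(detections: Hazard[], minDist = 0.1): Hazard[] {
  const kept: Hazard[] = [];
  for (const h of detections) {
    const isDuplicate = kept.some(
      (k) => k.label === h.label && Math.hypot(k.x - h.x, k.y - h.y) < minDist
    );
    if (!isDuplicate) kept.push(h);
  }
  return kept;
}
```

Running this after every scan keeps the hazard log from growing with repeats of the same socket or stair, at the cost of possibly merging two genuinely distinct hazards that sit very close together.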

Accomplishments that we're proud of

Using Google AI Studio, I was able to prototype my idea and see it come to reality: adding hazards to a specific room, checking the current hazards, logging all of them, a chat interface, and a link to the shopping cart. Seeing all of these moving parts managed together is fantastic. My wife has supported me throughout and is really proud of this project. That is my real accomplishment: that my wife is happy and proud of the idea, the work, and the imagination that went into it.

What we learned

I learned that this is possible and not a very difficult task. I learned to use the different Gemini models for function calling, the Live API, and chat, and how to prompt: be as descriptive as possible while making sure the chat does not get lost when given too much context. The models Gemini makes available can accomplish most of the ideas I have, and I can keep using them for future projects.
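For the function-calling piece, a tool is described to the model as a declaration with a name, a description, and an OpenAPI-style parameters schema. The declaration below is a hypothetical sketch of how an "add hazard" tool might be shaped; the `add_hazard` name and its fields are my assumptions, not BabyNest's actual schema.

```typescript
// Hypothetical function declaration the model could call to log a hazard.
// The shape follows Gemini's function-calling convention (name, description,
// OpenAPI-style parameters), but the specifics here are assumed.
const addHazardDeclaration = {
  name: "add_hazard",
  description: "Log a hazard spotted in the current room.",
  parameters: {
    type: "object",
    properties: {
      room: { type: "string", description: "Room where the hazard was seen" },
      hazard: { type: "string", description: "Short label for the hazard" },
      severity: { type: "string", enum: ["low", "medium", "high"] },
    },
    required: ["room", "hazard"],
  },
};
```

With a declaration like this registered as a tool, the model can respond to a walkthrough frame with a structured call instead of free text, and the app then writes the hazard into its log.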

What's next for BabyNest

Definitely fixing the multiple detections of the same hazard. A better UI, plus web search capability to find baby-safe products and add them to the shopping cart. And definitely using it in the place I live with my in-laws, to make it a better and safer space for the baby to thrive.
