This project is inspired by the dedication of hardworking, sleep-deprived parents across the world.

The IBMS uses a deep convolutional neural network to learn, recognize, and distinguish the sounds of infants in distress. When it detects crying, the system plays soothing music to ease the infant back to a peaceful slumber.
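The detect-and-respond loop can be sketched as below. This is a minimal illustration, not our exact implementation; `classify_chunk` and `play_lullaby` are hypothetical stand-ins for the trained model and the audio player, and the idea of requiring several consecutive "crying" chunks before reacting is an illustrative debouncing assumption.

```python
# Minimal sketch of the monitor loop: classify incoming audio chunks and
# trigger soothing music after several consecutive chunks are labeled as
# crying. classify_chunk and play_lullaby are hypothetical stand-ins.

def run_monitor(chunks, classify_chunk, play_lullaby, needed=3):
    """Play music once `needed` consecutive chunks are classified as crying."""
    streak = 0
    for chunk in chunks:
        if classify_chunk(chunk) == "crying":
            streak += 1
            if streak >= needed:
                play_lullaby()
                streak = 0  # reset so one cry episode triggers one response
        else:
            streak = 0  # any non-crying chunk breaks the streak
```

Requiring a short streak rather than a single chunk is one simple way to avoid reacting to a lone misclassified sample.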

To construct the artificial intelligence that drives the system, we acquired numerous audio samples of infants crying, contrasted against many other samples of ambient noise, ambient conversation, and silence. Using this library of samples, we trained a model with TensorFlow over numerous iterations, with classification accuracy improving as training progressed. Upon completion of training, TensorFlow produced a working model that can effectively distinguish the four classes of audio samples, with a fifth label, "unknown", for sounds it cannot confidently place.
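One way the fifth "unknown" label can arise is as a fallback when no trained class is confident enough. The sketch below illustrates that idea with a plain-Python softmax over the model's raw output scores; the 0.6 threshold is an illustrative assumption, not a number we tuned.

```python
import math

# The four trained classes; "unknown" is produced as a fallback.
LABELS = ["crying", "ambient noise", "ambient conversation", "silence"]

def predict_label(logits, threshold=0.6):
    """Map raw output scores to a label, falling back to "unknown"
    when no single class reaches the confidence threshold.
    The threshold value here is an illustrative assumption."""
    # Numerically stable softmax over the four class scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best] if probs[best] >= threshold else "unknown"
```

For example, a strongly peaked score vector maps to its class, while a flat one (all four classes near 25%) falls through to "unknown".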

Throughout the project, we navigated many challenges, both hardware and software. Initially, the dual-boot Linux partition on one computer was not functioning properly, which we resolved with a re-partition and a fresh install of Ubuntu. Installation errors followed, stemming from the many dependencies and updates that had to be acquired. The neural net itself suffered from overfitting: the model became too closely fit to a limited data pool, which reduced its effectiveness at classifying new data points. And finally, the largest challenge of all was the lack of time, as properly training the neural net demanded significant computing resources and hours.
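A common way to catch overfitting like this is early stopping: track loss on a held-out validation split and stop once it stops improving. The sketch below shows the stopping rule only; it is a generic illustration, not the exact mechanism we used.

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch index at which training should stop: the point
    where validation loss has failed to improve for `patience` epochs
    in a row (a symptom of overfitting). If no stop is triggered,
    return the final epoch."""
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, since_best = loss, 0  # new best: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return epoch  # validation loss has stalled or risen
    return len(val_losses) - 1
```

Rising validation loss while training loss keeps falling is the classic signature of a model memorizing a limited data pool.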

As a team, we feel that we had good synergy and complementary skills. In addition, successfully training the neural net and having a working prototype at the end of the hackathon is a great source of satisfaction for our team. Collectively, we are proud to be working with cutting-edge technologies, pushing innovation, and looking to the future.

Throughout the 24 hours of Hackpoly, our team learned various skills, including manipulating audio files and samples and training a neural net.
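As a taste of the audio-manipulation side, the standard library alone can read raw samples out of a WAV file. This sketch computes a file's RMS amplitude, a simple loudness measure that is handy when inspecting training samples; it assumes 16-bit mono WAV input and is an illustration, not part of our actual pipeline.

```python
import math
import struct
import wave

def rms_of_wav(path):
    """Compute the RMS amplitude of a 16-bit mono WAV file, a simple
    loudness measure useful when inspecting audio samples."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    # Unpack the raw bytes as little-endian signed 16-bit samples.
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return math.sqrt(sum(s * s for s in samples) / len(samples))
```

For a pure sine wave of amplitude A, this returns roughly A divided by the square root of 2, which is a quick sanity check when testing.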

Since the project is currently only a prototype, there are many more features and development stages that could be implemented. The first is migrating the AI to a stand-alone embedded system, namely NVIDIA's Jetson TX2. In addition, the following features were considered for implementation:

- A secondary response in the case of continued infant distress (essentially notifying a parent or guardian)
- Video and motion tracking of the infant in the unlikely event that the infant exits the crib
