Why Nima was created
Nima was inspired by the rising trend of "friendly" technology. We wanted to create something beautiful that combines the usefulness of a voice assistant with a personal response system for a better user experience. Nima was therefore designed as an autonomous voice assistant robot that can recognize users' emotions and respond to them accordingly. She will smile back at you to let you know she sees you, and respond to any request you make.
How she was made
Nima's body was made from cardboard and plastic, but in the future we plan to 3D print it. Mounting the hardware in the right spots and keeping it steady on the box was challenging given the shape of the parts, but with lots of tape and determination we made it work. She moves with the help of an Arduino that drives her wheels through motors and controls her hand through a servo so she can say hello.
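The Arduino side of this setup could look roughly like the sketch below. It is only an illustration of the idea described above: the pin numbers, motor speed, and wave timing are assumptions, not the team's actual wiring.

```cpp
// Hypothetical motion sketch for Nima: wheels on PWM motor pins,
// hand on a hobby servo. Pin numbers and timings are assumed.
#include <Servo.h>

const int LEFT_MOTOR_PIN = 5;   // PWM pin driving the left wheel motor
const int RIGHT_MOTOR_PIN = 6;  // PWM pin driving the right wheel motor
const int HAND_SERVO_PIN = 9;   // servo that waves Nima's hand

Servo handServo;

void setup() {
  pinMode(LEFT_MOTOR_PIN, OUTPUT);
  pinMode(RIGHT_MOTOR_PIN, OUTPUT);
  handServo.attach(HAND_SERVO_PIN);
}

// Sweep the servo up and back once so Nima "says hello".
void waveHello() {
  for (int angle = 0; angle <= 90; angle += 5) {
    handServo.write(angle);
    delay(15);  // give the servo time to reach each position
  }
  for (int angle = 90; angle >= 0; angle -= 5) {
    handServo.write(angle);
    delay(15);
  }
}

void loop() {
  analogWrite(LEFT_MOTOR_PIN, 128);   // roll forward at about half speed
  analogWrite(RIGHT_MOTOR_PIN, 128);
  waveHello();                        // wave while moving
  delay(2000);                        // pause before the next wave
}
```

A motor driver board (e.g. an H-bridge) would normally sit between the Arduino pins and the motors, since the pins alone cannot supply enough current.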
We originally planned to use a screen with a Raspberry Pi and a camera, but we ran into problems configuring them with the computer vision tools we needed. The team decided instead to create an app that runs the machine learning, using the smartphone for the screen and camera.
We learned that using a speaker and microphone for user input is harder than we expected, since we lacked the necessary parts. We built the body and interface, but limited storage space kept us from completing the voice assistant component or adding computer vision. These are critical to our vision, so we will keep working on them; for now, we have a working robot with a user interface.
The first next step is to complete the machine learning integration so that Nima can reach the level of interactivity we hope for. Once that is done, we need to work out how to efficiently integrate voice input and recognition for the assistant component of the project. Lastly, we want to reinforce Nima's mechanics by building a more solid body.