Our biggest inspiration for this project was finding a way to express, or encapsulate, almost "typical" human emotions visually through immersive art. There are many ways to express emotions, but our main focus was a project combining hardware (Arduino) and software (computer vision). We wanted high-powered image processing, so we used the machine learning capabilities of the Arduino Portenta H7 as our camera input. Once we had something that could literally see the world, we felt the need to re-express it through a new artistic vision; for this project we chose flashing lights and color. We became friends three years ago through our shared love of combining computing and creativity, so it was only fitting to return to the roots of our connection!
What it does
This project uses the onboard camera of the Arduino Portenta H7 to recognize the emotion on a person's face, then displays that emotion as a color on an artistic light-matrix display driven by a separate Arduino Uno.
How we built it
First, we found a dataset of thousands of face images tagged with six emotions: happy, sad, angry, surprised, neutral, and fearful. We loaded this dataset into Edge Impulse, which let us build a transfer-learning image classifier that assigns an image to one of these categories. The model was then optimized for the Arduino Portenta H7 with the Vision Shield and loaded onto the board. Finally, we wrote a script that picks the most likely of the listed emotions for each frame and prints that class's id to the serial console.
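The classification step can be sketched as follows; this is an illustrative Python version of the logic, not the actual OpenMV script, and the label order and confidence scores are assumptions:

```python
# Hypothetical sketch: given per-class confidence scores from the
# Edge Impulse model, pick the most likely emotion and print its
# class id (as the Portenta does over the serial console).

EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral", "fearful"]

def most_likely_class(scores):
    """Return (class_id, label) for the highest-confidence emotion."""
    class_id = max(range(len(scores)), key=lambda i: scores[i])
    return class_id, EMOTIONS[class_id]

# Example frame where the model leans toward "surprised".
scores = [0.10, 0.05, 0.08, 0.55, 0.15, 0.07]
class_id, label = most_likely_class(scores)
print(class_id)  # this id is what gets written to the serial console
```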
Once this script was working, we connected the Portenta to a computer that was also connected to an Arduino Uno driving a NeoPixel light matrix. On the computer, we read serial messages from the Portenta and forwarded them to the Uno, relaying the vision data.
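A minimal sketch of the relay idea, assuming the Portenta prints one class id per line: the parsing is shown as a testable function, and the pyserial loop is left commented since it needs hardware (port names and baud rates below are assumptions):

```python
# Validate a raw serial line from the Portenta before forwarding it.

def parse_class_id(raw: bytes, num_classes: int = 6):
    """Decode a serial line like b'3\n' into a class id, or None if invalid."""
    try:
        class_id = int(raw.decode("ascii").strip())
    except (UnicodeDecodeError, ValueError):
        return None
    return class_id if 0 <= class_id < num_classes else None

# With real hardware (hypothetical port names):
# import serial
# portenta = serial.Serial("/dev/ttyACM0", 115200)
# uno = serial.Serial("/dev/ttyACM1", 9600)
# while True:
#     class_id = parse_class_id(portenta.readline())
#     if class_id is not None:
#         uno.write(bytes([class_id]))  # forward the emotion id to the Uno
```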
Finally, we programmed the Uno to interpolate between colors based on the emotion data it receives, settling on a different color for each emotion chosen according to color psychology and the emotional associations of each color.
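The interpolation idea can be sketched like this (in Python rather than the Arduino C++ for brevity; the RGB palette is our assumption, not the project's exact colors). Each update moves the displayed color a fraction of the way toward the target, so the matrix fades smoothly when the detected emotion changes:

```python
# Assumed emotion-to-color palette, loosely based on color psychology.
EMOTION_COLORS = {
    "happy":     (255, 200, 0),    # warm yellow
    "sad":       (0, 70, 200),     # deep blue
    "angry":     (220, 30, 30),    # red
    "surprised": (255, 120, 200),  # pink
    "neutral":   (200, 200, 200),  # soft white
    "fearful":   (120, 0, 180),    # violet
}

def step_toward(current, target, rate=0.1):
    """Move each RGB channel a fraction `rate` of the way toward target."""
    return tuple(round(c + (t - c) * rate) for c, t in zip(current, target))

# Repeated steps converge on the target color, giving a smooth fade.
color = (0, 0, 0)
for _ in range(50):
    color = step_toward(color, EMOTION_COLORS["sad"])
```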
Challenges we ran into
We wanted to keep the training time for our model manageable, but downloading the dataset alone was lengthy, and the images were unevenly distributed across emotion categories: "sad" had only 3,000 images, while "happy" had over 7,000. We were also concerned about accuracy, as we were reaching at most 40%. Researching reported results for emotion classification, the best figure we found was around 50%, so we kept the 40% model; within the time available it didn't seem feasible to optimize it much further, especially with it running on a microcontroller.
Figuring out the path from the Portenta H7 to the Uno over serial to the LED/NeoPixel matrix was also difficult at first, but we worked it out through trial and error. A major issue was the Portenta failing to connect to the serial port, which we solved by re-flashing the firmware and uploading the code to the board a different way. A minor issue was time pressure when 3D-printing our Arduino cover: we couldn't use power tools to clean it up, so it didn't meet our aesthetic expectations.
Accomplishments that we're proud of
We are proud of figuring out how to incorporate art and computing into one project; a long-term goal of ours is to combine software and hardware into an art piece that is personal to both of us. Additionally, we are super proud that we successfully used the Portenta H7 for the first time, as the board was new to both of us.
What we learned
The biggest thing we learned was probably how to use the Arduino Portenta. Before this hackathon, neither of us had ever seen one. Now we know how to create a machine learning model for the Portenta in Edge Impulse, connect to OpenMV, and interface from the Portenta to a computer and from the computer to the LED grid. We also learned a lot about NeoPixel interfacing with Arduino, facial recognition/detection algorithms, and serial communication.
What's next for Wavelength
We are hoping to expand and classify more emotions in the future; six is not enough to encapsulate how humans express themselves to one another. Combining different design patterns with more colors on the LED pixel matrix is on our radar as well. We also hope to train a more accurate model that beats 40% accuracy, although most emotion-detection models have relatively low accuracy (at most 55-60%).