Inspiration
I got my inspiration from the research paper "From Motions to Emotions: Can the Fundamental Emotions be Expressed in a Robot Swarm?" by Maria Santos and Magnus Egerstedt. The paper discusses moving a swarm of robots along predefined trajectories to express various emotions. Something like this could be used to convey emotions to deaf people: audio can easily be converted to text, which this project then turns into a visual response.
What it does
It takes in text input, analyzes the emotion conveyed by the text, and then displays it through a pattern traced by drones. I extended the idea in the research paper by adding emotion prediction from the input text and then displaying the simulation corresponding to that emotion. A linear SVM is used for training the model; this algorithm is efficient for multi-class labelling over large datasets. Three.js is used to demonstrate the robot movements: a Blender model of a drone serves as the moving object, and 12 such drones move along a particular trajectory for each emotion. The emotions included are Happiness, Sadness, Anger, and Surprise.
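The text-to-emotion step could be sketched roughly as below. This is a minimal illustration, not the project's actual training code: it assumes a scikit-learn pipeline with TF-IDF features and uses a tiny toy dataset in place of the real labelled corpus.

```python
# Hypothetical sketch: a linear SVM emotion classifier over TF-IDF features.
# The texts/labels here are illustrative stand-ins for a real training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "I am so happy today", "what a wonderful surprise",
    "this makes me furious", "I feel miserable and alone",
    "I can't believe it, totally unexpected", "everything is great",
    "I am very angry about this", "I am sad and want to cry",
]
labels = [
    "Happiness", "Surprise", "Anger", "Sadness",
    "Surprise", "Happiness", "Anger", "Sadness",
]

# LinearSVC trains a one-vs-rest linear SVM per class, which handles
# multi-class labelling and scales well to large datasets.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

print(model.predict(["I am really happy"])[0])
```

The predicted label ("Happiness", "Sadness", "Anger", or "Surprise") is then handed to the visualization layer, which selects the trajectory for that emotion.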
Challenges I ran into
Initially, I had difficulty loading the Blender model of the drone. I also faced some difficulties connecting the text to an emotion and the emotion to the motion of the models.
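The emotion-to-motion link amounts to mapping each emotion label to trajectory parameters that every drone evaluates over time. The sketch below is purely illustrative (the project renders this in Three.js): the circular trajectories and their radius/speed values are assumptions, not the paper's actual motion patterns.

```python
# Hypothetical sketch: map an emotion label to a parametric trajectory
# and compute positions for 12 drones evenly spaced along it.
import math

NUM_DRONES = 12

# Illustrative parameters per emotion; the real trajectories differ.
TRAJECTORIES = {
    "Happiness": {"radius": 5.0, "speed": 2.0},  # fast, wide circle
    "Sadness":   {"radius": 2.0, "speed": 0.5},  # slow, tight circle
    "Anger":     {"radius": 4.0, "speed": 3.0},
    "Surprise":  {"radius": 3.0, "speed": 1.5},
}

def drone_positions(emotion, t):
    """Return (x, y) for each drone at time t, evenly phased around
    the circular trajectory chosen for the given emotion."""
    params = TRAJECTORIES[emotion]
    positions = []
    for i in range(NUM_DRONES):
        phase = 2 * math.pi * i / NUM_DRONES
        angle = params["speed"] * t + phase
        positions.append((params["radius"] * math.cos(angle),
                          params["radius"] * math.sin(angle)))
    return positions
```

In the actual project, the equivalent of `drone_positions` would run inside the Three.js animation loop, updating the position of each loaded drone model every frame.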
What's next
- Adding an audio-to-text converter
- Increasing the number of emotions displayed