Inspiration

We are team AutoAED from the Colorado School of Mines. More than 350,000 people in the United States suffer cardiac arrest each year, and many of these incidents are unnecessarily fatal. When bystanders react quickly with an automated external defibrillator (AED), victims' survival rates can more than double. Yet an AED is used in only 19 percent of cases, largely because bystanders cannot quickly find one and bring it to the victim; under the shock and stress of the moment, many forget to use an AED at all. Large public buildings commonly stock just one device per floor, so in places such as malls and convention centers an AED can be extremely difficult to find, especially during an emergency. Compounding the problem, these venues hold large crowds, which raises the odds that someone present will suffer a cardiac incident. Together, these factors create a perfect storm for worst-case outcomes when cardiac arrest strikes in public places.

Our team found this problem unacceptable. We were inspired to create a solution that dramatically decreases the time it takes to get an AED to someone in need, hopefully resulting in fewer unnecessary deaths from cardiac arrest.

What it does

Our solution consists of two sub-systems that combine to rapidly deploy an AED to a victim of cardiac arrest: 1) a camera that uses a machine-learning neural network to detect an individual in cardiac arrest, and 2) a fully autonomous AED delivery robot capable of navigating a known floor plan and identifying the individual in need.

Neural network camera - Our solution combines computer vision with a machine-learning algorithm to detect a victim undergoing cardiac arrest. The system takes a live camera feed, converts it to grayscale picture frames, and feeds those frames into the algorithm, which outputs either "all clear" or "cardiac arrest detected". Our custom-trained convolutional neural network can identify someone suffering a debilitating episode of cardiac arrest even when several other individuals are in frame.
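As a rough illustration of this pipeline (not our exact code), the loop below grabs frames with OpenCV, converts them to grayscale, and runs them through a trained model. The model file name `cardiac_model.h5` and the output class order are hypothetical placeholders.

```python
# Minimal sketch of the detection loop, assuming a trained TensorFlow/Keras
# model saved as "cardiac_model.h5" (hypothetical name) that takes a
# 64x114 grayscale frame and outputs two class scores.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("cardiac_model.h5")
cap = cv2.VideoCapture(0)  # live camera feed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # convert to grayscale
    resized = cv2.resize(gray, (114, 64))             # width x height for a 64x114 input
    x = resized.astype(np.float32)[None, :, :, None] / 255.0
    all_clear, arrest = model.predict(x, verbose=0)[0]  # two output nodes (order assumed)
    if arrest > all_clear:
        print("cardiac arrest detected")  # deploy signal would be sent here
    else:
        print("all clear")
```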

Autonomous AED robot - Once cardiac arrest is detected, the AED robot is deployed. A Raspberry Pi receives a deploy signal from the neural network camera over a network connection. Following a predetermined path, the robot navigates to the room containing the victim while simultaneously avoiding obstacles with an ultrasonic sensor. Once it arrives, the robot uses an onboard camera with computer vision to draw a bounding box around the victim, records the victim's position relative to the robot, and steers in to deliver the AED.
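To sketch the final approach step, here is a minimal proportional-steering routine driven by the bounding box. The frame width, gain, and `motors.set_speeds` interface are hypothetical stand-ins for our actual hardware code.

```python
# Hedged sketch: steer toward the victim using the bounding box center.
# Assumes (x, y, w, h) in pixels from the vision system and a hypothetical
# motor driver exposing set_speeds(left, right).
FRAME_WIDTH = 640  # camera resolution assumption
KP = 0.004         # proportional gain (tuning assumption)

def steer_toward(box, motors, base_speed=0.4):
    x, y, w, h = box
    center_x = x + w / 2
    error = center_x - FRAME_WIDTH / 2   # > 0 means the victim is to the right
    turn = KP * error
    # slow one wheel and speed the other to rotate toward the box center
    motors.set_speeds(base_speed + turn, base_speed - turn)
```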

How we built it

To build the machine-learning model for the neural network camera, we first assembled a training library. We repeatedly recorded a person behaving normally in frame and then falling to the ground in the manner of someone suffering cardiac arrest. Portions of these videos were labeled "all clear" and "AED needed", and each frame was converted to an individual image and fed into a TensorFlow convolutional neural network running on Google Cloud to build the model responsible for recognizing cardiac arrest. The network uses 10 layers, with a 64x114-pixel input layer and a 2-node output layer that decides between the two classes. We chose a convolutional neural network because its use of local spatial coherence makes it effective at image classification. The prototype robot used to deliver the AED was built around a Raspberry Pi and several sensors: break-beam infrared encoders for general localization, an ultrasonic sensor for obstacle avoidance, and a camera for computer vision and victim localization.
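For reference, here is a hedged Keras sketch of a 10-layer network in the shape described above (64x114 grayscale input, two output nodes). The specific layer types, filter counts, and optimizer are illustrative assumptions, not our exact architecture.

```python
# Illustrative 10-layer CNN matching the described input/output shape.
# Layer choices and hyperparameters are assumptions for the sketch.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 114, 1)),        # 64x114 grayscale frames
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(2, activation="softmax"),   # "all clear" vs. "AED needed"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```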

Challenges we ran into

Fine-tuning the training data fed into the TensorFlow model was extremely challenging, and we resorted to a trial-and-error approach. We recorded many types of videos with different people, camera angles, and distances, built several training libraries from them, and evaluated each resulting model on its accuracy, which let us hone in on a strategy that performs well for our specific application.

Localization was a huge challenge for the AED robot. We initially set out to localize by measuring Bluetooth signal strength, expecting that Bluetooth triangulation would give sufficiently accurate results, and planned to use a gyroscope to measure orientation. Both attempts proved unsuccessful, so we pivoted to a different localization method: makeshift encoders to keep track of distance traveled and computer vision to orient the robot in the correct direction.

Lastly, implementing computer vision to steer toward the victim proved very difficult. We were able to implement object locating on a larger machine, along with a controller to keep that object in frame, but ran into trouble when we tried to move these processes onto the Raspberry Pi. After several hours we decided to outsource the image processing to another computer, at which point we found it extremely difficult to send images wirelessly over network sockets. We could not get this working before the end of the competition, but we did demonstrate a proof of concept on a more powerful computer (included in our final video), and we anticipate that with slightly improved hardware aboard the AED robot we could implement all of these capabilities.
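For anyone attempting the same offloading step, below is a minimal sketch of the kind of scheme we were working toward: JPEG-compressed frames sent over a TCP socket with a 4-byte length prefix. The function names are our own illustration, not code from our submission.

```python
# Hedged sketch: ship JPEG frames over TCP so a Raspberry Pi can
# offload inference to a more powerful machine on the same network.
import socket
import struct
import cv2
import numpy as np

def send_frame(sock, frame):
    """Compress a frame to JPEG and send it with a 4-byte length prefix."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if ok:
        data = jpeg.tobytes()
        sock.sendall(struct.pack(">I", len(data)) + data)

def recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def recv_frame(sock):
    """Receive one length-prefixed JPEG frame and decode it."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    payload = recv_exact(sock, length)
    return cv2.imdecode(np.frombuffer(payload, np.uint8), cv2.IMREAD_COLOR)
```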

Accomplishments that we're proud of

We are extremely proud of building an effective neural network. After many attempts and several hours spent creating valid data sets, we produced a highly effective computer vision algorithm capable of deploying our delivery robot. Secondly, we are very proud of our ability to come up with an alternative robot localization method. Team morale was low as we struggled to figure out how to create our autonomous robot, but by staying motivated and thinking creatively we accomplished what we set out to do with the resources available during the hackathon.

What we learned

Our team learned an exceptional amount in this limited time frame. Creating effective neural networks stands out: we learned not only how to build one, but also how they tend to behave and respond to various training inputs. We are also excited to have explored the world of computer vision and become familiar with OpenCV. This knowledge will open tons of doors for us and lead to some very exciting projects in the future.

What's next for Fully Autonomous AED Emergency Response System

Moving forward, our team would like to continue training our machine-learning algorithm to recognize cardiac arrest in more situations. Supplying larger data sets to the convolutional neural network will make it better at recognizing individuals who need an AED across a variety of environments. We would also like to improve the autonomous vehicle's drivetrain by upgrading the encoders, and to improve its computer vision and obstacle detection so it can operate in more challenging environments.
