Inspiration

We were inspired by the state of our national healthcare system. Over the past year, when Covid-19 was rampant, there were periods when manpower at the healthcare frontline was stretched thin. As Singapore moves towards becoming a smart nation, using artificial intelligence to assist our frontline workers piqued our interest as a project.

What it does

Our solution to obstacle detection combines existing technologies, ultrasonic distance sensors and semantic segmentation, to provide information about the robot's environment, and deploys it using cloud technologies.
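The fusion idea can be sketched in a few lines: the ultrasonic sensor reports how far the nearest object is, while the segmentation mask reports what occupies the robot's path. The class IDs, thresholds, and the centre-strip heuristic below are illustrative assumptions, not our final design.

```python
# Hypothetical sketch: fusing an ultrasonic distance reading with a
# semantic-segmentation mask to flag obstacles ahead of the robot.
import numpy as np

OBSTACLE_CLASSES = {1, 2, 3}   # e.g. person, vehicle, furniture (assumed IDs)
DANGER_DISTANCE_CM = 50.0      # assumed stopping threshold

def is_obstacle_ahead(seg_mask: np.ndarray, ultrasonic_cm: float) -> bool:
    """The sensor says *how far*; the mask says *what* is in the way."""
    h, w = seg_mask.shape
    # Look only at the lower-centre strip of the frame (the robot's path).
    centre = seg_mask[h // 2 :, w // 3 : 2 * w // 3]
    blocked = np.isin(centre, list(OBSTACLE_CLASSES)).mean() > 0.2
    return bool(blocked and ultrasonic_cm < DANGER_DISTANCE_CM)
```

Requiring both signals to agree reduces false stops: a distant object in view, or a close reading with a clear path, does not halt the robot on its own.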

How we built it

We used many technologies provided by AWS: SageMaker for data labelling, EC2 for model training and deployment, Lambda for event triggers and automation, S3 for storage, and more. We also tested different types of computer vision in Python to understand the environment with vision, including YOLOv5 models for object detection, Mask R-CNN models trained on COCO for instance segmentation, and DeepLabV3 for semantic segmentation.

Challenges we ran into

This was not an easy challenge, as our team was unfamiliar with object detection. We started off with research into machine learning, where we faced our first hurdle: since we could not test all the algorithms ourselves, we needed to understand and identify a suitable algorithm from many articles. Another challenge was making our solution unique. With a defined scope, coming up with ideas that stand out among others is not easy, so we discussed and researched features we could include.

Accomplishments that we're proud of

First of all, we are proud of what we have achieved. In limited time, we managed to research and plan out the AWS architecture for our solution. Since this required an understanding of AWS services, it took us a while to finalise our idea. Although we were initially unsure about the direction we wanted to take, the project turned out well, as each of us contributed and completed what we planned.

What we learned

From all the research we have done, we are now equipped with knowledge of object detection and AWS infrastructure. We learned how object detection models are trained, the differences between the segmentation algorithms, and the details of existing object detection models. In addition, since we utilised AWS infrastructure to build our solution, researching the AWS services expanded our knowledge of them; examples of services we now understand better include Amazon SageMaker, EC2, and Elastic Load Balancing.

What's next for Ultrasonic Sensor & Semantic-Segmentation Object Detection

We will continue to build on and improve our solution, starting with a proof of concept for our ideas. This will help us earn people's trust and give us a basic prototype for testing. We will also seek advice from professionals; by accepting their criticism and recommendations, we can deliver a better thought-out solution.
