Inspiration

Space exploration is becoming increasingly important to the future of technology, and much of it relies on satellites. A satellite's tip is a critical reference point when docking with a station such as the ISS. Docking, however, requires heavy supervision, is prone to human error, and is difficult overall. A single mistake can cause serious damage, and the process demands far more manpower than it should.

What it does

My project automatically detects the tip of a satellite in images. Even with my limited compute, the final object detection model processed 87 images in about 3 seconds, a throughput of almost 30 frames per second! That is effectively real-time video, which means this approach could play a real role in actual satellite docking.
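For context, the throughput figure comes from simply timing inference over the test frames. The sketch below shows roughly how that measurement could be reproduced; the checkpoint name (`best.pt`) and the `frames/` folder are placeholders, not the project's actual files.

```python
import glob
import time

import torch

# Load a fine-tuned YOLOv5 checkpoint via torch.hub (placeholder path).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

frames = glob.glob("frames/*.jpg")   # e.g. the held-out test frames

start = time.time()
for frame in frames:
    model(frame)                     # YOLOv5's wrapper accepts image paths directly
elapsed = time.time() - start

print(f"{len(frames)} frames in {elapsed:.1f}s -> {len(frames) / elapsed:.1f} FPS")
```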

How we built it

I used Roboflow to build my dataset: annotating every image by hand, applying augmentations, and exporting the data in YOLOv5 format. For the model, I used Ultralytics' pre-trained YOLOv5 architecture (built on PyTorch), and alongside it I implemented my own YOLOv3 model in PyTorch, following Aladdin Persson's YOLOv3-from-scratch tutorial on YouTube.
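As a rough illustration of the Roboflow step, the snippet below uses Roboflow's Python package to pull an annotated dataset down in YOLOv5 format; the API key, workspace, project name, and version number are placeholders rather than the ones used in this project.

```python
from roboflow import Roboflow

# Placeholders throughout: substitute your own key, workspace, project, and version.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("satellite-tip")
dataset = project.version(1).download("yolov5")  # writes images/, labels/, and data.yaml

print(dataset.location)  # local folder to point YOLOv5's training script at
```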

Challenges I ran into

I ran into multiple challenges during this project, starting with the fact that I deliberately picked an object detection task I had never attempted before. The first challenge was finding a dataset. A simple Google search wasn't enough, and even with deep digging I couldn't find anything, so I resorted to reaching out to individuals and universities until I finally got my hands on a dataset of frames from a mock satellite test flight. The second challenge was annotating the images. I knew I had to do it by hand but didn't know what software to use; my computer is relatively slow and can't handle heavy annotation software, and after some searching I settled on Roboflow. I still had to annotate over 1,000 images, which took hours, but I persisted. Lastly, I trained a YOLOv3 model from scratch and wasn't getting the results I wanted. I sank a lot of time into that approach before accepting it had failed, then looked for alternatives and found the pre-trained YOLOv5 model, which gave me much better results.

Accomplishments that I'm proud of

I'm proud that I accomplished what I set out to do: I built an object detection model that can detect satellite tips in real time. I'm also proud that I tackled a real-world problem with my solution.

What we learned

I learned a lot about how object detection models work and how to adapt them for my own use case. I also learned how to search effectively for the data I need: hours of Googling won't always help; sometimes you have to reach out directly to find the dataset you're after. Another thing I learned was how to build an object detection dataset that can be cleanly exported and consumed by models. Lastly, and most importantly, I learned that deep learning can be applied to all kinds of problems.

What's next for Satellite Tip Detection

I believe the next steps for the Satellite Tip Detection model are to find more varied data, train the model for longer, and speed up inference. Once that is done, I either need to build a UI or reach out to someone more experienced in satellite docking for advice. A minimal UI could look something like the sketch below.
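For the UI idea, one lightweight option (not something already built in this project) would be a small Gradio demo that runs the trained checkpoint on an uploaded image; the model path and interface details below are assumptions.

```python
import gradio as gr
import torch

# Assumed checkpoint name; load the fine-tuned YOLOv5 weights via torch.hub.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

def detect_tip(image):
    # Gradio passes the uploaded image in as a numpy array (H, W, 3).
    results = model(image)
    return results.render()[0]  # same image with predicted boxes drawn on it

demo = gr.Interface(fn=detect_tip, inputs=gr.Image(), outputs=gr.Image(),
                    title="Satellite Tip Detection")
demo.launch()
```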

Built With

python, pytorch, roboflow, yolov3, yolov5