Inspiration

Our main inspiration was the rising prominence of drones in delivery. Drone delivery is being used or developed for numerous applications, from Amazon's delivery program to drones that carry medical equipment to facilities that are hard to reach quickly. The project also grew out of two ideas our group was initially torn between: an astronaut helmet with a full 360-degree camera display, or obstacle detection software that helps pilots avoid collisions. We compromised between the two, which is why our design combines a camera display that can rotate 360 degrees with a collision detection machine learning model. The broad rise of AI across industry prompted us to incorporate machine learning into the system. The overarching goal of the machine learning, which could be realized given more time beyond this hackathon, is for the camera-mounted system to identify aerial obstacles and obstructions, capture images of them, and gradually train the drone network on what to avoid.

What it does

The main purpose of CAVIDS is to detect objects and determine whether they will collide with the vehicle it is mounted on. We originally planned to use a single 360-degree camera for detection, but the only camera we had access to lacked 360-degree capability. Instead, we mounted our camera on a servo so it could still view every angle. Because a rotating camera cannot watch every direction at once, we also mounted ultrasonic sensors at multiple angles beneath it. The ultrasonic sensors detect when an object comes within a set range of the device, and the camera then rotates in that direction to view it. Once the camera is focused on the object, our machine learning model identifies it, estimates its relative position, and determines whether it is moving closer to the camera. The vehicle can use this information to decide whether to change its trajectory to avoid a collision, improving autonomous capability in dense environments.
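To make that handoff concrete, here is a minimal Python sketch of the sense-then-look loop, assuming a hypothetical serial protocol in which the Arduino streams lines like "2:87" (sensor index and distance in cm) and accepts "A<angle>" commands for the servo. The message format, port name, and angle mapping are illustrative assumptions, not our actual firmware:

    # Sketch of the host-side sense-then-look loop (hypothetical protocol:
    # the Arduino prints "sensor_index:distance_cm" lines and accepts "A<angle>\n").
    import serial

    TRIGGER_CM = 100                                # ultrasonic detection range
    SENSOR_ANGLES = {0: 0, 1: 90, 2: 180, 3: 270}   # sensor index -> camera bearing

    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as link:
        while True:
            line = link.readline().decode(errors="ignore").strip()
            idx, _, dist = line.partition(":")
            try:
                if float(dist) < TRIGGER_CM:
                    angle = SENSOR_ANGLES[int(idx)]
                    link.write(f"A{angle}\n".encode())  # point the camera at the object
            except (ValueError, KeyError):
                continue                                # skip malformed or unknown lines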

How we built it

To build our device, we used an Arduino UNO, an Arducam 5MP camera module, four ultrasonic sensors with a set range of 100 cm, a servo motor, the necessary jumper cables, and a 3D-printed casing to house the parts. The primary casing holds the four ultrasonic sensors facing in four directions, with the servo motor mounted between them and the camera sitting on top of the servo in a small 3D-printed adapter. To attach the camera to the servo, we 3D printed the adapter, secured the camera to it with zip ties, and glued the adapter to the servo. The servo, Raspberry Pi, Arduino, and power source were all zip-tied to the case, and the Arduino was wired to each component. For the machine learning model, we used the Ultralytics package in the Python terminal to run YOLOv5, an existing model that can identify many objects, and we added data for potential obstacles such as trees, power lines, and birds. For the image detection software, the camera runs an Ultralytics YOLOv8 program through OpenCV to identify images.
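For reference, a minimal YOLOv8-through-OpenCV detection loop might look like the sketch below. The stock "yolov8n.pt" checkpoint, the camera index, and the window name are stand-ins rather than our exact script:

    # Minimal YOLOv8 + OpenCV webcam loop (sketch; stock "yolov8n.pt" weights assumed).
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")     # pretrained Ultralytics YOLOv8 model
    cap = cv2.VideoCapture(0)      # the camera appears as an ordinary webcam here

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)[0]
        for box in results.boxes:                  # one entry per detected object
            label = model.names[int(box.cls)]
            conf = float(box.conf)
            print(f"{label}: {conf:.2f}")          # terminal-readable detection output
        cv2.imshow("CAVIDS", results.plot())       # frame annotated with bounding boxes
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()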

Challenges we ran into

We faced many challenges developing CAVIDS. The first was not having a 360-degree camera, which would have simplified the rotational design significantly; instead, the limitation pushed us toward a more creative approach. Another major issue was constant trouble getting the camera fully functional. With a better camera, or the correct cord to connect it to our devices, we could have built the fully fleshed-out project that met the mission goals we established early on. Lacking those resources, we could not operate the camera and Raspberry Pi 4 as we had hoped. We initially had a complete Raspberry Pi system running with a mouse, keyboard, and monitor, and the intent was to host a machine learning workload on it that used a trained dataset of planes and trees to extend YOLOv8's existing capabilities. However, due to the camera restrictions and the difficulty of connecting the camera to the Raspberry Pi, we had to abandon the Raspberry Pi architecture and take a much simpler approach, hosting the system on a laptop.

Accomplishments that we're proud of

We are proud of two main accomplishments. First, we developed a 360-degree camera system that can rotate and capture its surroundings when necessary; despite the limitations imposed on us, we pushed past our hardware boundaries with an innovative workaround. Second, we achieved successful, albeit inconsistent, object detection with YOLO and OpenCV. This took careful work at the command line, and our efforts produced an easy-to-read output of the objects the camera detects, suitable for hypothetical data collection and aggregation into an ML model.
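As a sketch of how that terminal output could feed such hypothetical data collection, detections might simply be appended to a CSV for later aggregation. The log_detections helper and file name below are our own illustration, not part of the project:

    # Hypothetical logging of detections to CSV for later dataset aggregation.
    import csv
    import time

    def log_detections(results, model, path="detections.csv"):
        """Append one row per detected object: timestamp, label, confidence."""
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            for box in results.boxes:
                writer.writerow([time.time(),
                                 model.names[int(box.cls)],
                                 round(float(box.conf), 3)])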

What we learned

We learned how to incorporate YOLOv8 and OpenCV into a camera pipeline. We also learned how to drive a servo and ultrasonic sensors from an Arduino UNO microcontroller.

What's next for Camera Automated Visual Integrated Display System (CAVIDS)

For CAVIDS, we want to upgrade the existing version with a few modifications that will make the device better and more convenient to use. First, we want to replace the ultrasonic sensors and servo-rotated camera with a stationary 360-degree camera, which would be both more accurate and more compact. On the topic of space, we would also like to repackage the device into a smaller footprint so it can be mounted on smaller vehicles, like drones. Additionally, we would like to make the machine learning model more accurate in order to increase CAVIDS's ability to help aerial vehicles avoid collisions.

