Inspiration

Bicycle riders sharing the road with vehicles are inherently at greater risk of injury than drivers. With all the sensors outfitted on modern consumer automobiles for safety, we thought bicycles deserved the same consideration and features.

What it does

Much like modern automobiles, YOLObike uses a camera and LIDAR to scan the area behind the bicycle. A computer vision model running on the camera feed identifies vehicles, and the LIDAR measures how far away each detected vehicle is. You can think of it in polar coordinates: the origin sits just behind the seat of the bike (where our sensors are mounted), the camera provides theta, and the LIDAR provides r. Using theta and r, LEDs on the handlebars (the UI) indicate the position of detected objects to the rider. For demonstration purposes indoors, YOLObike reports the positions of people as well as vehicles, since humans are easier to maneuver around indoors :)
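To make the mapping concrete, here is a minimal sketch (not our actual firmware; the LED count, sweep range, and distance thresholds are made-up values) of how a detection's theta and r could be turned into an LED indication:

```python
# Minimal sketch: map a detection's polar coordinates to a handlebar LED.
# All constants below are assumptions for illustration, not YOLObike's values.
NUM_LEDS = 8               # hypothetical number of indicator LEDs
SWEEP_DEG = 120.0          # hypothetical angular range scanned behind the bike
NEAR_M, FAR_M = 2.0, 10.0  # hypothetical distance thresholds in meters

def led_for_detection(theta_deg, r_m):
    """Return (led_index, brightness) for an object at angle theta and range r."""
    if r_m > FAR_M:
        return None  # too far away to report
    # The angle selects which LED lights up.
    frac = min(max(theta_deg / SWEEP_DEG, 0.0), 1.0)
    led = int(frac * (NUM_LEDS - 1))
    # The distance selects how urgently it is displayed.
    brightness = 255 if r_m < NEAR_M else 128
    return led, brightness

print(led_for_detection(60.0, 3.5))  # -> (3, 128) with the assumed constants
```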

How we built it

To process camera input, we use a pre-trained single-shot detector designed to identify both pedestrians and vehicles. The program runs on a Raspberry Pi with an Intel Neural Compute Stick (NCS) to increase the frame rate. Coordinates of any detected objects are sent over a serial connection to an Arduino on the handlebars of the bike. In parallel, the Arduino sweeps the motor that the LIDAR and camera are mounted on and continuously reads the LIDAR. By combining the motor angle with the coordinates of detected objects, the Arduino lights up the appropriate LEDs on the UI.
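As a rough illustration of the Pi-side loop (not the exact code we ran; the model files, serial port, and message format here are assumptions), a MobileNet-SSD can be offloaded to the NCS through OpenCV's DNN module and its detections forwarded to the Arduino over serial:

```python
# Sketch of the Pi-side loop: run a pre-trained MobileNet-SSD on the NCS and
# send each detection's horizontal position to the Arduino over serial.
import cv2
import serial

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)   # offload to the NCS

ser = serial.Serial("/dev/ttyACM0", 115200)           # link to the Arduino
cap = cv2.VideoCapture(0)

PERSON, CAR = 15, 7   # MobileNet-SSD (VOC) class ids for person and car

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    for i in range(detections.shape[2]):
        conf = detections[0, 0, i, 2]
        cls = int(detections[0, 0, i, 1])
        if conf > 0.5 and cls in (PERSON, CAR):
            # Center of the bounding box, normalized 0..1 across the frame;
            # this is the "theta" the Arduino combines with the motor angle.
            x_center = (detections[0, 0, i, 3] + detections[0, 0, i, 5]) / 2.0
            ser.write(b"%d\n" % int(x_center * 100))
```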

YOLObike is completely battery-powered and broadcasts its own Wi-Fi network, which lets the rider view the live video stream. It is attached to the bike with hot glue and zip ties (no duct tape!). Enclosures and attachments are 3D printed.
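One plausible way to serve that stream over the Pi's Wi-Fi access point (an illustrative sketch, not necessarily how we implemented it) is a minimal Flask app returning MJPEG:

```python
# Illustrative sketch: serve the camera feed as an MJPEG stream over Wi-Fi.
import cv2
from flask import Flask, Response

app = Flask(__name__)
cap = cv2.VideoCapture(0)

def frames():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        _, jpg = cv2.imencode(".jpg", frame)
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
               + jpg.tobytes() + b"\r\n")

@app.route("/stream")
def stream():
    return Response(frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```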

Challenges we ran into

An inordinate amount of time was spent attempting to install the software needed to train our own vision networks for the NCS. For instance, the Raspberry Pi didn't have enough RAM to run the compiler, even with a portion of the SD card allocated as swap space, and Thomas' personal Ubuntu machine had dependency conflicts with packages he needs for coursework and didn't want to uninstall. In the end we compromised and used pre-trained networks we found online. As a result, we couldn't optimize those networks for our specific camera resolution or the Raspberry Pi CPU, limiting our frame rate to less than half of what others claim to have achieved with more system-specific tuning.

A major bottleneck we couldn't overcome is that the Raspberry Pi has no USB 3 ports. The NCS is USB 3 capable but is throttled by the Pi's USB 2 ports. We suspect this is the primary source of latency in our system, which achieves about 5.5 FPS. The Arduino main loop and the LIDAR refresh faster than that but only receive camera updates at a maximum rate of 5.5 Hz. With more time, we could find or build a faster interface between the NCS and the Pi's unused Ethernet port or GPIO.

Between the motor and the neural network, YOLObike is relatively power-hungry, and the motor's acceleration and deceleration cause transient voltage spikes. This forced us to use two separate battery banks so that the logic voltage stayed steady regardless of motor current demand. With more time, we would have added regulation circuitry so a single higher-capacity battery could power everything.

Accomplishments that we're proud of

We are proud of the frame rate we achieved despite lacking access to most of the development tools Intel provides for the NCS. We literally unboxed the NCS the day before HopHacks began and didn't start installing software until the hackathon was underway. Without the NCS, users online report about 0.5 FPS running object detection networks on the Raspberry Pi CPU alone. Frankly, we were caught unprepared to use the NCS under hackathon time constraints.

Despite this, YOLObike is quite robust and flexible. It is straightforward to change the motor sweep range to cover the rider's preferred zones and blind spots. The computer vision is completely abstracted away from the Arduino, so the system's behavior can be changed on the Arduino alone without touching the Raspberry Pi. YOLObike also carries extra hardware that can be enabled or ignored according to the rider's preference: a buzzer that can sound when a vehicle is detected in certain zones, and a sensor that measures the angle between the handlebars and the frame. It would therefore be easy to adjust the area YOLObike observes based on the anticipated trajectory of the bike, as sketched below.
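A hypothetical sketch of that trajectory-based adjustment (written in Python for readability, though the real logic would live on the Arduino; every constant here is made up):

```python
# Hypothetical sketch: bias the sensor sweep toward the bike's anticipated
# path using the handlebar-angle sensor. All constants are assumptions.
SWEEP_HALF_WIDTH = 60.0   # degrees swept to each side of the sweep center
MAX_STEER_BIAS = 30.0     # how far the sweep center may shift

def sweep_limits(handlebar_angle_deg):
    """Return (min_angle, max_angle) for the motor sweep, shifted toward
    the direction the rider is steering."""
    bias = max(-MAX_STEER_BIAS, min(MAX_STEER_BIAS, handlebar_angle_deg))
    center = 90.0 + bias   # 90 deg = straight back behind the bike
    return center - SWEEP_HALF_WIDTH, center + SWEEP_HALF_WIDTH

print(sweep_limits(15.0))  # -> (45.0, 165.0), shifted toward the turn
```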

Finally, we feel very lucky that Travis' homemade 3D printer didn't fail on any of its prints. It was extruding for almost the entire hackathon, and a failed print might have cost us a sensor mount or part of the UI.

What we learned

Whatever can go wrong will go wrong, especially when working with hardware. In addition to the unexpected problems mentioned above, a few highlights: the webcam isn't natively compatible with Linux, the Raspberry Pi began thermal throttling, and sometimes your hardware is so new that the forum posts from people with your problem are two days old and don't have answers yet. We also had some close calls - we used every pin on the Arduino and had only one spare USB port.

What's next for YOLObike

One easy improvement would be a camera with a wider field of view. That way we could capture the entire area behind the bike and rotate only the LIDAR as necessary. As previously mentioned, there is also plenty of room for improvement in the computer vision.

A weakness of YOLObike is that the UI is fixed to the handlebars, so it may not always be in the rider's view. The buzzer helps here, but a better solution is wireless integration with the rider's helmet. LEDs placed in the rider's peripheral vision would mean they never need to look down to receive information from YOLObike. This would certainly be a fun challenge for future development.
