Inspiration

We're inspired first and foremost by the technology itself. This year was a special opportunity because we had access to a depth camera, and we challenged ourselves to push it as far as we possibly could, in a complex and meaningful way. On top of that, the way we interact with the technology is through math! Getting to employ some of our more theoretical skills in this project is exciting and so rewarding.

What it does

We are building a holistic aid system for visually impaired people. At its core is a multi-functional object detection system using computer vision and a depth camera. By default it spots obstacles in a user's path, but it can be prompted to recognize any object class in the COCO dataset. When a nearby obstacle or desired object is spotted, that information is relayed back to the user through our haptic feedback system. Peripherals include an emergency alert pack: when our IMU detects rapid deceleration, our microcontroller becomes a BLE beacon that can connect to any phone within roughly a 30-50 meter radius.
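
For a sense of how that trigger works, here is a minimal Python sketch of the deceleration check. The real logic runs in the Arduino firmware; the threshold value here is a hypothetical stand-in, not our tuned setting.

```python
import math

GRAVITY_G = 1.0        # resting accelerometer magnitude, in g
SPIKE_THRESHOLD = 2.5  # hypothetical trigger level, in g

def is_rapid_deceleration(ax: float, ay: float, az: float) -> bool:
    """Flag a sudden spike in acceleration magnitude well above gravity."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - GRAVITY_G) > SPIKE_THRESHOLD

print(is_rapid_deceleration(0.1, 0.2, 4.2))  # True: a hard jolt
```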

How we built it

  • Used OpenCV, Ultralytics YOLO, stereo camera triangulation, and dot-map projections for computer vision
  • Created a proprietary Python script for the object detection and avoidance systems, fully integrated with our Arduino script (see the sketch after this list)
  • Built in a visualization of the user's orientation in space relative to the desired object, with OpenCV (cv2) arrows mapped to integer values that act as directional input
  • The Python side sends directional data over the serial bus to the Arduino Nano BLE Sense Rev2, which drives a custom motor driver and two motors
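
A condensed sketch of how those pieces can fit together, assuming a stock YOLOv8 weights file, a RealSense camera accessed via pyrealsense2, and a serial port name and direction codes that are stand-ins for our actual values:

```python
import numpy as np
import pyrealsense2 as rs
import serial
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                   # weights file: an assumption
ser = serial.Serial("/dev/ttyUSB0", 115200)  # port name: an assumption

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

LEFT, CENTER, RIGHT = 0, 1, 2  # hypothetical direction codes

try:
    while True:
        frames = pipeline.wait_for_frames()
        color = np.asanyarray(frames.get_color_frame().get_data())
        depth = frames.get_depth_frame()

        for box in model(color, verbose=False)[0].boxes:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
            dist = depth.get_distance(cx, cy)  # meters at the box center
            # Map the box center to a coarse direction and send one byte.
            direction = LEFT if cx < 213 else RIGHT if cx > 426 else CENTER
            ser.write(bytes([direction]))
finally:
    pipeline.stop()
```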

Challenges we ran into

  • Data loss when transferring microphone audio over the serial bus (see the packet-framing sketch after this list)
  • Python version compatibility issues between OpenCV and the Jetson Nano
  • Overheating in the Jetson Nano
  • The Jetson Nano.
  • Not enough computing power to run a working Whisper speech-to-text converter alongside the object recognition software
  • Missing or underpowered basic hardware components, such as weak DC motors
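
One standard mitigation for that serial loss, sketched below (the sync byte and XOR checksum scheme are illustrative, not necessarily what we shipped): frame each payload with a sync byte, a length, and a checksum so the receiver can detect and drop corrupted frames instead of consuming garbage.

```python
import struct

SYNC = 0xAA  # hypothetical start-of-frame byte

def frame_packet(payload: bytes) -> bytes:
    """Wrap a payload (up to 255 bytes) with sync, length, and XOR checksum."""
    checksum = 0
    for b in payload:
        checksum ^= b
    return struct.pack("BB", SYNC, len(payload)) + payload + bytes([checksum])

def parse_packet(frame: bytes):
    """Return the payload if the frame validates, else None."""
    if len(frame) < 3 or frame[0] != SYNC or len(frame) != frame[1] + 3:
        return None
    payload, checksum = frame[2:-1], frame[-1]
    calc = 0
    for b in payload:
        calc ^= b
    return payload if calc == checksum else None

frame = frame_packet(b"\x01\x02\x03")
print(parse_packet(frame))                      # b'\x01\x02\x03'
print(parse_packet(frame[:-1] + b"\xff"))       # None: corrupted checksum
```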

Accomplishments that we're proud of

  • Creative workarounds for our missing hardware, e.g. ripping the motor out of a servo to use in our haptic feedback system
  • Decoding vector maps from camera telemetry to tie the object recognition to the RealSense depth camera (see the depth sketch after this list)
  • Real-time processing of IMU and microphone data
  • Solving the logistical issues that come with the many inference workloads in our project: computer vision, speech-to-text, hardware control, etc.
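
The depth side of that decoding reduces to reading the distance at a detected pixel and deprojecting it into a 3D point in the camera frame. A minimal pyrealsense2 sketch, with hypothetical pixel coordinates standing in for a detector's output:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    u, v = 320, 240  # hypothetical pixel where a detector found an object
    dist = depth_frame.get_distance(u, v)  # meters
    # Deproject the pixel into a 3D point in the camera frame.
    intrin = depth_frame.profile.as_video_stream_profile().intrinsics
    x, y, z = rs.rs2_deproject_pixel_to_point(intrin, [u, v], dist)
    print(f"object at ({x:.2f}, {y:.2f}, {z:.2f}) m")
finally:
    pipeline.stop()
```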

What we learned

  • Signal processing skills, especially working with noisy data (see the smoothing sketch after this list)
  • Basics of packetization
  • Soldering best practices
  • Navigating the OpenCV libraries
  • Jetson Nano SDKs
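
As one example of the noisy-data work, a fixed-window moving average is about the simplest low-pass filter available; the window size and sample values below are hypothetical.

```python
from collections import deque

class MovingAverage:
    """Fixed-window moving average: a simple low-pass filter for
    smoothing noisy IMU or microphone samples."""

    def __init__(self, window: int = 8):
        self.samples = deque(maxlen=window)

    def update(self, value: float) -> float:
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

smoother = MovingAverage(window=4)
for raw in (0.98, 1.35, 0.91, 1.02, 3.80, 1.01):  # hypothetical readings in g
    print(round(smoother.update(raw), 3))  # the 3.80 spike is damped
```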

What's next for SightSync

More features! The sky's the limit for this one. Because the technology is so broad and so modular, we can go in any direction we want and build for whoever would be served best. We could further develop the walking assist, build out the emergency response system, or keep training the object recognition model.
