Inspiration
We wanted to build a project that combined hardware and software in a meaningful way. Using QNX on a Raspberry Pi stood out as a unique opportunity to explore embedded systems and vision-based processing together.
What it does
The system captures movement with a camera attached to a QNX-based Raspberry Pi, uploads the recorded video to a server, and analyzes it there to detect and track the movement in the footage.
How we built it
We used a Raspberry Pi 5 running QNX to handle camera capture, and a Windows server to process the uploaded video with OpenCV and MediaPipe. The two systems communicate over HTTP, forming a distributed vision pipeline: the Pi records, the server analyzes.
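The hand-off between the two machines can be sketched as a small HTTP exchange. This is a minimal, self-contained illustration, not our actual code: the `/upload` endpoint name, the port choice, and the stand-in payload are assumptions, and the real OpenCV/MediaPipe analysis is stubbed out with a comment. Python's standard library is enough to show the shape of the pipeline.

```python
# Sketch of the Pi -> server video hand-off over HTTP.
# Assumptions: endpoint "/upload", an in-memory list standing in for the
# server's clip store, and a fake byte string standing in for a recorded clip.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # clips the server has accepted (real server would write to disk)

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/upload":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        clip = self.rfile.read(length)
        # A real server would save the clip and run OpenCV/MediaPipe on it here.
        received.append(clip)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request console logging
        pass

# Server side: bind an ephemeral port and handle one request in the background.
server = HTTPServer(("127.0.0.1", 0), UploadHandler)
port = server.server_address[1]
worker = threading.Thread(target=server.handle_request)
worker.start()

# Pi side: POST the recorded clip (here a stand-in payload) to the server.
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/upload",
    data=b"fake-h264-clip",
    headers={"Content-Type": "application/octet-stream"},
)
with urllib.request.urlopen(req) as resp:
    response_body = resp.read().decode()

worker.join()
server.server_close()
print(response_body, len(received))
```

On QNX the client side would be a native program rather than Python, but the protocol it speaks is this simple: one POST per recorded clip, with the analysis living entirely on the server.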
Challenges we ran into
We faced challenges setting up QNX, configuring the Raspberry Pi for the first time, working through dependency and compiler issues, and managing development across different operating systems. We also had to pivot from our original idea late in the hackathon due to tooling limitations.
Accomplishments that we are proud of
We successfully set up the camera on QNX, established reliable communication between systems, and achieved consistent detection from recorded visual data.
What we learned
We learned how to work with QNX, configure embedded hardware, manage cross-platform development, and adapt quickly when technical constraints force design changes.
What’s next for PiVision
We plan to expand toward full-body detection and to move from offline frame analysis to real-time visual processing.
