Join us as we blur the lines between performers and audience members, and enhance your concert experience by making the visuals respond to you.
What it does
Dynamically changes visuals based on audience motion.
How I built it
This project was built with OpenCV 3 and Processing. The pipeline has three steps: reading in a video stream, analyzing the video to produce input parameters for the visuals, and rendering the visuals.
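The three-stage pipeline can be sketched roughly as below. This is an illustrative, dependency-free sketch, not the project's actual code: the function names, the fake frames, and the mean-absolute-difference motion metric are all hypothetical stand-ins for the real OpenCV capture/analysis code and the Processing renderer.

```python
# Hypothetical sketch of the three-stage pipeline:
# 1. read a video stream, 2. analyze frames into visual parameters, 3. render.

def read_stream(num_frames=5):
    """Stand-in for a camera capture loop (OpenCV would supply real frames)."""
    for i in range(num_frames):
        # Fake 4-pixel grayscale "frames" whose brightness drifts over time.
        yield [i * 10, i * 10 + 1, i * 10 + 2, i * 10 + 3]

def analyze(prev_frame, frame):
    """Reduce a frame pair to one motion parameter: mean absolute pixel change."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
    return sum(diffs) / len(diffs)

def render(motion):
    """Stand-in for the Processing sketch; maps motion to a visual intensity."""
    return f"visual intensity: {motion:.1f}"

prev = None
outputs = []
for frame in read_stream():
    if prev is not None:
        outputs.append(render(analyze(prev, frame)))
    prev = frame
print(outputs[-1])  # prints "visual intensity: 10.0"
```

In the real system each stage runs continuously rather than in a single loop, which is where the latency issues described below come from.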
Challenges I ran into
A major challenge was latency between the pipeline steps: when the code runs, the video stream is read in much faster than it can be processed. To address this, we used a producer-consumer model to parallelize the image processing; some latency remains, though, and faster machines would definitely have helped.
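The producer-consumer split can be sketched with Python's standard `threading` and `queue` modules. This is a minimal illustration, not the project's implementation; the key idea is the *bounded* queue, which makes the fast reader block rather than buffer an ever-growing backlog ahead of the slower analysis stage.

```python
import threading
import queue

frame_queue = queue.Queue(maxsize=8)  # bounded: producer blocks when full,
                                      # capping how far it can run ahead
SENTINEL = None
results = []

def producer(num_frames=20):
    """Fast stage: reads frames (here just indices) and enqueues them."""
    for i in range(num_frames):
        frame_queue.put(i)        # blocks if the queue is full
    frame_queue.put(SENTINEL)     # signal end of stream

def consumer():
    """Slower stage: dequeues frames and produces visual parameters."""
    while True:
        frame = frame_queue.get()
        if frame is SENTINEL:
            break
        results.append(frame * 2)  # stand-in for real image processing

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(len(results))  # prints 20
```

With a single consumer the frame order is preserved; adding more consumer threads would trade ordering guarantees for throughput.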
Accomplishments that I'm proud of
We stayed up all night coding, which was pretty impressive. Our overall design process, and the way we iterated and adapted when we hit roadblocks, went well.
What I learned
We came in with very little knowledge of Processing and OpenCV and are leaving with so many new ideas to implement in the future using these tools.
What's next for croWDSuRF
Catch us at Coachella!