Inspiration
We're big motorsport fans, especially Formula 1 and WEC. In those worlds, a gap of just 0.1 s can decide victory or defeat, and that fraction of a second can come from a tiny visual imperfection: a loose panel, a worn tire, or a crack that no one noticed in time. That made us think: if precision matters that much on the track, why not everywhere else? So we built Viscope, an AI engine that detects even the smallest visual differences, because sometimes what changes by a pixel can change everything.
What it does
Viscope is an AI-powered visual difference engine that:
- Detects, classifies, and quantifies visual changes across time-series images. It can analyze frames from cameras, production lines, or inspection systems and highlight what changed, where it changed, and how much it changed. Using a combination of image alignment, difference mapping, and deep-learning classification, Viscope produces a heatmap overlay, a change-percentage score, and a semantic label describing the detected variation.
Mathematically, we define the change score \(C\) as $$ C = \frac{N_{\text{changed}}}{N_{\text{total}}} \times 100\% $$ where \(N_{\text{changed}}\) is the number of pixels flagged by the thresholded difference map and \(N_{\text{total}}\) is the total pixel count.
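The change score above can be sketched in a few lines of NumPy. This is a minimal illustration, not Viscope's actual pipeline: the `threshold` value of 25 is an arbitrary example, and a real system would compute the difference map after alignment and denoising.

```python
import numpy as np

def change_score(before: np.ndarray, after: np.ndarray, threshold: int = 25) -> float:
    """Percentage of pixels whose absolute grayscale difference exceeds `threshold`."""
    diff = np.abs(before.astype(np.int16) - after.astype(np.int16))
    changed = diff > threshold
    return 100.0 * changed.sum() / changed.size

# Toy 4x4 frames where one quadrant changes
before = np.zeros((4, 4), dtype=np.uint8)
after = before.copy()
after[:2, :2] = 200                 # 4 of 16 pixels change
print(change_score(before, after))  # → 25.0
```

The cast to `int16` before subtracting avoids the unsigned-integer wraparound that `uint8` arithmetic would otherwise produce.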
How we will build it
- Python + OpenCV for image preprocessing, registration, and pixel difference computation.
- PyTorch for CNN-based classification of visual change types.
- FastAPI backend for inference serving and API management.
- React + Tailwind for an interactive dashboard visualizing results in real time.
- SQLite for storing time-series image data and computed change metrics.
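As a sketch of the storage layer, here is one way the SQLite piece could hold per-frame change metrics using Python's built-in `sqlite3` module. The table name, columns, and sample row are hypothetical, chosen to mirror the outputs described above (change score plus semantic label).

```python
import sqlite3

# Hypothetical schema for per-frame change metrics (illustrative, not Viscope's actual schema)
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE change_events (
        id INTEGER PRIMARY KEY,
        captured_at TEXT NOT NULL,    -- ISO-8601 timestamp of the analyzed frame
        change_score REAL NOT NULL,   -- percentage from the change-score formula
        label TEXT                    -- semantic class, e.g. 'crack'
    )
""")
conn.execute(
    "INSERT INTO change_events (captured_at, change_score, label) VALUES (?, ?, ?)",
    ("2025-01-01T12:00:00Z", 3.7, "crack"),
)
rows = conn.execute("SELECT label, change_score FROM change_events").fetchall()
print(rows)  # → [('crack', 3.7)]
```

Storing scalar metrics in SQLite while keeping the raw frames on disk keeps the database small and the dashboard queries fast.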
Challenges we may run into
- Aligning images taken from slightly different angles or lighting conditions.
- Reducing false positives from reflections, dust, or camera noise.
- Optimizing inference time to deliver real-time comparisons.
- Creating a clear UI that communicates change severity without overwhelming users.
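The first challenge, alignment, can be approached in several ways; one simple technique (a sketch, not necessarily what Viscope will use) is phase correlation, which recovers an integer translation between two frames from the normalized cross-power spectrum of their Fourier transforms. This only handles pure translation; angle or perspective differences would need feature-based registration instead.

```python
import numpy as np

def estimate_shift(ref: np.ndarray, moved: np.ndarray) -> tuple:
    """Estimate the integer (row, col) translation to apply to `moved`
    (via np.roll) so it realigns with `ref`, using phase correlation."""
    F1 = np.fft.fft2(ref.astype(float))
    F2 = np.fft.fft2(moved.astype(float))
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-9          # keep only the phase difference
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into the signed range [-N/2, N/2]
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# A small square shifted down 3 rows and left 2 columns
ref = np.zeros((32, 32)); ref[10:14, 10:14] = 1.0
moved = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
print(estimate_shift(ref, moved))  # → (-3, 2)
```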
Accomplishments that we're proud of
- Achieving high-accuracy visual change detection on noisy, real-world image sets.
- Designing a dashboard that makes complex AI outputs intuitive for non-technical users.
- Building an adaptable engine that can serve across domains, from motorsport analysis to industrial inspection.
What we learned
We learned that visual difference detection is not just about pixels; it's about perception. Teaching an AI to understand which changes matter and which are noise required deep experimentation with thresholds, alignment algorithms, and illumination models. We also discovered the importance of real-world context in training: perfect lab data means nothing without messy reality.
What's next for Viscope
- Expanding Viscope to live video stream analysis instead of static frames.
- Integrating edge deployment using ONNX for drones or track-side cameras.
- Extending applications to manufacturing, infrastructure health, and environmental monitoring.
- Long-term vision: a universal visual audit layer that continuously monitors the world for meaningful change, from racetracks to cityscapes.