IRiS | Hack The Hill - CEED MAKERCON CHALLENGE
Image Recognition Integrated Signalling - Colour Detection and Dictation for the Visually Impaired.
Using machine learning and the innovative minds of our team at Hack The Hill, we designed accessibility hardware and software that describes colours in detail from live video footage.
Our Mission
Our senses can be restricted, our perspectives limited by uncontrollable, indiscriminate chance. Many people are denied the chance to experience the world with ease and in full. The goal of IRiS - of all innovation in accessibility - is to give people a better chance at having their perspectives realized in full. Alongside CEED at Hack The Hill 2023, IRiS is working towards building technologies and hardware with a trajectory towards better accessibility.
The Architecture
IRiS utilizes convolutional neural network (CNN) based object detection, paired with an SVM classifier architecture for colour-to-text segmentation. Our SCLERA (Sight Capture Live Encoding Raspberry Aperture) is built on a Raspberry Pi microcomputer that streams continuous video over IP from a camera node to our server. The raw livestream from SCLERA is then preprocessed for colour correction and frame data. The Convolutional Object Retentive Narrative Evaluation Architecture, or CORNEA, takes over from SCLERA and focuses on a particular object in the frame. Once focused, CORNEA takes an instance of the focused object, as detected by our CNN, and passes it to a support vector machine for colour classification.
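The SCLERA-to-server link above sends continuous frames over IP. One common way to do this is to length-prefix each encoded frame so the receiver can split the byte stream back into frames; the sketch below shows that framing idea in plain Python. This is an illustrative assumption, not the project's actual wire protocol, and the function names (`pack_frame`, `unpack_frames`) are hypothetical.

```python
import struct

def pack_frame(jpeg_bytes: bytes) -> bytes:
    """Prefix an encoded frame with its 4-byte big-endian length."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frames(stream: bytes):
    """Yield each frame payload back out of a concatenated byte stream."""
    offset = 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        yield stream[offset:offset + length]
        offset += length
```

In a real deployment the sender would write `pack_frame(...)` onto a TCP socket for each captured frame, and the server would buffer incoming bytes and strip off complete frames as they arrive.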
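The final step above maps a detected object's pixels to a colour name. The project uses a trained SVM for this; as a simplified stand-in that needs no training data, the sketch below averages the crop's pixels and picks the nearest entry in a small reference palette. The palette and function names here are illustrative assumptions, not the system's actual classifier.

```python
import math

# Hypothetical reference palette; the real system learns colour
# boundaries with a support vector machine instead.
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "black": (0, 0, 0),
    "white": (255, 255, 255),
}

def dominant_colour(pixels):
    """Average RGB over a crop given as a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def name_colour(rgb):
    """Nearest-centroid colour naming: a stand-in for the SVM step."""
    return min(PALETTE, key=lambda name: math.dist(PALETTE[name], rgb))
```

A detected object's crop would flow through `dominant_colour` and then `name_colour`, and the resulting name would be handed to the dictation stage.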
CHECK OUT THE GITHUB: github/IRIS