SIGN DETECTION
An in-progress project to take video and identify ASL letters. For now, each webcam frame is run through a Gaussian blur, Canny edge detection, and contour extraction to pick out the longer edges, and the result is fed into TensorFlow's Inception v3 to identify the letter. Inception v3 was partially retrained on sign language pictures from a variety of sources, processed the same way as the video feed. The model only detects static signs, meaning the letters of the alphabet minus J and Z, which involve movement. It was trained on 3,000+ reference images of signs made by different people at different angles, lighting conditions, and zoom levels.
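Below is a minimal sketch of that pipeline: Gaussian blur, Canny edge detection, and contour filtering on a single webcam frame, followed by classification with a retrained Inception v3 graph. It assumes OpenCV 4.x and TensorFlow 1.x; the filter size, Canny thresholds, contour-length cutoff, and the graph/label file names and tensor names (taken from TensorFlow's image retraining example) are illustrative assumptions, not the project's exact values.

```python
import cv2
import numpy as np
import tensorflow as tf

def preprocess(frame):
    """Blur, edge-detect, and keep only the longer contours in a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # Gaussian smoothing
    edges = cv2.Canny(blurred, 50, 150)                    # Canny edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only the longer edges, as described above (cutoff is a guess)
    long_contours = [c for c in contours
                     if cv2.arcLength(c, closed=False) > 100]
    canvas = np.zeros_like(gray)
    cv2.drawContours(canvas, long_contours, -1, 255, thickness=2)
    return canvas

def classify(image_path, graph_path="output_graph.pb",
             labels_path="output_labels.txt"):
    """Run a processed image through a retrained Inception v3 graph
    produced by TensorFlow's image retraining example (TF 1.x API)."""
    labels = [line.strip() for line in open(labels_path)]
    with tf.gfile.GFile(graph_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")
    image_data = tf.gfile.GFile(image_path, "rb").read()
    with tf.Session() as sess:
        softmax = sess.graph.get_tensor_by_name("final_result:0")
        preds = sess.run(softmax, {"DecodeJpeg/contents:0": image_data})[0]
    return labels[int(np.argmax(preds))]

if __name__ == "__main__":
    # Grab one frame from the webcam, preprocess it, and classify the result.
    cap = cv2.VideoCapture(0)
    ret, frame = cap.read()
    cap.release()
    if ret:
        cv2.imwrite("frame_edges.jpg", preprocess(frame))
        print(classify("frame_edges.jpg"))
```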
This is still very non-functional and essentially not demo-able, but at least I got something to show for this weekend.