Inspiration
Learning made fun and interactive.
What it does
Autocompletes doodles from your strokes as vector graphics: once you start drawing, the model suggests possible ways to finish the sketch.
How we built it
Backend (Python, Node.js, OpenCV, RNN)
- The backend captures frames from the default webcam and applies color thresholding to detect a pointer.
- It then uses contour detection to identify and draw a circle around the most prominent contour.
- The center of the contour is used to track the movement of the pointer, and the detected points are stored in a deque. These are then sent to the front end to plot.
- The program terminates after several consecutive frames without detecting a pointer.

Frontend (React, p5.js, ml5.js, TensorFlow.js)
- The code includes functionality to clear the canvas and draw lines of different colors.
- Interactive UI that helps the user draw the remaining strokes to complete a doodle.

Challenges we ran into
- Sending continuous streams of data from Python to Node.js.
- Integrating p5.js, ml5.js, and React.
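The backend loop described under "How we built it" (points stored in a deque, termination after consecutive empty frames, results sent to the front end) can be sketched roughly as below. The `track` and `to_ndjson` names are illustrative, the miss limit and deque size are made-up values, and newline-delimited JSON over stdout is just one plausible way to bridge Python and a Node.js process:

```python
import json
from collections import deque

def track(centers, miss_limit=15, maxlen=512):
    """Collect pointer centers until miss_limit consecutive frames have none.

    `centers` stands in for the per-frame detection result: an (x, y)
    tuple when the color-thresholded contour was found, or None when
    no pointer was detected in that frame.
    """
    points = deque(maxlen=maxlen)  # bounded history of tracked positions
    misses = 0
    for center in centers:
        if center is None:
            misses += 1
            if misses >= miss_limit:  # terminate after too many empty frames
                break
        else:
            misses = 0
            points.append(center)
    return list(points)

def to_ndjson(points):
    """Serialize points as newline-delimited JSON, one record per line,
    which a Node.js parent process could read from the child's stdout."""
    return "\n".join(json.dumps({"x": x, "y": y}) for x, y in points)
```

A Node.js side would then split the child's stdout on newlines and `JSON.parse` each record before forwarding it to the React canvas.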
Accomplishments that we're proud of
- Interactive UI.
- Isolating a particular color's HSL range to capture pointer movement and strokes.
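The color-isolation idea above can be sketched with the standard library's `colorsys` module. The hue band and the saturation/lightness floors below are illustrative placeholders, not the project's actual calibration, and the real pipeline would apply this per-pixel test (or an equivalent OpenCV range threshold) across whole frames:

```python
import colorsys

def matches_marker(rgb, hue_range=(0.55, 0.72), min_sat=0.4, min_light=0.2):
    """Return True if an RGB pixel (0-255 ints) falls inside the HSL band
    used to isolate the colored pointer. The blue-ish hue_range and the
    saturation/lightness floors are made-up example values."""
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)  # note the H, L, S ordering
    lo, hi = hue_range
    return lo <= h <= hi and s >= min_sat and l >= min_light
```

With these example values, a saturated blue pixel passes while red or gray pixels fail, which is the property the tracker relies on to tell the pointer apart from the background.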
What we learned
- Development life cycle
- Integrating various components of frontend and backend
- RNN
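The RNN bullet above can be unpacked with a toy recurrence. A stroke-completion model (such as the SketchRNN-style model behind ml5.js) learns its weights from data; the pure-Python step below only illustrates the core update h' = tanh(W_xh·x + W_hh·h + b) with hand-picked toy matrices:

```python
import math

def rnn_step(x, h, W_xh, W_hh, b):
    """One Elman-style recurrent step over plain Python lists.

    x is the current input (e.g. a pen-offset vector), h the previous
    hidden state; the returned list is the new hidden state. All weights
    here are toy values, not anything the project trained."""
    return [
        math.tanh(
            sum(W_xh[i][j] * x[j] for j in range(len(x)))
            + sum(W_hh[i][k] * h[k] for k in range(len(h)))
            + b[i]
        )
        for i in range(len(h))
    ]
```

Feeding each new stroke point through such a step, and decoding the hidden state into predicted pen offsets, is the general shape of how a recurrent model continues a doodle.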
What's next for Doodle with AI
- Coloring doodles.
- Image-to-image translation using Stable Diffusion.