Sketches and 3D models act as a visual language, one of the ways we communicate with people. Through these mediums we interact with the world and convey our ideas and concepts. Sketching is a basic way to express and enhance one's creativity, and 3D models help in thorough understanding of concepts through well-elaborated explanation. However, it is difficult for people, especially children, to draw good, realistic pictures and to model complex objects in 3D. This project describes DrawN, an interactive interface for developing and improving drawing skills and for retrieving corresponding 3D objects based on sketch matching. Starting from a naive sketch or just an outline, a child can design a space with 3D objects with the help of self-intuition and machine intelligence.

What it does

The system involves input, output, and computational steps to interpret a user's sketch and retrieve a 3D object with compelling interactivity. DrawN is based on the approach of comparing the user's cognitive/intuitive query sketch with preprocessed line-rendered views of the 3D objects in the database using HOG feature descriptors. With improvements, DrawN can be used as an assistive device in the education domain, in design processes, for quick 3D prototyping and explanation, and as an information retrieval system based on abstract sketching, with drawing as the medium of interaction.

How I built it

DrawN consists of computational steps and a user interface.

The main computational steps are:

  1. Database construction. We select the best suitable viewpoints and 2D images of each object, generate a line rendering (line drawing) by applying Canny edge detection to the views, and then apply the HOG descriptor to represent each image in the database as a feature descriptor.
  2. Query matching. The abstract user sketch is represented by a HOG feature descriptor, which is compared with each line rendering's feature descriptor to compute a similarity index using cosine distance.
  3. Retrieval. The top 10 matches are selected by maximum similarity index, and the 3D model is retrieved from the database by selecting one of the line renderings among the top 10 best matches.

The sketching interface of DrawN uses a standard mouse or tablet stylus/pen as its physical interface:

  1. The user starts by drawing freeform strokes on a blank canvas. After finishing the abstract sketch, the user can choose to find the best matches to the initial sketch and select the desired line drawing or 3D model from the database.
  2. Alternatively, the user can select a line drawing among the best-match results and trace its shadow to practice and improve the sketch. After tracing, the user can opt to retrieve the 3D model. The process is interactive: from the initial retrieval list, the user can choose a relevant image to further refine the query, and can iterate until the desired image appears.
  3. DrawN also provides a gaming experience, improving sketching through user engagement: after completing the freeform sketch and tracing, the user can redraw the learned sketch on a blank canvas and check for a similarity score out of 100.
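The matching pipeline above can be sketched as follows. This is a minimal, from-scratch HOG-style descriptor with cosine-similarity ranking; DrawN's actual descriptor parameters are not specified here, and the names `hog_descriptor`, `cosine_similarity`, and `top_matches` are illustrative.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of unsigned gradient
    orientation, weighted by gradient magnitude, L2-normalized."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-8)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def top_matches(query_desc, db_descs, k=10):
    """Rank database line renderings by similarity to the query sketch."""
    scores = [(name, cosine_similarity(query_desc, d))
              for name, d in db_descs.items()]
    return sorted(scores, key=lambda s: -s[1])[:k]
```

The same cosine score, scaled to 0-100, could also drive the redraw-and-score game described in the interface steps.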

Challenges I ran into

There are three major components in a sketch-based 3D model retrieval system, with variation depending on the feature descriptors used:

  I) Feature extraction – represent the abstract user sketch as a geometric feature descriptor.
  II) Image database and feature storage – create a database of 3D models, line drawings derived from the models, and images encoded with the feature descriptor, and store them.
  III) Similarity measure – measure the difference between the query sketch and the database images.
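Component II's line-drawing generation can be sketched with a simple Sobel-based edge map. This is a stand-in for the Canny detector that DrawN applies to the rendered views; the function name and threshold below are illustrative.

```python
import numpy as np

def line_rendering(img, thresh=0.25):
    """Approximate line drawing of a grayscale view: threshold the
    normalized Sobel gradient magnitude (simplified stand-in for Canny)."""
    img = img.astype(float) / 255.0
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # Sobel kernels for horizontal and vertical gradients

    def conv(a, k):
        h, w = a.shape
        out = np.zeros_like(a)
        pad = np.pad(a, 1, mode="edge")
        for i in range(3):
            for j in range(3):
                out += k[i, j] * pad[i:i + h, j:j + w]
        return out

    mag = np.hypot(conv(img, kx), conv(img, ky))
    return (mag / (mag.max() + 1e-8) > thresh).astype(np.uint8) * 255
```

In DrawN these binary edge images are what get encoded by the HOG descriptor and stored in the database.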

Accomplishments that I'm proud of

  1. Built an application in Python using computer vision and graphics libraries
  2. A working interface that runs on any system with the required dependencies installed
  3. Conducted user interviews before design and tested with a user study after building a prototype
  4. Presented the project at the India HCI international conference.

What I learned

  1. Techniques and methods in computer vision and computer graphics
  2. Application programming in Python
  3. System modeling and the human-computer interaction process

What's next for DrawN

  1. Sketch classification and improvement based on machine learning using deep neural networks
  2. Enhancing the GUI and improving the use cases

Cognitive abilities of children vary widely across age groups, and this is directly reflected in their sketching abilities; children aged 7-16 years would be the appropriate group for experiments. The goal is to develop an application that assists users to draw in a more natural way, provides guidance while drawing, and enables them to create 3D models from mere simple 2D drawings. The future scope extends to storytelling, with the objective of aiding education and enhancing the ideation and visualization capabilities of children.
