Augmented reality and spatial computing are the future of computing. Let's make content creation for them as easy as drawing on paper.

What if you could design usable user interfaces for the real world on a napkin? (Except napkins sold out during COVID-19, so we're using scratch paper!)

What it does

It makes creating useful AR content as easy as writing on a napkin...

How I built it / How it works

The flow is simple: send in an image of a wireframe and get back HTML!

Sample input data (an image of a wireframe UI sketched on a napkin, captured by the mobile AR app)

Output data (raw HTML response from the SageMaker mphasis Autocode model)

View HTML @

Output data (JSON from Rekognition DetectText)

View JSON @

Combined results = sanitized HTML
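The writeup doesn't show the sanitizing step itself, so here is a minimal sketch of one way to do it with Python's standard library; the tag allowlist is an assumption for illustration, not the project's actual list:

```python
from html.parser import HTMLParser

# Assumed allowlist -- the project's real set of permitted tags isn't documented.
ALLOWED_TAGS = {"div", "p", "h1", "h2", "button", "input", "label", "ul", "li", "br"}

class Sanitizer(HTMLParser):
    """Rebuild the document keeping only allowlisted tags (attributes dropped),
    and skip the contents of <script>/<style> entirely."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip_depth = 0   # > 0 while inside a script/style element

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1
        elif tag in ALLOWED_TAGS and self.skip_depth == 0:
            self.out.append(f"<{tag}>")   # attributes dropped for safety

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip_depth = max(0, self.skip_depth - 1)
        elif tag in ALLOWED_TAGS and self.skip_depth == 0:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self.skip_depth == 0:
            self.out.append(data)

def sanitize(html: str) -> str:
    s = Sanitizer()
    s.feed(html)
    return "".join(s.out)
```

A strict allowlist like this is the safer default when the HTML comes back from a model you don't control.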

Note: I also tried matching the top/left bounding boxes in the Rekognition results to the mphasis results, but I'm not sure how mphasis calculates its absolute UI coordinates, so I ended up just assuming the order of elements matches when assigning text.
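The order-of-match assignment could be sketched like this; the detection dicts mirror Rekognition's DetectText response shape, but the regex text-node matcher is a simplifying assumption, not the project's actual code:

```python
import re

def assign_text_in_order(html: str, detections: list) -> str:
    """Replace each text node in the generated HTML with the next Rekognition
    LINE detection, sorted top-to-bottom then left-to-right (reading order)."""
    lines = [d for d in detections if d.get("Type") == "LINE"]
    lines.sort(key=lambda d: (d["Geometry"]["BoundingBox"]["Top"],
                              d["Geometry"]["BoundingBox"]["Left"]))
    texts = (d["DetectedText"] for d in lines)

    def repl(m):
        # Fall back to the original placeholder if detections run out.
        return ">" + next(texts, m.group(1)) + "<"

    # Naive text-node matcher: any non-empty run between '>' and '<'.
    return re.sub(r">([^<>]+)<", repl, html)
```

Sorting by the bounding-box `Top` then `Left` is what makes "order of match" mean reading order rather than Rekognition's arbitrary response order.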

Combined results in AR!

See the video and pictures above!

See the log for more sample data!

Architecture diagram


  1. Mobile AR app launches create mode.
  2. User takes a photo of their hand-drawn UI.
  3. App sends the photo via form submit to a gateway, where it is processed:

    A. SageMaker mphasis Autocode converts the sketch to HTML.

    B. Rekognition DetectText extracts the handwritten text as JSON.

    C. The results are combined, processed, and logged to S3.

  4. The response to the form submission is the sanitized HTML.

  5. Mobile AR app renders the HTML in AR on top of the paper!
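Steps 3A–3C could be sketched with boto3 roughly as follows; the endpoint name, S3 key, and structure are placeholders for illustration, since the real gateway code isn't shown in this writeup:

```python
def process_sketch(image_bytes: bytes, log_bucket: str) -> str:
    """Steps 3A-3C: wireframe photo in, raw HTML out, combined results logged.

    The endpoint name and S3 key below are hypothetical placeholders.
    """
    import json    # imported lazily so the sketch loads without AWS configured
    import boto3

    sm_runtime = boto3.client("sagemaker-runtime")
    rekognition = boto3.client("rekognition")
    s3 = boto3.client("s3")

    # 3A: mphasis Autocode, deployed as a SageMaker endpoint -> raw HTML
    html = sm_runtime.invoke_endpoint(
        EndpointName="mphasis-autocode",      # hypothetical endpoint name
        ContentType="application/x-image",
        Body=image_bytes,
    )["Body"].read().decode("utf-8")

    # 3B: Rekognition DetectText -> JSON (detected strings + bounding boxes)
    detections = rekognition.detect_text(
        Image={"Bytes": image_bytes})["TextDetections"]

    # 3C: combine both results and log them to S3 for later inspection
    s3.put_object(
        Bucket=log_bucket,
        Key="logs/latest.json",               # hypothetical key
        Body=json.dumps({"html": html,
                         "detections": detections}).encode("utf-8"),
    )
    return html
```

Calling this requires AWS credentials and a deployed Autocode endpoint; it is a shape-of-the-pipeline sketch, not the project's deployed gateway.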

Challenges I ran into

Accomplishments that I'm proud of

What I learned

What's next for DrawUI

Built With
