Inspiration

Augmented Reality and Spatial Computing are undeniably the future of computing. Let's make content creation as easy as drawing on paper.

What if you could design usable user interfaces for the real world on a napkin? (Except they sold out of napkins during COVID-19, so we are using scratch paper!)

What it does

It makes creating useful AR content as easy as writing on a napkin...

How I built it / How it works

Basically, the flow is: send an image of a wireframe and get back HTML!
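
As a concrete (hypothetical) example, the round trip from the client's side might look like the sketch below; the gateway URL and form field name are placeholders, not the app's real endpoint:

```python
import requests

GATEWAY_URL = "https://example.com/drawui/process"  # placeholder endpoint

# Send the wireframe photo as a multipart form upload, the way the AR app
# submits its form, and read the sanitized HTML back from the response body.
with open("napkin_sketch.jpg", "rb") as f:
    response = requests.post(GATEWAY_URL, files={"photo": f})

response.raise_for_status()
html = response.text  # sanitized HTML, ready to render in AR
print(html)
```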

Sample Input Data (an image of a wireframe UI sketched on a napkin, captured from the mobile AR app)

https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/033/479/datas/gallery.jpg

Output Data (raw HTML response from SageMaker Mphasis Autocode)

View HTML @ https://gist.github.com/yosun/ec975b3c7438384858038b4ead5c0f24

Output Data (JSON from Rekognition)

View JSON @ https://gist.github.com/yosun/3bec4f7e0e75958322d23711d2638e1d
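
For context on that JSON's shape: DetectText returns a TextDetections list containing both LINE and WORD entries, each with the detected string and a relative bounding box. A quick boto3 sketch for pulling out just the lines:

```python
import boto3

rekognition = boto3.client("rekognition")

with open("napkin_sketch.jpg", "rb") as f:
    result = rekognition.detect_text(Image={"Bytes": f.read()})

# Keep only LINE detections (DetectText also returns the individual WORDs).
# BoundingBox values (Top/Left/Width/Height) are ratios of the image
# dimensions, not pixels.
lines = [
    (d["DetectedText"], d["Geometry"]["BoundingBox"])
    for d in result["TextDetections"]
    if d["Type"] == "LINE"
]
for text, box in lines:
    print(text, box["Left"], box["Top"])
```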

Combined results = sanitized HTML

Note: I also tried matching the (x, y) and (top, left) bounding boxes in the Rekognition results to the Mphasis results, but I am not sure how they calculate their absolute UI coordinates, so I ended up just assuming the elements match in order when assigning text.
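
Here is a sketch of that order-of-match assignment. The `{TEXT}` placeholder token and the helper name are illustrative assumptions, not the actual Autocode output format:

```python
import re

def assign_text_in_order(mphasis_html: str, rekognition_lines: list[str]) -> str:
    """Replace each text slot in the generated HTML with the next detected
    text line, in the order both services returned them.

    Assumes the generated markup contains a literal {TEXT} token per slot;
    the real Autocode output may mark slots differently.
    """
    slots = iter(rekognition_lines)
    # Each match consumes the next detected line; leftover slots keep the token.
    return re.sub(r"\{TEXT\}", lambda m: next(slots, m.group(0)), mphasis_html)

html = assign_text_in_order("<button>{TEXT}</button><p>{TEXT}</p>",
                            ["Submit", "Hello world"])
# -> "<button>Submit</button><p>Hello world</p>"
```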

https://areality3d.com/drawui/i/66

Combined results in AR!

See vid and pic above!

See the log for more sample data!

https://areality3d.com/drawui/datadump

Architecture diagram


  1. Mobile AR app launches create mode.
  2. User takes a photo of their hand-drawn UI.
  3. App sends the photo via form submit to a gateway, where it is processed by (see the sketch after this list):

    A. SageMaker Mphasis Autocode, which converts the sketch to HTML.

    B. Rekognition DetectText, which extracts the handwritten text as JSON.

    C. The combined results are processed and logged to S3.

  4. The response from the processed form is sanitized HTML.

  5. Mobile AR app renders the HTML in AR on top of the paper!
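
Putting steps 3A–3C together, the gateway's job might look roughly like the sketch below. This is a boto3 sketch, not the actual gateway code: the endpoint name, content type, and bucket are placeholders, `sanitize` is a bare-minimum stand-in, and `assign_text_in_order` is the helper from the earlier sketch.

```python
import re
import time
import boto3

sm_runtime = boto3.client("sagemaker-runtime")
rekognition = boto3.client("rekognition")
s3 = boto3.client("s3")

def sanitize(html: str) -> str:
    # Stand-in for the real sanitizer: strip <script> blocks as a bare minimum.
    return re.sub(r"(?is)<script.*?</script>", "", html)

def process_sketch(image_bytes: bytes) -> str:
    # 3A. Mphasis Autocode on SageMaker: wireframe image in, raw HTML out.
    sm_response = sm_runtime.invoke_endpoint(
        EndpointName="mphasis-autocode",    # placeholder endpoint name
        ContentType="application/x-image",  # assumed content type
        Body=image_bytes,
    )
    raw_html = sm_response["Body"].read().decode("utf-8")

    # 3B. Rekognition DetectText: same image in, detected text lines out.
    detections = rekognition.detect_text(Image={"Bytes": image_bytes})
    lines = [d["DetectedText"] for d in detections["TextDetections"]
             if d["Type"] == "LINE"]

    # 3C. Combine both results (order-of-match, as described above),
    # sanitize, and log the run to S3.
    html = sanitize(assign_text_in_order(raw_html, lines))
    s3.put_object(Bucket="drawui-logs",     # placeholder bucket name
                  Key=f"runs/{int(time.time())}.html",
                  Body=html.encode("utf-8"))

    # 4./5. The form response is this sanitized HTML, which the AR app renders.
    return html
```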

Challenges I ran into

Accomplishments that I'm proud of

What I learned

What's next for DrawUI

Built With
