Augmented Reality and Spatial Computing are undeniably the future of computing. Let's make content creation as easy as drawing on paper.
What if you could design usable user interfaces for the real world on a napkin! (Except they sold out of napkins during COVID-19, so we are using scratch paper!)
What it does
It makes creating useful AR content as easy as writing on a napkin...
How I built it / How it works
Basically, the flow goes like this: send an image of a wireframe and get back HTML!
Sample Input Data (image of a wireframe UI sketched on a napkin, captured from the mobile AR app)
Output Data (raw HTML response from the SageMaker Mphasis Autocode model)
Output Data (JSON from Rekognition DetectText)
Combined results = sanitized html
Note: I also tried matching the x,y / top,left bounding boxes in the Rekognition results against the Mphasis results, but I'm not sure how Mphasis calculates its absolute UI coordinates, so I ended up assigning text by match order instead.
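The order-of-match idea above can be sketched as follows. This is a hypothetical illustration, not the project's actual code: it assumes the generated HTML marks its text slots with a `{{TEXT}}` placeholder (an invented convention), and fills them in document order with Rekognition's LINE-level detections, which Rekognition returns in the real `TextDetections` response shape.

```python
import re

def merge_text_by_order(html, rekognition_json):
    """Assign Rekognition LINE detections to text slots in the generated
    HTML by order of appearance (sketch; the {{TEXT}} placeholder is a
    hypothetical stand-in for wherever Mphasis Autocode emits text)."""
    # Keep only LINE-level detections, in the order Rekognition returns them
    # (WORD-level entries duplicate the same text piecemeal).
    lines = [d["DetectedText"]
             for d in rekognition_json["TextDetections"]
             if d["Type"] == "LINE"]
    texts = iter(lines)

    # Replace each placeholder, in order, with the next detected line;
    # leave the placeholder untouched if we run out of detections.
    def fill(match):
        return next(texts, match.group(0))

    return re.sub(r"\{\{TEXT\}\}", fill, html)
```

With two placeholders and two LINE detections, the first detection lands in the first slot and the second in the second, which is exactly the "assume order of match" fallback described above.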
Combined results in AR!
See the video and pictures above!
See the log for more sample data!
- Mobile AR app launches create mode.
- User takes a photo of their hand-drawn UI.
- The app sends the photo via form submit to a gateway, where it is processed:
  A. SageMaker Mphasis Autocode converts the sketch to HTML.
  B. Rekognition DetectText extracts the handwritten text as JSON.
  C. The results are combined, processed, and logged to S3.
- The response to the processed form is sanitized HTML.
- The mobile AR app renders the HTML in AR on top of the paper!
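The gateway-side steps A-C above can be sketched like this. It's a minimal outline, not the real handler: the service calls are passed in as plain callables (in production these would wrap the SageMaker runtime, Rekognition, and S3 clients from boto3), and `sanitize` is a trivial stand-in for the combine-and-clean step. All names here are illustrative assumptions.

```python
import json

def sanitize(raw_html):
    # Stand-in for the real combine/sanitize step, which also merges the
    # detected text into the markup; here we just trim whitespace.
    return raw_html.strip()

def process_wireframe_photo(image_bytes, autocode, detect_text, log_to_s3):
    """Gateway flow sketch: photo in, sanitized HTML out.

    autocode, detect_text, and log_to_s3 are injected callables standing
    in for the SageMaker Mphasis Autocode endpoint, Rekognition
    DetectText, and an S3 put, respectively (hypothetical names)."""
    # A. Wireframe image -> raw HTML.
    raw_html = autocode(image_bytes)
    # B. Same image -> detected-text JSON.
    text_json = detect_text(image_bytes)
    # C. Combine/sanitize, and log both raw results to S3.
    log_to_s3(json.dumps({"html": raw_html, "text": text_json}))
    return sanitize(raw_html)
```

Wiring in stub callables makes the flow easy to exercise locally before pointing it at the live AWS services.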