We wanted to make app programming accessible to anyone. We wanted to allow developers and designers to focus on what really matters. The product. We wanted a faster development cycle: no more long, code-heavy iteration loops. We were tired of porting apps from Android to iOS and vice versa. We wanted to save time and put the fun back into "iterative functional prototypes", so we created Prototypr.

What it does

Prototypr instantly transforms wireframe sketches of mobile application UI into a functional mobile app. This allows for rapid prototyping of various UI designs, so you can spend more time experimenting and iterating without writing any more boilerplate.

How I built it

The project consists of an Android app that takes pictures and uploads them to a Google Cloud Storage bucket. An application running on our computers fetches these images and sends them to a Python image-processing module powered by OpenCV. This module first detects rectangles in the image (these translate to containers in the final generated app). It then detects more complex UI elements such as buttons, image views, and text views by recognizing specific symbols in the sketch, and uses Tesseract to attempt OCR on labels.

The final output is a tree of UI elements described by a JSON object, which is passed to another module that translates it into React JSX templates and a React stylesheet. We chose React, the Facebook-backed modern frontend framework, for its ability to target all three major client platforms (web, Android, and iOS) using a single paradigm. The generated React file is then compiled and built into an Android APK and installed on an Android phone.

Challenges I ran into

One of the hardest tasks in developing this project was figuring out how to effectively process user photos into structured trees of objects. Even after deciding on OpenCV as our main driver, no one on the team had any experience with it, so we had to learn while doing. We also found our ambitions of true OCR for handwritten notes and the like hampered by the lack of easy-to-integrate solutions and by too little time to train a neural network that would fulfill our needs.

Additionally, none of the team members had experience developing React Native applications, and we spent a significant amount of time understanding the fundamental principles of React and figuring out how to build and deploy a React project as an Android application.

Accomplishments that I'm proud of

The team was able to utilize OpenCV to create an image-processing module that accurately detects UI elements in wireframe sketches, regardless of users' drawing abilities or the lighting conditions of the photo. We were also very happy with the seamlessness of the JSON-to-React-to-Android pipeline. For an SDK that came out a week ago, we were very pleased with how well we tied things together.

What I learned

The team learned both advanced image processing and the React development process and SDK. Since OpenCV is one of the largest open-source computer vision libraries in the world, and React is one of the leading multi-platform frontend frameworks, we were very happy to dive deep into both of these technologies. From a soft-skills perspective, we learned how to work quickly and iteratively in a small team while effectively splitting tasks, and that sleep deprivation is one of those problems that doesn't go away.

What's next for Prototypr

We plan to refine the image-processing logic to detect more UI elements, more accurately, and to implement a Caffe (LeNet) model for better OCR and handwriting recognition. We also want to generate more sophisticated and aesthetically pleasing UIs and give users more flexibility with design options. Finally, we will focus on supporting multiple screens, with control flows and linking between them.
