All of us are passionate about web development, and we wanted to close the gap between ideation and creation.
What it does
Users draw the layout of a website on a sheet of paper. The user's phone sits on top of the acrylic mount, where the phone camera records each iteration of the sketch in real time. Each capture is translated directly into a live website displayed on the user's computer.
How we built it
Mount We first designed a mount in SolidWorks and laser-cut an acrylic board, assembling the pieces together with acrylic glue.
Mobile App Using react-native and our smartphones, we built an app that intermittently takes photos and sends them directly to our Flask API (the mobile app is complete with a zoom feature ;) ).
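The core of the app is a capture-and-upload loop. The real implementation is react-native, but its shape can be sketched in Python; `capture` and `post` stand in for the camera API and the HTTP call, and the interval and iteration cap are illustrative assumptions, not values from the project.

```python
import time


def upload_loop(capture, post, interval_s=2.0, max_iterations=None):
    """Repeatedly capture a photo and send it to the API.

    capture() -> bytes of one JPEG frame (stands in for the phone camera).
    post(data) -> sends the frame to the backend (stands in for the HTTP POST).
    Runs forever unless max_iterations is given; returns the number of uploads.
    """
    n = 0
    while max_iterations is None or n < max_iterations:
        post(capture())          # one sketch iteration goes to the backend
        n += 1
        time.sleep(interval_s)   # wait before the next capture
    return n
```

Injecting `capture` and `post` keeps the loop testable without a camera or a network.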
Image and Text Processing Given an image of the paper website layout from the mobile app, we process it and isolate each of its components using OpenCV. We then differentiate between types of inputs (extracting text with Tesseract), and we extract properties of these elements that let us properly reformat them for conversion to HTML.
Web App The web application consists of two major players: the API for the backend and the front-end display of the current captured image and translated HTML. The backend contains the image and text processing functions, the HTTP request specifications for receiving the image as well as displaying the website, and the code that translates the image information into HTML.
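The backend's two request specifications can be sketched as a minimal Flask app: one route accepts a photo from the phone, the other serves the most recently translated HTML. The route names and the `translate` placeholder are assumptions for illustration, not the project's actual endpoints.

```python
from flask import Flask, request

app = Flask(__name__)
latest_html = "<html><body></body></html>"  # most recent translation


def translate(jpeg_bytes):
    # Placeholder: the real version runs the OpenCV/Tesseract pipeline
    # and emits HTML for each detected layout element.
    return "<html><body><p>translated</p></body></html>"


@app.route("/upload", methods=["POST"])
def upload():
    """Receive a sketch photo from the phone and re-translate it."""
    global latest_html
    photo = request.get_data()      # raw JPEG bytes from the mobile app
    latest_html = translate(photo)  # image -> HTML
    return "ok", 200


@app.route("/site")
def site():
    """Serve the latest translated page to the browser."""
    return latest_html
```

Polling `/site` from the front end would then refresh the displayed page after every upload.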
Challenges we ran into
Originally, we planned to connect a Raspberry Pi with a camera to take photos and process them. However, there was only one Raspberry Pi-compatible camera (which was already taken), so we switched to using our phones and developing a react-native app. UPenn uses a LAN instead of a WAN, which prevented Expo (which runs the react-native app) from communicating with the computer containing the React code. Due to a lack of materials, we also initially used tape to hold the mount together, and it fell apart many times.
Accomplishments that we're proud of
We are proud that we coherently integrated many different parts that we were inexperienced with.
What we learned
We learned Flask, OpenCV/image processing, and react-native.
What's next for u&i
We want to make the website dynamic, let users launch their personal websites, and increase the number of supported elements.