In our coursework at KU, our professors rely heavily on whiteboarding as an educational tool. Students get hands-on experience with thinking through and solving difficult problems. But the biggest disadvantage of whiteboarding is that the user cannot see exactly what their code will do without typing it into a machine.

What it does

There are two main portions of this hack:

1) The Hardware Hack: A smart whiteboard that compiles handwritten code into an executable file. Students use a makefile to compile their code, just like they would in a lab.

2) The Software Hack: A website for assigning, testing, and evaluating in-class programming exercises. A unique feature is that it allows a professor or interviewer to create a template with a few key lines missing, so that a student's whiteboard code can supply those lines (e.g., a professor can design a class, and a student can implement and test a specific method).
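A minimal sketch of how this template filling might work. The placeholder convention and function name here are illustrative, not necessarily how photoCode implements it:

```python
# Hypothetical sketch: a professor's template marks missing lines with a
# placeholder, and the lines recognized from a student's whiteboard are
# spliced in, in order, before the code is compiled and tested.
PLACEHOLDER = "# TODO_LINE"

def fill_template(template: str, student_lines: list) -> str:
    """Replace each placeholder line in the template with the next
    line submitted from the student's whiteboard."""
    lines = iter(student_lines)
    filled = []
    for line in template.splitlines():
        if line.strip() == PLACEHOLDER:
            filled.append(next(lines))
        else:
            filled.append(line)
    return "\n".join(filled)

template = "def square(x):\n# TODO_LINE"
print(fill_template(template, ["    return x * x"]))
```

Keeping the professor's scaffolding fixed and only substituting the student's lines is what lets a single method be implemented and tested in isolation.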

How we built it

Each of the tools we used contributes to a final product that we are proud of:

Google Cloud: Unification platform for cloud technologies.

Google Vision: Artificial intelligence used to detect text.

Google Compute Engine: High-performance, scalable virtual machines for a quick backend and web hosting.

Angular: Front-end web framework for our software hack.

Flask: Handled our web APIs.

Python: Used to build the website back end and hardware scripts.

DragonBoard 410c: The IoT device that allowed for classroom hardware integration.

OpenCV: Used for image processing with both the DragonBoard and Google Vision.

SQLite: Used to build the database for our website.

Challenges we ran into

1) Hardware Hack: The image is captured on the DragonBoard using Python 2 and processed using Python 3. To solve this, we kept the two programs separate and added both to a makefile to be executed in the terminal. The webcam produced low-quality images, which were processed inconsistently, and the webcam mount (pictures included in the slideshow) was unstable.
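A makefile along these lines can chain the two interpreters; the target and script names are illustrative, not our exact files:

```makefile
# Hypothetical makefile chaining the Python 2 capture step and the
# Python 3 processing step, run in order with a single `make`.
all: capture process

capture:
	python2 capture_image.py   # grab a frame from the webcam

process:
	python3 process_image.py   # clean the image and call Google Vision
```

Because make runs the prerequisites of `all` in sequence, the two incompatible interpreters never have to share a process.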

2) Software Hack: The biggest learning curve came while building the website: getting the front end and Flask to work together. Our team had little experience with Flask, and we spent a lot of time familiarizing ourselves with it. We also had to learn how to use Google Vision effectively. We found that we needed to turn our .jpg photos into scan-like images by increasing the contrast and brightness and filtering out noise, which made the text recognition more accurate and precise.
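The preprocessing amounts to an affine contrast/brightness adjustment plus simple noise suppression. A sketch with numpy, where the parameter values and the whitening threshold are illustrative (OpenCV's cv2.convertScaleAbs performs the same scale-and-offset on the device):

```python
import numpy as np

def preprocess(img: np.ndarray, alpha: float = 1.5, beta: int = 40) -> np.ndarray:
    """Boost contrast (alpha) and brightness (beta), then push
    near-white background pixels to pure white to suppress noise.
    alpha, beta, and the 200 threshold are illustrative values."""
    out = np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)
    out[out > 200] = 255  # flatten light speckle into the background
    return out

# A dim 2x2 grayscale patch comes out brighter and higher-contrast:
patch = np.array([[50, 60], [200, 220]], dtype=np.uint8)
print(preprocess(patch))
```

Flattening the background toward white is what makes a photographed whiteboard resemble a scanned document, which is the kind of input OCR handles best.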

Accomplishments that we're proud of

We are proud of creating an application that can contribute to the improvement of Computer Science education. As programming becomes more prevalent, photoCode has applications ranging from social good to streamlining the hiring process. We are also proud that we could implement technologies that we had no experience with. No one on the team had ever used Google Cloud, Artificial Intelligence APIs, Flask, Image Processing, the Linaro OS, or a DragonBoard.

What we learned

This implementation challenged both our hardware and software capabilities. On the hardware side, we had to learn how to capture photos using OpenCV and how to call Google APIs from the DragonBoard. On the software side, we learned a lot about web development, using Flask to implement the API functionality we needed.

What's next for photoCode

As AI and OCR algorithms improve, so will our product; the Google Vision API will continue to train itself and produce better results. For the time being, we need to improve reliability in both our hardware and software: on the hardware side by upgrading the webcam's resolution and stabilizing its mount, and on the software side by continuously improving the website's design and user experience. We would also like to support compilation and execution for a number of different languages. Our online compilation API, JDoodle, supports many languages, and implementing more of them would improve our product.
