What it does

This project is designed to be an easier, cheaper, and faster alternative to the modern smart board. SmartNote requires no installation beyond a video capture device and the whiteboard-projector setup most classrooms already have. As the professor projects the notes document (an example question or equation, etc.) onto the whiteboard and writes notes on it, the camera records video and sends it to our program along with the notes document. The program then places each set of whiteboard notes at the proper location within the document, matching where it appeared on the whiteboard.

How we built it

The entire project is built in three parts: the user interface, the graphical analysis, and the text document reading. The user interface is built with Python 3 and tkinter. For the graphical analysis, the OpenCV Python library is used to highlight the whiteboard marker and document text while ignoring all other elements; images are returned when OpenCV detects that a solution (or set of writing) has been completed on the board. The text in each image is then read using Microsoft's Azure Computer Vision API, and the original document is searched with this text to find where each image should be placed. All of the document work is done with python-docx, which writes the images into the document and saves the final edited copy, as sketched below.
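As a rough illustration of the document step, here is a minimal sketch assuming the OCR text and a captured whiteboard image already exist. The matching heuristic, function name, and file paths are hypothetical placeholders, not our exact implementation.

```python
# Minimal sketch: place a whiteboard snapshot into the .docx near the matching text.
# The word-overlap matching and all file paths below are illustrative only.
from docx import Document
from docx.shared import Inches


def insert_snapshot(doc_path, ocr_text, image_path, out_path):
    """Insert a whiteboard image after the paragraph that best matches the OCR output."""
    doc = Document(doc_path)

    # Naive matching: pick the first paragraph that shares words with the OCR text.
    ocr_words = set(ocr_text.lower().split())
    anchor = None
    for para in doc.paragraphs:
        if ocr_words & set(para.text.lower().split()):
            anchor = para
            break

    if anchor is None:
        # Fall back to appending the image at the end of the document.
        doc.add_picture(image_path, width=Inches(5))
    else:
        # Create an empty paragraph and move it directly after the anchor,
        # then drop the image into it.
        new_para = anchor.insert_paragraph_before("")
        anchor._p.addnext(new_para._p)
        new_para.add_run().add_picture(image_path, width=Inches(5))

    doc.save(out_path)


if __name__ == "__main__":
    insert_snapshot("lecture_notes.docx", "solve for x in 2x + 3 = 7",
                    "board_capture.png", "lecture_notes_annotated.docx")
```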

Challenges we ran into

One of the first challenges we ran into was that the whiteboard is made of glass and the projector produced a faint image, both of which hurt image quality. Matching the projector's refresh rate to the phone camera's refresh rate was another technical difficulty.

Accomplishments that we're proud of

Our color masking stayed accurate even with a poor projector and a glass whiteboard: we adjusted the contrast and saturation of the images to isolate the red whiteboard marker.
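A minimal sketch of that masking idea, assuming BGR frames from the capture device; the contrast gain, saturation boost, and HSV thresholds here are illustrative placeholders rather than the values we actually tuned.

```python
# Sketch: boost contrast/saturation, then mask red marker strokes in HSV.
import cv2
import numpy as np


def mask_red_marker(frame_bgr):
    """Return a binary mask of red marker strokes in a whiteboard frame."""
    # Boost contrast so the faint projected image washes out the ink less.
    boosted = cv2.convertScaleAbs(frame_bgr, alpha=1.4, beta=0)

    # Work in HSV and push saturation up so the red ink stands apart from glare.
    hsv = cv2.cvtColor(boosted, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    s = cv2.add(s, 40)  # saturating add, clips at 255
    hsv = cv2.merge((h, s, v))

    # Red wraps around hue 0, so combine two hue ranges.
    mask1 = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([10, 255, 255]))
    mask2 = cv2.inRange(hsv, np.array([170, 80, 60]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(mask1, mask2)

    # Remove speckle caused by reflections off the glass board.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```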

What we learned

Half of the team members were new to Python and began learning the language's syntax while integrating it with the GUI. In OpenCV, we learned about Haar cascades and video processing and how to apply these concepts to the project. All team members were new to Microsoft's Azure API tools. One main focus was learning not only the functions Azure provides through its API, but also how to send requests, format headers, and parse the response to gather the data important to our project.
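For illustration, here is a hedged sketch of that Azure request flow: build the headers, post the image bytes, and parse the JSON response. The endpoint and key are placeholders, and the response fields follow Azure's public OCR API documentation rather than our exact hackathon code.

```python
# Sketch of calling the Azure Computer Vision OCR endpoint with requests.
# ENDPOINT and SUBSCRIPTION_KEY are placeholders for your own resource values.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
SUBSCRIPTION_KEY = "<your-key>"                                    # placeholder


def ocr_image(image_path):
    """Send an image to the Azure OCR endpoint and return the recognized text lines."""
    url = ENDPOINT + "/vision/v3.2/ocr"
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    with open(image_path, "rb") as f:
        response = requests.post(url, headers=headers, data=f.read())
    response.raise_for_status()

    # Walk regions -> lines -> words in the JSON payload and rebuild plain text lines.
    result = response.json()
    lines = []
    for region in result.get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(word["text"] for word in line.get("words", [])))
    return lines
```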

Built With

Python 3, tkinter, OpenCV, Microsoft Azure Computer Vision API, python-docx