Inspiration

We live in a society dominated by technology, and as it gets cheaper to manufacture, it will only become more prevalent in our lives. A consequence of this is an increased need for tech-literate citizens. The main way we interface with technology today is through software, so educating today's youth to understand and create these programs is extremely important.

What it does

Our system allows learners to draw simple flowchart-like diagrams that are parsed with Computer Vision and converted into a Python script. With a minimal set of shapes and instructions that represent basic programming principles, users can create simple scripts that can be run from the Python interpreter. The goal is to let more tactile learners steadily transition into the Python ecosystem by introducing them to the basic syntax that results from their drawings.
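As a purely hypothetical illustration (the shape vocabulary and generated code shown here are a sketch, not the project's fixed specification), a diagram with an assignment rectangle, a decision diamond, and two output shapes might translate to Python like this:

```python
# Hypothetical generated script: an "assign" rectangle sets a variable,
# a "decision" diamond branches, and two output shapes print messages.
count = 3
if count > 0:
    print("Keep going!")
else:
    print("All done!")
```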

How we built it

We built a UWP app in C#, backed by Python and a combination of Computer Vision APIs. For Computer Vision we used the Microsoft Azure Ink Recognition API, OpenCV, and Google Cloud Platform's Cloud Vision API.
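As a rough sketch of the OpenCV side (assuming OpenCV 4 and a clean, binarized photo of a single symbol; the real pipeline combined this with the Azure and Google services), shape classification by contour approximation looks something like:

```python
import cv2

def classify_shape(image_path):
    # Load the drawing as grayscale and invert-threshold it so the
    # pen strokes become white blobs on a black background.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)  # keep the largest blob
    # Approximate the contour with fewer vertices; the vertex count is a
    # crude proxy for the shape class.
    approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
    sides = len(approx)
    if sides == 3:
        return "triangle"
    if sides == 4:
        return "quadrilateral"
    return "ellipse" if sides > 6 else "polygon"
```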

Challenges we ran into

The Computer Vision technologies we used were inconsistent in many places, which made it hard to recognize certain symbols we would have liked to support, like <, >, and =. Some letters and numbers would be jumbled or mismatched. As for shapes, recognition was sometimes quite weak and, depending on the API, would often mistake shapes for variants of themselves (a square might be seen as a rectangle, diamond, rhombus, trapezoid, or simply a quadrilateral).
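One mitigation is to collapse the variant labels each API returns into a single canonical shape before parsing. The table below is an illustrative sketch, not our actual code, and it deliberately trades away distinctions (like square vs. diamond) that the recognizers could not make reliably:

```python
# Hypothetical normalization table: fold the variant labels the
# recognizers return into the canonical shapes the parser expects.
CANONICAL_SHAPES = {
    "square": "quadrilateral",
    "rectangle": "quadrilateral",
    "diamond": "quadrilateral",
    "rhombus": "quadrilateral",
    "trapezoid": "quadrilateral",
    "quadrilateral": "quadrilateral",
    "circle": "ellipse",
    "oval": "ellipse",
    "ellipse": "ellipse",
}

def normalize(label):
    # Fall back to the raw label so unknown shapes still surface in errors.
    return CANONICAL_SHAPES.get(label.lower(), label.lower())
```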

Accomplishments that we're proud of

It was very difficult, but we eventually made great strides toward supporting both mediums, photographs of drawings and tablet ink, in our system. Since the APIs we were accessing produced different output, we created an intermediary JSON format that could be parsed into Python by a language engine script. We then wrote parsers for each API's output that emit our intermediary data.
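The actual format is not reproduced here, but a toy version of the idea, with hypothetical node names, might look like this: each vision-API parser emits a list of instruction nodes, and the language engine turns them into Python source.

```python
import json

# Hypothetical intermediary document: one node per recognized
# instruction, already normalized across the different vision APIs.
INTERMEDIARY = json.loads("""
[
    {"op": "assign", "name": "count", "value": "3"},
    {"op": "print",  "text": "hello"}
]
""")

def to_python(nodes):
    """Toy language engine: translate intermediary nodes into Python source."""
    lines = []
    for node in nodes:
        if node["op"] == "assign":
            lines.append(f'{node["name"]} = {node["value"]}')
        elif node["op"] == "print":
            lines.append(f'print("{node["text"]}")')
        else:
            raise ValueError(f'unknown instruction: {node["op"]}')
    return "\n".join(lines)

print(to_python(INTERMEDIARY))  # count = 3, then print("hello")
```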

What we learned

We learned many lessons about the proper use of Computer Vision APIs and cloud computing platforms, as well as design strategies for drawings that are meant to be fed into computer vision algorithms.

What's next for EduCode

  • EduCode could benefit from additional UI work to make it easier for non-technical users to install and use.
  • The specification could be extended to allow for additional shapes and instructions as the Computer Vision algorithms become more accurate.
  • A more rigorous implementation of the intermediary parser could support additional features while validating input and giving the user hints about syntax errors (see the sketch after this list).
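As a sketch of that validation idea (reusing the hypothetical node format from the engine sketch above), the parser could check each node and collect beginner-friendly hints instead of failing on the first error:

```python
# Hypothetical validator for the intermediary format: gathers readable
# hints rather than raising on the first malformed node.
REQUIRED_FIELDS = {
    "assign": {"name", "value"},
    "print": {"text"},
}

def validate(nodes):
    hints = []
    for i, node in enumerate(nodes, start=1):
        op = node.get("op")
        if op not in REQUIRED_FIELDS:
            hints.append(f"Shape {i}: '{op}' isn't a known instruction.")
            continue
        missing = REQUIRED_FIELDS[op] - set(node)
        if missing:
            hints.append(f"Shape {i}: '{op}' is missing {', '.join(sorted(missing))}.")
    return hints
```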

Built With

C#, Python, UWP, Microsoft Azure (Ink Recognition API), OpenCV, Google Cloud Platform (Cloud Vision API)