While the world has recently started moving towards widespread CS education, buying computers remains a distant dream for most students and educational institutions across the globe. In most developing countries, the ratio of CS students to available computers is highly skewed, and most students still learn programming with pen and paper.

To help solve this problem, we have built a cross-platform mobile application that extracts code from handwritten text and sends it to a Visual Studio Codespace managed by the educator. The educator can then use a single computer and a mini projector to teach and/or test using a whiteboard. We believe this shall help ensure that a lack of computers does not hamper any student's learning.

What it does

The Xamarin.Forms mobile application captures a new image or selects an existing one from the device's gallery. An Azure Function then calls the Azure Computer Vision Read API on the image. The text extracted by the Read API is written to a file, which is pushed to a GitHub repo and opened in a Visual Studio Codespace.
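To make the last step of this pipeline concrete, here is a minimal sketch of turning the Read API's JSON result into file contents. It assumes the v3.x response shape (`analyzeResult.readResults[].lines[].text`); the function name and sample payload are illustrative, not part of the actual app.

```javascript
// Flatten an Azure Read API result into plain text, one recognized
// line per output line. The v3.x response nests recognized text under
// analyzeResult.readResults[page].lines[i].text.
function readResultToCode(result) {
  const pages = result.analyzeResult?.readResults ?? [];
  return pages
    .flatMap(page => page.lines.map(line => line.text))
    .join("\n");
}

// Hypothetical payload shaped like a Read API response:
const sample = {
  analyzeResult: {
    readResults: [
      { lines: [{ text: "def add(a, b):" }, { text: "    return a + b" }] }
    ]
  }
};

console.log(readResultToCode(sample)); // prints the two recovered code lines
```

The joined string is exactly what gets written into the file that is later pushed to the repo.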

How we built it

  • The Xamarin.Forms application was developed using C# and XAML
  • The Azure Function was developed using JavaScript
  • The code was extracted from the handwritten text using the Azure Computer Vision Read API
  • The populated file was pushed to a GitHub repo using the GitHub API
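For the last bullet, the GitHub REST API exposes a "create or update file contents" endpoint (`PUT /repos/{owner}/{repo}/contents/{path}`) that requires the file body to be Base64-encoded. A hedged sketch of building that request follows; the owner/repo names, commit message, and helper name are illustrative.

```javascript
// Build the request for GitHub's create-or-update-file-contents endpoint.
// The API requires Base64-encoded content; `sha` must be supplied only
// when overwriting an existing file. Owner/repo here are placeholders.
function buildContentsRequest(path, text, sha) {
  const body = {
    message: `Add extracted code: ${path}`,
    content: Buffer.from(text, "utf8").toString("base64"),
  };
  if (sha) body.sha = sha; // required by the API when updating a file
  return {
    url: `https://api.github.com/repos/educator/codecapture/contents/${path}`,
    method: "PUT",
    body,
  };
}

const req = buildContentsRequest("student1.py", "print('hello')");
console.log(req.body.content); // Base64 of the file text
```

Sending this with an `Authorization: Bearer <token>` header commits the file, after which the repo can be opened in a Codespace.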

Challenges we ran into

  • We originally built the Azure Function in Python but were unable to implement the Read API call. To solve this, we recreated the entire function in JavaScript, which finally allowed us to implement the OCR functionality and extract code from handwritten text.
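Part of what makes this call tricky is that the Read API is asynchronous: the initial POST returns an `Operation-Location` header that must be polled until the analysis succeeds. The sketch below shows that control flow with an injectable HTTP client (so it can run against a stub); the endpoint, key, and helper names are placeholders, and a real caller would also handle a `failed` status and delay between polls.

```javascript
// Submit an image to the Read API, then poll the returned
// Operation-Location until the analysis completes. `http` is an
// injected fetch-like client; endpoint and key are placeholders.
async function extractText(imageBytes, http, endpoint, key) {
  const submit = await http(`${endpoint}/vision/v3.2/read/analyze`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": key,
      "Content-Type": "application/octet-stream",
    },
    body: imageBytes,
  });
  const opUrl = submit.headers["operation-location"];

  // Poll until the operation reports success.
  let result;
  do {
    const poll = await http(opUrl, {
      method: "GET",
      headers: { "Ocp-Apim-Subscription-Key": key },
    });
    result = poll.json;
  } while (result.status !== "succeeded");

  return result.analyzeResult.readResults
    .flatMap(p => p.lines.map(l => l.text))
    .join("\n");
}

// Stub client simulating one "running" poll before success:
let polls = 0;
const stubHttp = async (url, opts) => {
  if (opts.method === "POST") {
    return { headers: { "operation-location": "https://example/op/1" } };
  }
  polls += 1;
  return polls < 2
    ? { json: { status: "running" } }
    : { json: { status: "succeeded",
                analyzeResult: { readResults: [{ lines: [{ text: "x = 1" }] }] } } };
};

extractText(Buffer.alloc(0), stubHttp, "https://example", "key")
  .then(text => console.log(text)); // prints the recovered line
```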

Accomplishments that we're proud of

  • We used an Azure Function to call the Read API instead of calling it directly from the Xamarin.Forms application, which reduced client-side overhead and increased modularity. This was a huge achievement, as it was our first experience working with a serverless application.
  • This was our first experience using the GitHub API, which we used to update our GitHub repository with the file populated with extracted text.

What we learned

  • We learned how to work with serverless applications
  • We learned how to use the GitHub API
  • We learned how to work individually with Xamarin.Android and Xamarin.iOS for platform-specific permissions, such as the storage permissions needed by the camera functionality
  • We learned how to design our application in an efficient manner to decrease performance overheads and increase modularity

What's next for CodeCapture Mobile

We have various ideas in mind for CodeCapture Mobile that we shall be working on in the near future:

  • Bulk testing of multiple students' code, with the results shown in the Xamarin.Forms application
  • Improving OCR accuracy by implementing a custom ML model
  • Providing direct pseudo-code evaluation
  • Improving the UI/UX of the Xamarin.Forms application to make it more user-friendly
