Our team of three (Abhishek, Mudit, and I) are all undergraduate or recent-graduate computer science students. As CS students, we've spent countless hours writing out code for assignments, practice, and tests. In the era of COVID-19, it's incredibly difficult to share handwritten code, or to copy a snippet of code from a textbook and message it to friends. That difficulty is what inspired our project.

About CodeVision

Our application, CodeVision, lets a user take a photo of code they want to test-run; the text in that image is extracted using the Google Vision API. The extracted text is then returned to the mobile application, where the code is run with the help of the HackerEarth API. The purpose of CodeVision is simple: if you see code in a book or written by someone else and are curious what output it produces, you can quickly snap a picture with the app, run it, and see the result. With this functionality you can run handwritten code in a matter of seconds instead of retyping it in a text editor.

How we built it

The application was built in two parts:


The frontend of the application was built using React Native, which let us develop a hybrid application that works across multiple platforms. The app takes a snapshot of the code we wish to run and uploads it to the backend deployed on Google Cloud, where the text is interpreted using the Google Vision API. Once done, the editable text is returned to the application's code editor, and from there the code can be run using the HackerEarth API.
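The frontend's two calls to the backend can be sketched roughly as below. The base URL, route names, and payload fields here are placeholders for illustration, not the real ones from our deployment.

```javascript
// Placeholder backend address; the real app points at our Google Cloud deployment.
const BACKEND_URL = 'https://codevision-backend.example.com';

// Build the request that uploads a captured photo (base64-encoded) for OCR.
function buildExtractRequest(photoBase64) {
  return {
    url: `${BACKEND_URL}/extract`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ image: photoBase64 }),
    },
  };
}

// Build the request that asks the backend to run the (possibly edited) code.
function buildRunRequest(source, lang, stdin = '') {
  return {
    url: `${BACKEND_URL}/run`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ source, lang, input: stdin }),
    },
  };
}

// In the app these are sent with fetch, e.g.:
//   const { url, options } = buildExtractRequest(photo.base64);
//   const { text } = await (await fetch(url, options)).json();
```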


The backend of the application was developed using Express and Node.js. It works as middleware between the React Native application and the Google Vision and HackerEarth APIs, providing seamless connectivity between the two. Deployed on Google Cloud, the backend receives images from the frontend and calls the Google Vision API to convert each image to text, then returns the editable text to the frontend. When the code is submitted and run, the same backend connects the frontend to the HackerEarth API, which compiles and runs the code, and forwards HackerEarth's response back to the application.
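The backend's glue between the two APIs can be sketched as the two helpers below. The Vision response shape (`textAnnotations[0].description` holding the full detected block) matches the Vision API's text-detection output, while the HackerEarth field names are assumptions for illustration rather than the exact API contract.

```javascript
// Pull the full detected text out of a Google Vision text-detection response:
// the first textAnnotation's description contains the entire detected block.
function textFromVision(visionResult) {
  const annotations = visionResult.textAnnotations || [];
  return annotations.length ? annotations[0].description : '';
}

// Shape the body forwarded to HackerEarth's code-evaluation endpoint.
// Field names here are illustrative, not the exact API contract.
function toHackerEarthPayload(source, lang, stdin, clientSecret) {
  return { client_secret: clientSecret, source, lang, input: stdin };
}

// In Express, these helpers sit behind routes roughly like:
//   app.post('/extract', async (req, res) => {
//     const [result] = await visionClient.textDetection({
//       image: { content: req.body.image },
//     });
//     res.json({ text: textFromVision(result) });
//   });
```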

What we learned

Through creating CodeVision, we deepened our knowledge of React Native, Google Cloud APIs, and creating demos. It was a great experience working as a team and allocating tasks to accomplish our goals, whether setting up the API hosting or fleshing out the UI for the application. Additionally, our team included members from both India and the U.S., and navigating time zones to work together was a learning experience in itself. We had an amazing time connecting across countries around a common topic, and became close along the way.


Integrating the Google Vision API with Node.js and hosting the Node.js backend on Google Compute Engine was a bit tricky, and the React Native libraries for camera and storage produced some unfamiliar errors that took hours to resolve. Completing this complex project on time was a big challenge in itself, and because of the hours lost we were not able to add our demo video.

What's next for CodeVision

In the future, we look forward to adding more functionality to the application, such as internal code sharing, more detailed error handling, and syntax highlighting. We are also looking to improve our UI/UX. Moreover, as the Google Vision API improves, so will our application, making it more reliable.
