Covid-19 disrupted education worldwide. Students and educators had to adjust rapidly to a fully virtual learning environment whilst lacking many of the tools needed to facilitate learning. Several major news organizations have reported that more middle school, high school, and college undergraduates are receiving failing grades than ever before. The fact of the matter is that many students have difficulty learning virtually, and lack the tools needed to succeed in a significantly less structured instruction environment. We (Nathan Cooper and Ken Koltermann) are both passionate about education (we both hope to become college professors once we graduate), and decided to leverage our respective software engineering skills to create the Card Oriented Question/Answer system (CO-Q/As), a mobile app designed to facilitate asynchronous learning in a virtual environment.

What it does

CO-Q/As uses deep learning to automatically extract the most important information from a text source and convert it into question-and-answer flashcards on your mobile device. There are three ways users can provide source material for CO-Q/As to turn into virtual flashcards:

  1. Copy/Paste or type a block of text directly into a provided text box. This is useful for students wishing to make flashcards for notes taken during class, or for selecting only certain parts of online articles.
  2. Provide a URL to a website. CO-Q/As will process the text of the website, and provide flashcards accordingly. This is useful for students if an educator assigns articles to read for homework.
  3. Provide a URL to a YouTube video. Our app will download the subtitles of the YouTube video and create flashcards from the transcript. Students can use CO-Q/As to create flashcards from the transcripts (either manually or automatically generated) of video lectures for their classes, allowing those who struggle to take notes in a virtual environment to learn on their own time.
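The three intake paths above could be routed with a simple input classifier. The sketch below is purely illustrative (the function name and patterns are assumptions, not our actual implementation):

```python
import re

# Hypothetical routing of the three CO-Q/As input types.
YOUTUBE_RE = re.compile(r"(youtube\.com/watch|youtu\.be/)")
URL_RE = re.compile(r"^https?://\S+$")

def classify_source(user_input: str) -> str:
    """Decide whether the input is a YouTube link, a generic
    website URL, or a raw block of pasted text."""
    text = user_input.strip()
    if YOUTUBE_RE.search(text):
        return "youtube"
    if URL_RE.match(text):
        return "website"
    return "text"
```

Each branch would then feed the corresponding extraction step described under "How we built it".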

How we built it

Front End

The front end of CO-Q/As was developed with the Flutter development kit, with the bulk written in the Dart programming language. We focused on the Android platform due to its high adoption and open nature; however, Flutter allows us to easily port our app to iOS in the future. Another reason for choosing Flutter was its existing set of libraries, which let us quickly create a beautiful Material Design UI using Flutter's material library. We also took advantage of the page_flip_builder and carousel_slider libraries for the card-flip and deck-scrolling animations. Flutter also has a huge collection of tutorials and documentation, which we relied on to understand how HTTP requests, persistent storage, and sharing intents are handled in the Flutter ecosystem.


Back End

Currently, the back end of CO-Q/As is a Python Flask server running on a private network. It receives POST requests from the CO-Q/As mobile app, performs some (limited) input sanitization and cleaning, and sends the cleaned text to the model to produce question-and-answer pairs. Given the limited time frame to complete CO-Q/As (training our own neural network for this task would have taken far too long), we used Paul Bricman's Autocards neural network to generate question-and-answer pairs from blocks of text.
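The request flow just described might look like the following minimal Flask sketch. The route name, payload fields, and `qa_model` callable are illustrative assumptions, not our exact implementation; the Flask import is deferred so the cleaning helper can run on its own:

```python
def sanitize(text: str) -> str:
    # Limited cleaning, as described above: collapse runs of whitespace.
    return " ".join(text.split())

def create_app(qa_model):
    """Build the Flask app; `qa_model` stands in for the Autocards model."""
    # Deferred import so the pure helper above works without Flask installed.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/generate", methods=["POST"])  # route name is an assumption
    def generate_cards():
        payload = request.get_json(force=True)
        pairs = qa_model(sanitize(payload.get("text", "")))
        return jsonify({"cards": pairs})

    return app
```

The mobile app would POST `{"text": "..."}` to `/generate` and receive the generated cards back as JSON.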

To get the text from a website provided by the user, we used the Python Goose3 library. To get the transcript of a YouTube video provided by the user, we used the YouTube-DL Python library to automatically extract the subtitles from the provided videos. We used the webvtt Python library to aid in cleaning the transcripts.
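The extraction step could be sketched as below. These are simplified stand-ins, not our exact code: the third-party imports (goose3, youtube_dl, webvtt) are deferred so the pure caption-cleaning helper works on its own, and the output path template is illustrative.

```python
def dedupe_lines(lines):
    """Collapse the consecutive repeated lines that rolling
    YouTube captions produce, keeping only non-empty text."""
    out = []
    for line in lines:
        line = line.strip()
        if line and (not out or out[-1] != line):
            out.append(line)
    return out

def article_text(url: str) -> str:
    # Goose3 extracts the main article body from a web page.
    from goose3 import Goose  # third-party; deferred import
    with Goose() as g:
        return g.extract(url=url).cleaned_text

def download_subtitles(url: str, out_dir: str = ".") -> None:
    # youtube-dl can fetch just the subtitle track, skipping the video.
    import youtube_dl  # third-party; deferred import
    opts = {
        "skip_download": True,      # subtitles only, no video
        "writesubtitles": True,     # manually created subtitles
        "writeautomaticsub": True,  # fall back to auto-generated captions
        "subtitlesformat": "vtt",
        "outtmpl": out_dir + "/%(id)s.%(ext)s",
    }
    with youtube_dl.YoutubeDL(opts) as ydl:
        ydl.download([url])

def transcript_text(vtt_path: str) -> str:
    # webvtt parses the downloaded .vtt file into captions.
    import webvtt  # third-party; deferred import
    lines = []
    for caption in webvtt.read(vtt_path):
        lines.extend(caption.text.splitlines())
    return " ".join(dedupe_lines(lines))
```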

Once CO-Q/As has the question and answer pairs, it puts the pairs in JSON format and returns them to the CO-Q/As mobile app.
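The response packaging is straightforward; a minimal sketch, in which the exact field names are assumptions rather than our actual schema:

```python
import json

def cards_to_json(pairs):
    """Serialize (question, answer) pairs into the JSON payload
    returned to the mobile app; field names are illustrative."""
    return json.dumps(
        {"cards": [{"question": q, "answer": a} for q, a in pairs]}
    )
```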

Challenges we ran into

We ran into several (annoying) issues whilst creating CO-Q/As. They are as follows:

  1. Resource constraints: due to the model size and our limited computing resources (a laptop with an NVIDIA GTX 1660 Ti), we found that we would exhaust all available GPU memory whilst analyzing large (200+ word) blocks of text. We had to create a text-chunking work-around to avoid memory issues. This won't be a problem on larger GPUs.
  2. Networking issues: in our setup, the back end ran on one laptop whilst the front-end mobile application ran on an emulator on another laptop, so that one GPU was fully dedicated to housing the model. With this split, having the two machines communicate over our private network turned out to be the best option.
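The chunking work-around from challenge 1 can be sketched as a simple word-count splitter (a simplified stand-in for our actual logic):

```python
def chunk_text(text: str, max_words: int = 200):
    """Split a long passage into pieces of at most max_words words,
    so each chunk fits within the GPU memory the model needs."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
```

Each chunk is sent through the model separately, and the resulting question-and-answer pairs are concatenated.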

Accomplishments that we're proud of

We are proud of several accomplishments:

  1. We are proud to be able to give back to the education community that fostered our desire to be educators.
  2. We were able to add more features than we anticipated given the time frame, such as the YouTube transcript feature and the ability for users to share video or URL links directly with CO-Q/As via the mobile share feature, without needing to copy/paste text.
  3. We are happy that we were able to deliver a deep learning solution to an important problem on ubiquitous computing platforms.
  4. Teamwork makes the dream work. Nathan has experience with front end development, and Ken has experience with backend development. We stuck to our strengths, and constant communication was essential to our success.

What we learned

Many deep learning models are developed as research tools. We learned how to adapt one such model and incorporate it into a minimum viable product.

What's next for CO-Q/As

CO-Q/As is something we would like to continue working on and eventually deploy on the Google Play Store. The largest hurdle we will face is the cost of deploying a backend in the cloud with enough computing resources to handle requests from thousands of users. We first need to conduct a survey among middle school, high school, and undergraduate educators to learn how to focus CO-Q/As for maximum learning benefit to students. There are a number of tasks we wish to complete in the future for CO-Q/As.

  1. We want to explore ways to partner with cloud service providers and schools to provide CO-Q/As for free to students and educators.
  2. We would like to conduct a pilot study with CO-Q/As to determine if/how students benefit from such an app.
  3. We would like to add multi-language support to CO-Q/As.
  4. We would like to tweak the neural network to handle less structured text (such as auto-generated captions), and to shrink the network; it would be awesome if at some point we could house everything on a smartphone.
