Inspiration

After doing a lot of research for competitions and school, we realized that the research process is ultimately the same: you read background information and then delve into the specifics of the topic. It was tedious and boring, and we wanted to fix that.

What it does

Divide and Conquer is an application that uses AI, specifically NLP, to ease the research process. As users write, we surface potential sources and images relevant to their topic. For example, if someone is researching Lenz's Law, they only need to type about a sentence before Divide and Conquer comes to the rescue: it supplies images that illustrate Lenz's Law along with scholarly articles that investigate it. This gives the user access to high-quality research materials without strenuous searching. Divide and Conquer is there for your whole research process to guarantee the most efficient use of your time, leaving more time for your personal needs.

How it works technically

We take a summary of what the user has typed so far and query it against the Google Custom Search Engine API, which returns an array of relevant content. We then filter that content using the uClassify API. (We first attempted to use an LDA topic modelling algorithm, but ran out of time to train a model.) Finally, we store the results in a 3-node CockroachDB cluster running on Docker, which reduces response times and keeps us within our Google query limits.
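The search step could look roughly like the sketch below, which sends a summary string to the Custom Search JSON API; the credential names GOOGLE_API_KEY and SEARCH_ENGINE_ID are placeholders for configuration, not the project's actual values.

```python
# Minimal sketch of the search step: query the Google Custom Search JSON API
# with a summary of the user's text. Credentials here are placeholders.
import requests

GOOGLE_API_KEY = "your-api-key"    # assumption: supplied via config
SEARCH_ENGINE_ID = "your-cse-id"   # assumption: supplied via config

def search_sources(summary, num_results=10):
    """Return a list of candidate sources (title, link, snippet) for a summary."""
    response = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": GOOGLE_API_KEY,
            "cx": SEARCH_ENGINE_ID,
            "q": summary,
            "num": num_results,
        },
        timeout=10,
    )
    response.raise_for_status()
    items = response.json().get("items", [])
    # Keep only the fields the frontend needs before filtering and caching.
    return [
        {"title": item["title"], "link": item["link"], "snippet": item.get("snippet", "")}
        for item in items
    ]

if __name__ == "__main__":
    for result in search_sources("Lenz's law electromagnetic induction"):
        print(result["title"], "-", result["link"])
```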

How we built it

We split into two groups of two: frontend and backend. On the frontend, we used Angular.js and Materialize. The backend is written in Python with Flask and the nltk library. We used CockroachDB to store the results of API queries so that repeated queries can be answered from the cache instead of hitting the APIs again. The uClassify API serves as a temporary stand-in for a trained LDA topic model. We also used Google's Cloud Natural Language API for syntax analysis and the Google Custom Search Engine API to crawl the web.
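A minimal sketch of that caching layer is below, assuming a local CockroachDB cluster on the default port 26257 and a hypothetical search_cache table; CockroachDB speaks the PostgreSQL wire protocol, so psycopg2 works as the driver.

```python
# Sketch of caching Google search results in CockroachDB.
# Assumed schema: CREATE TABLE search_cache (query STRING PRIMARY KEY, results STRING);
import json
import psycopg2

conn = psycopg2.connect(
    dbname="research",      # assumption: database name
    user="root",
    host="localhost",
    port=26257,             # CockroachDB default SQL port
    sslmode="disable",      # acceptable for a local Docker cluster
)

def get_cached_results(query):
    """Return cached results for a query, or None on a cache miss."""
    with conn.cursor() as cur:
        cur.execute("SELECT results FROM search_cache WHERE query = %s", (query,))
        row = cur.fetchone()
    return json.loads(row[0]) if row else None

def cache_results(query, results):
    """Store results so repeated queries skip the Google API quota."""
    with conn.cursor() as cur:
        cur.execute(
            "UPSERT INTO search_cache (query, results) VALUES (%s, %s)",
            (query, json.dumps(results)),
        )
    conn.commit()
```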

Challenges we ran into

Integrating the frontend and backend was harder than expected, and we ran into a number of issues getting the NLP pipeline to behave.

Accomplishments that we're proud of

We managed to build a reasonably effective algorithm and a working application.

What we learned

We learned how NLP and neural networks work, especially algorithms like LDA.

What's next for Divide and Conquer

Add a real LDA implementation for topic modelling to replace the uClassify stand-in.
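A hedged sketch of what that could look like with gensim is below; the corpus contents and number of topics are illustrative, not part of the current project.

```python
# Sketch of training an LDA topic model on retrieved article texts with gensim.
# Requires the nltk "punkt" and "stopwords" data packages to be downloaded.
from gensim import corpora, models
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

def train_lda(documents, num_topics=5):
    """Train an LDA model over a list of document strings."""
    stop_words = set(stopwords.words("english"))
    tokenized = [
        [w for w in word_tokenize(doc.lower()) if w.isalpha() and w not in stop_words]
        for doc in documents
    ]
    dictionary = corpora.Dictionary(tokenized)
    corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]
    return models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary, passes=10)
```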
