We're all students who write research papers day to day and consume and process a lot of data. We wanted to make researching easier for everyone, and we think we've accomplished that.
What it does
LeoResearchBot lets you use your voice to ask Amazon Echo to automatically research and summarize any topic you please. You can then access your summaries and research sources via LeoPortal, a web app we created from scratch.
How I built it
We used various frameworks, APIs, and custom algorithms to create a unique and highly accessible virtual research assistant. The backend consists of a Python-based Amazon Alexa skill, which we built to interact with a search spider and a summarization algorithm of our own. The TF-IDF (term frequency–inverse document frequency) based summarization algorithm creates a highly readable preview of the returned data, which is then fed back to the user via Amazon Alexa's voice and the LeoPortal web app.
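The write-up doesn't include our scoring code, but the core idea of TF-IDF extractive summarization can be sketched in a few lines: treat each sentence as a "document", weight each word by how frequent it is in the sentence and how rare it is across sentences, and keep the top-scoring sentences in their original order. This is a minimal illustration, not our production engine:

```python
import math
import re
from collections import Counter

def summarize(text, num_sentences=2):
    """Extractive summary: score each sentence by the average
    TF-IDF weight of its words, then keep the top-scoring
    sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(sentences)
    # Document frequency: how many sentences contain each word.
    df = Counter(w for words in tokenized for w in set(words))
    scores = []
    for words in tokenized:
        if not words:
            scores.append(0.0)
            continue
        tf = Counter(words)
        # TF-IDF per distinct word, averaged over the sentence.
        total = sum(
            (tf[w] / len(words)) * math.log(n / df[w]) for w in tf
        )
        scores.append(total / len(tf))
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)
    keep = sorted(top[:num_sentences])
    return " ".join(sentences[i] for i in keep)
```

Sentences full of words that appear everywhere score low (their IDF is near zero), while sentences with distinctive vocabulary bubble up, which is what makes the preview readable at a glance.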
Challenges I ran into
Developing a summarization algorithm was damn near impossible. We went from framework to framework and university paper to university paper, struggling to conceptualise and build a TensorFlow model that could be trained in under 24 hours. Eventually, at around 4 am, we reached a solution: a summarization engine built on TF-IDF and several supporting algorithms.
Accomplishments that I'm proud of
We built an elegant solution for securely pairing users' Echo devices with our web portal, which grants near-instantaneous access to user data. Additionally, using Bootstrap, Node.js, and Express.js, we were able to create a sophisticated web portal for users to access their data. Lastly, our Alexa skill was built with an inadequately documented framework, yet we managed to make it work reliably.
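The write-up doesn't spell out the pairing mechanism, but a common pattern for linking a voice device to a web account is a short one-time code that Alexa reads aloud and the user types into the portal. A minimal sketch of that idea (all names and the in-memory store are hypothetical, not our actual implementation):

```python
import secrets
import time

PAIRING_TTL = 300  # seconds a pairing code stays valid

# In-memory store mapping pairing codes to (device_id, issued_at).
_pending = {}

def issue_pairing_code(device_id):
    """Skill side: generate a short numeric code that Alexa can
    read aloud for the user to enter in the web portal."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[code] = (device_id, time.time())
    return code

def redeem_pairing_code(code, portal_user_id):
    """Portal side: link the Echo device to the portal account if
    the code exists, is unexpired, and hasn't been used before."""
    entry = _pending.pop(code, None)  # single-use: pop removes it
    if entry is None:
        return None
    device_id, issued_at = entry
    if time.time() - issued_at > PAIRING_TTL:
        return None
    return {"user": portal_user_id, "device": device_id}
```

Making the code single-use and short-lived is what keeps the pairing secure despite the code being only six digits.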
What I learned
What's next for LeoPortal
We plan to improve our summarization algorithm to produce better results that are more suited to each user, and to migrate our technology to other platforms, including Siri.