Determining the prior knowledge required to understand research papers.
Current cutting-edge technologies are highly inaccessible to the general public. With my focus being Machine Learning, I have first-hand experience with this. One of my recent topics of interest has been Distributional Reinforcement Learning, a new method of RL that can lead to significantly faster training, but all of the current content on it exists exclusively within research papers. Digestible content such as online courses or articles takes a while to come out, so if you want to be on the cutting edge of fields such as Quantum Computing, Pharmacogenomics, or Machine Learning, research papers are a must. This platform helps individuals with that problem.
What it does
Enter a research paper's title into the script, and the program extracts the paper's key topics and provides basic information about each one. Essentially, it supplies the prior knowledge an individual needs in order to understand a specific research paper.
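As a rough sketch of that flow (the names here are illustrative, not the actual script): given a paper's abstract, extract candidate topics, then attach a short summary to each. The topic extraction below is a naive capitalized-phrase heuristic standing in for the real keyphrase-extraction step:

```python
import re

def extract_topics(abstract: str) -> list[str]:
    """Naive stand-in for keyphrase extraction: collect runs of two or
    more capitalized words (e.g. 'Distributional Reinforcement Learning')."""
    phrases = re.findall(r"(?:[A-Z][a-z]+\s)+[A-Z][a-z]+", abstract)
    return sorted({p.strip() for p in phrases})

def prior_knowledge(abstract: str, summarize) -> dict[str, str]:
    """Map each extracted topic to a short summary via `summarize`
    (a Wikipedia lookup in the real pipeline)."""
    return {topic: summarize(topic) for topic in extract_topics(abstract)}

abstract = ("We study Distributional Reinforcement Learning and its "
            "connection to the Bellman Equation.")
print(extract_topics(abstract))
# ['Bellman Equation', 'Distributional Reinforcement Learning']
```

In the actual project the extraction is done by Azure Text Analytics and the summaries come from Wikipedia; this sketch only shows the shape of the data flowing between those steps.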
How we built it
I built this project using the arXiv API to fetch research papers, Azure Text Analytics to extract topics, the Wikipedia API to get brief summaries of those topics, and Power BI embedded in a website to display the data.
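The retrieval step can be sketched against arXiv's public query API. The Azure Text Analytics and Wikipedia calls are omitted here because they require credentials and network access; this only shows how a title search request is built (the response is an Atom feed whose `<summary>` element holds the abstract):

```python
from urllib.parse import urlencode

ARXIV_API = "http://export.arxiv.org/api/query"

def arxiv_query_url(title: str, max_results: int = 1) -> str:
    """Build an arXiv API request URL that searches by title.
    `ti:` restricts the search to the title field."""
    params = {
        "search_query": f'ti:"{title}"',
        "start": 0,
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

url = arxiv_query_url("A Distributional Perspective on Reinforcement Learning")
print(url)
```

Fetching that URL and parsing the Atom feed yields the abstract, which is then handed to the keyphrase-extraction step.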
Challenges we ran into
Power BI's lack of certain visuals was a big challenge. I was hoping to visualize the data in a 3D scatterplot with tooltips showing the summaries, but this was not possible. Power BI's Python integration was also imperfect, and the bugs I hit during that process led me to separate the Python script from Power BI. Finally, Power BI dashboards refresh only once per day, meaning any data entered would only appear on the report a day later.
What's next
Full integration, so that users can enter research papers themselves and have them flow through the whole pipeline, updating the live Power BI report without my intervention. This is very possible and would make the system fully autonomous.