Inspiration
We were inspired by the subreddit r/explainlikeimfive as well as Wired's 5 Levels video series. With these inspirations in mind, our group wanted to make complex, difficult-to-understand topics accessible to everyone without the need to ask a real-life expert. We also expanded on this idea by accepting not only text prompts, but image and video prompts as well.
What it does
ELI_ is a web app geared toward users of all levels of education. ELI_ takes a text, image, or video prompt and explains it to the user at their preferred level of understanding, making information more accessible to all of ELI_'s users for free. Whether the reader is five years old or an expert on a given topic, ELI_ will explain any subject to any person, making difficult material easier to grasp.
How we built it
Our team used a variety of technologies, almost all of which were new to us. The frontend was created with React.js and the backend with Node.js. The backend also uses Google Cloud APIs: the Cloud Vision API for text detection in images, the Cloud Video Intelligence API for video transcription, and the Cloud Storage API to organize the uploaded input files. The prompts derived from these inputs are then fed into the OpenAI API using the GPT-3.5 Turbo model, which explains the information at the user's specified level. The backend also uses a MongoDB database to store the explanations, which three quarters of our team had not used before. In addition to all of these technologies, we used Figma for conceptual designs and for our presentation.
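The image branch of this pipeline can be sketched roughly as below. The function names, prompt wording, and audience levels are our own illustration, but the `@google-cloud/vision` and `openai` client calls match the libraries' published Node.js APIs; credentials are assumed to come from `GOOGLE_APPLICATION_CREDENTIALS` and `OPENAI_API_KEY`.

```javascript
// Sketch of the image → text → explanation pipeline (illustrative names).

// Build the system prompt for a given audience level, e.g. "five-year-old"
// or "expert". Pure function, so it is easy to test in isolation.
function buildPrompt(level) {
  return `Explain the following as if the reader were a ${level}. ` +
         'Keep the explanation clear and self-contained.';
}

// Extract text from an uploaded image with the Cloud Vision API.
async function extractTextFromImage(filePath) {
  // Lazy require so this module loads even where the SDK isn't installed.
  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();
  const [result] = await client.textDetection(filePath);
  return result.fullTextAnnotation ? result.fullTextAnnotation.text : '';
}

// Ask GPT-3.5 Turbo to explain the extracted text at the requested level.
async function explainAtLevel(text, level) {
  const OpenAI = require('openai');
  const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: buildPrompt(level) },
      { role: 'user', content: text },
    ],
  });
  return completion.choices[0].message.content;
}
```

The video branch works the same way, with the Video Intelligence API's transcription output taking the place of the Vision API's detected text.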
Challenges we ran into
We had trouble setting up everyone's development environment, in particular sharing secret API keys without accidentally publishing them. After a few mistakes, we got every environment working and integrated all of the APIs together.
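One common pattern for this is to keep keys in a git-ignored `.env` file and read them through `process.env` at startup; the variable names below are illustrative, not necessarily the ones we used.

```javascript
// Keep secrets out of the repository: put them in a .env file that is listed
// in .gitignore, and load them at startup. The dotenv package is one common
// loader; exporting the variables from the shell also works.
//
// .env (never committed):
//   OPENAI_API_KEY=sk-...
//   MONGODB_URI=mongodb+srv://...

// require('dotenv').config(); // uncomment once dotenv is installed

// Fail fast with a clear message if a required key is missing.
function getRequiredEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```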
Accomplishments that we're proud of
We're proud that we implemented the project's full functionality within the 36 hours, and of how well everything works.
What we learned
Since none of us had used any of these APIs before, we all had to learn not only how to set each one up but also how to pass data between them and integrate them smoothly. We also learned how to work with MongoDB databases from JavaScript and tie that into the APIs we were using, and multiple team members learned new JavaScript frameworks.
What's next for ELI_
Though essentially all of ELI_'s features are implemented, in the future we would like to improve the UI and bring it in line with our Figma designs, automatically merge similar entries in the MongoDB database to save space and reduce redundancy, and improve text processing to ignore irrelevant text, especially in image and video inputs.
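The merge step we have in mind could look roughly like this. The document shape (`{ topic, level, explanation }`) and the normalization rule are assumptions for illustration; in production this would run against the MongoDB collection (for example via an aggregation), but the grouping logic is the same.

```javascript
// Sketch: collapse near-duplicate explanation entries in memory.

// Normalize a (topic, level) pair so "Black Holes " and "black holes"
// at the same level collide on one key.
function topicKey(doc) {
  return `${doc.topic.trim().toLowerCase()}|${doc.level}`;
}

// Keep the first entry per (topic, level) pair and drop the rest.
function mergeEntries(docs) {
  const seen = new Map();
  for (const doc of docs) {
    const key = topicKey(doc);
    if (!seen.has(key)) seen.set(key, doc);
  }
  return [...seen.values()];
}
```

A smarter version could compare explanations with a text-similarity measure instead of exact topic matches, but even this key-based pass removes the most obvious redundancy.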
Built With
- css3
- figma
- google-cloud
- google-cloud-storage-api
- google-cloud-video-intelligence-api
- google-cloud-vision-api
- html5
- javascript
- mongodb
- node.js
- openai-api
- react
