Inspiration

Too often, education today is one-size-fits-all, leaving auditory and visual learners in the dark. With StudyPal, people who are unable to read, or who simply learn best by listening, can study in a fun, interactive, and accessible way.

What it does

StudyPal can be used both with the built-in Google Assistant on Android and on Google Home. Once the user launches the application, the program reads a welcome prompt and offers a choice of quiz subjects: macroeconomics, microeconomics, or biology. After the user chooses a subject, StudyPal quizzes them out loud with a series of multiple-choice questions. The user can change the subject at any time.

How we built it

Highlight: StudyPal was developed with Jovo in the JetBrains WebStorm IDE, which integrates with Node.js. The project was configured on the Google Actions Console and used Dialogflow for natural language processing. We created an agent (quizlet), under which we configured three intents: BiologyIntent, MacroIntent, and MicroIntent. Training phrases were added to each intent to account for as many variations of user utterances as possible. For example, BiologyIntent included the training phrases "biology", "bio", and "open bio"; if the user said any of these, BiologyIntent would be invoked. In the Jovo code, we created arrays of questions for each subject. When a student invokes BiologyIntent, our Action asks questions and accepts answers until all questions are completed.
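To make that structure concrete, here is a minimal sketch in the style of a Jovo (v2-era) handler. It is illustrative rather than our exact code: the question text is invented and our real arrays were much longer.

```javascript
'use strict';
// Minimal sketch of the pattern described above (not our exact code).
const { App } = require('jovo-framework');
const app = new App();

// Each subject maps to an array of question/answer pairs.
const questions = {
    biology: [
        { q: "Which organelle produces most of the cell's ATP?", a: 'mitochondria' },
        { q: 'What molecule carries genetic information?', a: 'dna' },
    ],
    macro: [{ q: 'What does GDP stand for?', a: 'gross domestic product' }],
    micro: [{ q: 'What happens to quantity demanded when price rises?', a: 'it falls' }],
};

app.setHandler({
    // Jovo maps Dialogflow's Default Welcome Intent to LAUNCH.
    LAUNCH() {
        this.ask('Welcome to Study Pal! Would you like to study macroeconomics, microeconomics, or biology?');
    },
    // Dialogflow routes "biology", "bio", or "open bio" here.
    BiologyIntent() {
        this.$session.$data.subject = 'biology';
        this.$session.$data.index = 0;
        this.ask(questions.biology[0].q);
    },
    // MacroIntent and MicroIntent follow the same pattern.
});

module.exports = { app };
```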

Configuration and technical details: On the Google Actions Console we added a display name, "Study Pal", which would appear in the public Actions directory once the Action went live. When a student says "Ok Google, talk to Study Pal", our Action is invoked. Google Assistant and Dialogflow work together to convert the speech to text and return the Default Welcome Intent, preconfigured with a welcome message. That welcome text is converted back to speech, spoken aloud by Google Home, which then waits for the user's response. Once the user specifies which subject they would like to study, Google Assistant and Dialogflow again convert speech to text and identify the matching intent, which is routed to the handler in our code that fulfills the request. The handler sends the next question from the array back to Google Home, starting another round of interaction.
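The question-and-answer round trip then looks roughly like the handler below, continuing the sketch above. Note that AnswerIntent and its "answer" parameter are hypothetical names; the intent and parameter we actually configured in Dialogflow may have been named differently.

```javascript
// Continuing the sketch above; in practice this handler would live in the
// same app.setHandler({...}) call as BiologyIntent. "AnswerIntent" and its
// "answer" parameter are hypothetical Dialogflow configuration.
app.setHandler({
    AnswerIntent() {
        const { subject, index } = this.$session.$data;
        const current = questions[subject][index];
        const said = this.$inputs.answer.value.toLowerCase();
        const feedback = said === current.a
            ? 'Correct!'
            : `Sorry, the answer was ${current.a}.`;

        if (index + 1 < questions[subject].length) {
            // More questions left: advance and start another round.
            this.$session.$data.index = index + 1;
            this.ask(`${feedback} Next question: ${questions[subject][index + 1].q}`);
        } else {
            // Quiz finished: end the session.
            this.tell(`${feedback} That was the last question. Great job!`);
        }
    },
});
```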

It is worth noting that, due to time constraints, our Action was not hosted in the cloud, which is the standard practice in Action development. We instead used the locally run Express server that ships with Jovo: typing the command $ jovo run started the server at http://localhost:3000 and exposed a public webhook subdomain, which we submitted to Dialogflow as the fulfillment endpoint.

Finally, we tested using the Google Assistant app on both iOS and Android.

Challenges we ran into

Originally, we started developing this project as an Alexa skill. However, we ran into issues with the Lambda code on Amazon Web Services and had difficulties running the code in the Amazon Alexa Developer Console. After much deliberation, we switched to Google. Even then we hit obstacles: the basic format we were following did not allow us to expand the quiz beyond a single subject, so we decided to start from scratch and wrote the entire program in JavaScript. We encountered many problems with the code, especially with making sure questions didn't repeat (one way to solve this is sketched below).
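One simple way to guarantee no repeats within a quiz (a sketch of the idea, not necessarily the exact fix we shipped) is to shuffle each subject's array once per session and then walk the shuffled copy front to back:

```javascript
// Fisher-Yates shuffle: returns a shuffled copy so each session gets the
// questions in a random order with no repeats.
function shuffle(array) {
    const copy = [...array];
    for (let i = copy.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [copy[i], copy[j]] = [copy[j], copy[i]];
    }
    return copy;
}

// e.g. at the start of a quiz:
//   this.$session.$data.quiz = shuffle(questions.biology);
// then serve quiz[0], quiz[1], ... so a question can never come up twice.
```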

Accomplishments that we're proud of

We are very proud to have created a functioning program that runs as intended. We are also proud to have built something that people, especially those with learning impairments or special needs, can use on their educational journey.

What we learned

For those of us who were unfamiliar with JavaScript, we learned an enormous amount about the differences between JavaScript and Java, especially in syntax and logic. One of the biggest differences we picked up quickly is that JavaScript variables have no declared types and are instead introduced with "var", "let", or "const"; the same applies to arrays and functions, which have no single fixed return type. All three of us also had our first exposure to developing voice assistant programs for platforms such as Amazon Alexa and Google Assistant. The biggest takeaway there was working with intents: covering many variations of the same input is what establishes a reliable connection between the user and the program.
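A few lines capture the adjustment for the Java programmers among us:

```javascript
// No declared types: the same variable can hold different types over time.
let answer = 42;          // a number...
answer = 'mitochondria';  // ...and now a string, with no compile-time error

const mixed = [1, 'two', true];   // arrays can hold mixed types
function echo(x) { return x; }    // functions have no fixed return type
```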

What's next for StudyPal

In the future, we would like to expand StudyPal to cover more subjects, making learning more accessible and accommodating for more people. We would also like to add explanations for incorrect answers, and possibly include helpful links and videos for further studying.

Acknowledgements

We would like to thank the Technica Hackathon team for all their hard work in making this unforgettable experience possible.

Special thanks to the Technica mentors, especially Mark Mohades.

Built With

JavaScript, Node.js, Jovo, Express, Dialogflow, Google Actions
