The idea for the app came from our own trouble remembering compound names. We figured it would be effective and intuitive to speak a query and receive a result in real time.

What it does

Simply tell the app a compound's structure and it returns the compound's name ("What is H2O?" returns "Water"). The user speaks into the phone, the speech is converted from audio to a string using the Bing Speech API, and we run that string through Wolfram's powerful engine to return a result to the user.
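The Wolfram step can be sketched roughly as follows: once the Bing Speech API hands back the recognized text, the app builds a request URL for Wolfram's Short Answers endpoint. This is a minimal sketch, not our exact code; the `DEMO` app ID is a placeholder (a real key comes from the Wolfram developer portal), and the actual app would fetch this URL over the network:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class WolframQuery {
    // Placeholder app ID; substitute a real key from the Wolfram developer portal.
    static final String APP_ID = "DEMO";

    // Build a Short Answers API URL from the text recognized by the speech service.
    static String buildUrl(String spokenQuery) {
        String encoded = URLEncoder.encode(spokenQuery, StandardCharsets.UTF_8);
        return "https://api.wolframalpha.com/v1/result?appid=" + APP_ID + "&i=" + encoded;
    }

    public static void main(String[] args) {
        // For "What is H2O?" the endpoint replies with plain text such as "water".
        System.out.println(buildUrl("What is H2O?"));
    }
}
```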

How I built it

We worked as a team! We first trained a model to recognize the intent of our speech: we processed the audio into text and used NLP to predict the intent. We then deserialized the model's output into a readable format, sent it as a query to the Wolfram engine, and returned the desired result. Finally, we built the UI to polish the product.
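The intent-prediction step can be illustrated with a toy classifier. This is only a keyword-based stand-in for the trained NLP model (the real prediction came from a cloud service), and the intent names here are invented for illustration:

```java
public class IntentClassifier {
    // Toy stand-in for the trained intent model: route an utterance to an intent
    // label by keyword matching. Intent names are illustrative only.
    static String classify(String utterance) {
        String text = utterance.toLowerCase();
        if (text.contains("what is") || text.contains("name of")) {
            return "CompoundLookup"; // send this utterance on to the Wolfram query step
        }
        return "None"; // unrecognized intent; ask the user to rephrase
    }

    public static void main(String[] args) {
        System.out.println(classify("What is H2O?"));
    }
}
```

In the real pipeline this decision came from the trained model's serialized response, which we deserialized before forwarding the query to Wolfram.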

Challenges I ran into

Getting our product from idea to life was the first challenge. As we narrowed our expectations for the hackathon, we ran into problems integrating the APIs; certain libraries weren't working or wouldn't upload. After hours of working toward solutions, we dropped some original features and found other approaches while keeping our focus on the original intent. Our limited knowledge of the Java language and the Android environment made all of this harder.

Accomplishments that I'm proud of

When we started brainstorming, the project was minimal. As we gathered knowledge at the workshops, we were inspired and decided to raise our expectations and produce a great product, one we could look back on and, if nothing else, be proud of our ideas and actions! We built intelligence into our product with NLP and speech recognition, two things each of us had been intimidated by or didn't think we'd approach anytime soon.

What I learned

We grew, learned, and overcame a ton of boundaries. None of us had any Android experience or was prepared for the challenges. We learned to be more collaborative, and on the technical side we learned about integrating APIs, working with Microsoft Cognitive Services, the Java language, the Android environment, speech recognition, cloud services, bots, and NLP.

What's next for comBot

We are going to keep developing the app, potentially adding ML and AI to make it more interactive and responsive.

Built With
