Inspiration

The Google Home. We like the way it works, and we wanted to build something people could use with their IoT and connected devices.

What it does

It records speech from any audio source, including a Google Home, an Amazon Echo, a cell phone, or a computer, and converts the speech to text. Sentiment analysis is done on the text using the Microsoft Text Analytics API, which assigns a score between 0 (saddest) and 1 (happiest) to what was said. Next, we take part of the analyzed text and send it to the Datamuse API to generate words that are similar in meaning, sound, or rhyme to the individual words passed to it. These words are then arranged by an algorithm that makes them sound and work well together.
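
To illustrate the sentiment step, here is a minimal browser-side sketch of scoring a transcript against the Text Analytics sentiment endpoint. The subscription key, region, and v2.0 URL are placeholder assumptions for illustration, not necessarily the exact configuration we used:

    // Hedged sketch: score a transcript with the Text Analytics sentiment endpoint.
    // AZURE_KEY and the westus / v2.0 URL below are placeholder assumptions.
    const AZURE_KEY = 'YOUR-TEXT-ANALYTICS-KEY';
    const SENTIMENT_URL =
      'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment';

    async function sentimentScore(text) {
      const res = await fetch(SENTIMENT_URL, {
        method: 'POST',
        headers: {
          'Ocp-Apim-Subscription-Key': AZURE_KEY,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ documents: [{ id: '1', language: 'en', text }] })
      });
      const json = await res.json();
      return json.documents[0].score; // 0 (saddest) .. 1 (happiest)
    }

    // Example usage:
    // sentimentScore('I had a rough day but the music helped').then(s => console.log('mood:', s));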

Features:
  • Listens to audio
  • Converts speech to text
  • Performs sentiment analysis on the transcribed text
  • Syncs lyrics to beats using rhymes and rhythms (word lookup sketched below)
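
For the rhyme and similar-word step, a minimal sketch against the public Datamuse word-finding API. The choice of the rel_rhy, sl, and ml parameters and the max=10 cap are our illustration, not necessarily the exact queries we ran:

    // Hedged sketch: fetch rhymes, sound-alikes, and near-synonyms from Datamuse.
    async function relatedWords(word) {
      const base = 'https://api.datamuse.com/words';
      const [rhymes, soundsLike, meansLike] = await Promise.all([
        fetch(`${base}?rel_rhy=${encodeURIComponent(word)}&max=10`).then(r => r.json()),
        fetch(`${base}?sl=${encodeURIComponent(word)}&max=10`).then(r => r.json()),
        fetch(`${base}?ml=${encodeURIComponent(word)}&max=10`).then(r => r.json())
      ]);
      return {
        rhymes: rhymes.map(e => e.word),
        soundsLike: soundsLike.map(e => e.word),
        meansLike: meansLike.map(e => e.word)
      };
    }

    // Example usage:
    // relatedWords('jam').then(w => console.log('rhymes:', w.rhymes));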

How we built it

We built it with love at Harvard.

Challenges we ran into

We ran into a few issues setting up and provisioning the three Microsoft APIs. Another challenge was making them work together with the Datamuse API. Finally, with limited time, we weren't able to fully polish the sound generation.

Accomplishments that we're proud of

It's something we'll actually use.

What we learned

Start early. Build fast. Adapt.

What's next for JASS

The audio generation will be improved so the resulting jam sounds better. We also plan to add facial recognition to detect emotions, and we want JASS to listen to users ubiquitously. Our plan is to bring JASS to Google Home, the Echo, and the HoloLens.

Built With

  • azure
  • bing-speech-api
  • microsoft-luis
  • microsoft-text-analytics-api
  • datamuse-word-finding-query-engine
  • vanilla-javascript