Inspiration

The inspiration behind the product was that current AI models are great, but they could be even better if they knew how the user was feeling. With that context, a model could feel more human, delivering its messages based on social cues.

What it does

Our app streams a live video feed from the computer to hume.ai to give ChatGPT context on how to respond. It tracks the user's facial expressions and, based on them, dynamically tailors its responses to inquiries and requests.
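To make that concrete, here is a minimal sketch of the final step: injecting the emotion detected by Hume into a ChatGPT request. This is illustrative rather than our exact code; the model name and prompt wording are assumptions, and it would run server-side (e.g., in a Next.js API route).

```ts
// Sketch: ask ChatGPT while passing along the user's detected emotion.
// `emotion` would come from Hume's face model (see "How we built it").
// The model name ("gpt-4o-mini") and prompt text are illustrative assumptions.
async function askWithEmotion(userMessage: string, emotion: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content:
            `The user currently appears to be feeling "${emotion}". ` +
            "Adjust your tone and wording to that social cue.",
        },
        { role: "user", content: userMessage },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```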

How we built it

We built it using OpenAI's API and Hume's API, written in TypeScript and JavaScript. We used a WebSocket for live video streaming and frame capture, and we put the frontend together with Next.js, React, HTML, CSS, and Tailwind.
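For the streaming piece, the browser grabs webcam frames onto a canvas, base64-encodes them, and sends them over the WebSocket. The sketch below assumes a Hume-style streaming endpoint; the URL, the apikey query parameter, and the response shape are assumptions used to illustrate the flow, so check Hume's docs before reusing it.

```ts
// Sketch: stream webcam frames to an emotion-analysis WebSocket.
// Assumes `video` is an HTMLVideoElement already playing getUserMedia output.
const HUME_WS_URL = "wss://api.hume.ai/v0/stream/models"; // assumed endpoint
const HUME_API_KEY = "<your-hume-api-key>";

function startEmotionStream(
  video: HTMLVideoElement,
  onEmotion: (name: string) => void,
) {
  const ws = new WebSocket(`${HUME_WS_URL}?apikey=${HUME_API_KEY}`);
  const canvas = document.createElement("canvas");

  ws.onopen = () => {
    // Send roughly one frame per second to keep the stream light.
    setInterval(() => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext("2d")!.drawImage(video, 0, 0);
      // Drop the "data:image/jpeg;base64," prefix to get raw base64.
      const frame = canvas.toDataURL("image/jpeg").split(",")[1];
      ws.send(JSON.stringify({ models: { face: {} }, data: frame }));
    }, 1000);
  };

  ws.onmessage = (event) => {
    const msg = JSON.parse(event.data);
    // Assumed response shape: a scored list of emotions per detected face.
    const emotions = msg.face?.predictions?.[0]?.emotions ?? [];
    if (emotions.length > 0) {
      const top = emotions.reduce((a: any, b: any) => (b.score > a.score ? b : a));
      onEmotion(top.name);
    }
  };
}
```

The strongest emotion label from each frame is then handed to the ChatGPT call sketched above.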

Challenges we ran into

We ran into the challenge of figuring out how to use a WebSocket with the live video stream. Another challenge we faced was merging everyone's code at the end.

Accomplishments that we're proud of

We are proud that we have an MVP we can demo at the end of the hackathon. We are also proud of our team's willingness to stay up late to finish the product.

What we learned

We learned a lot of new technologies and coding techniques: WebSockets were new to us, and so was Next.js. We also learned that collaboration is essential for success, that it pays to dive deep into the docs when you are stuck, and that you should make use of all the resources you have, because you may find insights by utilizing them.

What's next for AIMLearn

Integrating session IDs and Google Cloud API OAuth

Built With

css, html, hume.ai, javascript, next.js, openai, react, tailwind, typescript, websockets