Inspiration

In the wake of the COVID-19 pandemic, more people than ever suffer from poor mental health and social anxiety. As out-of-state freshmen at a large public university, the founders of OpenTalk have experienced this social anxiety firsthand: we've all stressed over how we would build new connections in a new environment after a mostly-online high school experience. On top of everyday interpersonal connections, we've also faced countless interviews, presentations, meetings with professors, and other high-stress encounters.

In all of these situations, we handled our stress in different ways: pacing back and forth, thinking to ourselves, talking to a wall, etc. If only there were an app to help us practice these scenarios...

And that's where we come in. With large language model technology reaching an inflection point, we pounced on the opportunity to use generative AI and augmented reality to provide a realistic environment for simulating social interactions that anyone can find useful.

What it does

OpenTalk is an app that allows the user to practice social situations with an AI chatbot in augmented reality. Currently, we have implemented three scenarios which the user can simulate: a mock job interview, ordering coffee from a barista, and free conversation. We plan to implement more situations in the future, as well as customizable characters and voices.
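
One natural way to model these scenarios (a sketch of the idea, not necessarily how our code is organized) is a Swift enum that maps each scenario to a system prompt framing the AI character's behavior. The type name and prompt text below are hypothetical:

```swift
// Hypothetical sketch: each practice scenario maps to a system prompt
// telling the language model which character to play.
enum Scenario: String, CaseIterable {
    case jobInterview = "Mock Job Interview"
    case coffeeOrder  = "Ordering Coffee"
    case freeTalk     = "Free Conversation"

    // Instruction sent to the model before the user starts speaking.
    var systemPrompt: String {
        switch self {
        case .jobInterview:
            return "You are a hiring manager conducting a friendly but rigorous job interview."
        case .coffeeOrder:
            return "You are a barista taking a customer's order at a busy coffee shop."
        case .freeTalk:
            return "You are a friendly stranger making casual conversation."
        }
    }
}
```

Structured this way, adding a new situation is mostly a matter of adding a case and writing a prompt, which is part of why we're confident about expanding the scenario list later.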

How we built it

The core of our iOS app is built in the Swift programming language, using Apple's built-in frameworks for frontend design (SwiftUI) and augmented reality (ARKit). We used AVKit to capture and interpret the user's speech, and the OpenAI API to generate humanlike responses. Finally, each response from OpenAI was sent to the Google Text-to-Speech API and spoken aloud to the user.
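
To make that pipeline concrete, here is a minimal sketch of the middle step: sending the user's transcribed speech to OpenAI's chat completions endpoint and decoding the reply. The type names, the model choice, and the hardcoded barista prompt are illustrative assumptions rather than our exact production code, and error handling is trimmed for brevity:

```swift
import Foundation

// Minimal Codable types mirroring the OpenAI chat completions schema.
struct ChatMessage: Codable {
    let role: String
    let content: String
}

struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

struct ChatResponse: Codable {
    struct Choice: Codable { let message: ChatMessage }
    let choices: [Choice]
}

// Sends the transcript to OpenAI and returns the character's reply.
// The model name and system prompt here are placeholders.
func generateReply(to transcript: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(ChatRequest(
        model: "gpt-3.5-turbo",
        messages: [
            ChatMessage(role: "system",
                        content: "You are a barista taking a customer's coffee order."),
            ChatMessage(role: "user", content: transcript)
        ]
    ))

    let (data, _) = try await URLSession.shared.data(for: request)
    let decoded = try JSONDecoder().decode(ChatResponse.self, from: data)
    return decoded.choices.first?.message.content ?? ""
}
```

The string this returns is what we then hand to the text-to-speech step so the AR model can speak it aloud.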

Challenges we ran into

We ran into many technical challenges along the way. The first was a skills gap: none of us had ever used AR/VR technologies or 3D modeling software such as Blender. Secondly, we ran into many issues using the Google Text-to-Speech API and getting our AR model to "speak" back to us.

Accomplishments that we're proud of

We are extremely proud to have produced a working prototype of the product we envisioned. It was exhilarating to see our AR model spring to life and speak for the first time.

What we learned

Over the course of the project, we learned several in-demand technologies: ARKit, the OpenAI API, and SwiftUI, to name a few. Beyond technical skills, we also learned what goes on behind the scenes of ChatGPT from OpenAI, and what makes a successful idea from Y Combinator, in their respective HackerX sessions.

What's next for OpenTalk

  1. Improvements to AR models and animations for more immersion
  2. A more effective feedback system, with additional features such as analysis of the user's tone and body language
  3. General quality-of-life changes and polishing the prototype into the app we envisioned

Built With

  • arkit
  • avkit
  • openai
  • swift
  • swiftui