Inspiration
We will need a more dexterous way to interface with A.I.
What it does
The following features exist, albeit in less-than-complete form:
- Speech-to-text in the browser with Whisper and WASM (sketched below)
- Control Unreal Engine Blueprints with GPT-4
- Control GPT-4 with your body language, with spoken responses via AWS Polly
- Collect OpenTelemetry data for fine-tuning your models (sketched below)
- End-to-end testing with Playwright and Storybook (sketched below)
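The branches don't pin down the exact Whisper setup, so here is a minimal sketch of in-browser speech-to-text using transformers.js, which runs Whisper over ONNX/WASM. The model name and the 16 kHz mono input are illustrative assumptions, not necessarily what this project uses.

```typescript
import { pipeline } from '@xenova/transformers';

// Loads a Whisper checkpoint compiled to ONNX and runs it via WASM in the browser.
// 'Xenova/whisper-tiny.en' is an example model; swap in whichever checkpoint fits.
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');

// `samples` is assumed to be 16 kHz mono PCM, e.g. decoded from a MediaRecorder blob.
async function transcribe(samples: Float32Array): Promise<string> {
  const result = (await transcriber(samples)) as { text: string };
  return result.text;
}
```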
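For the OpenTelemetry piece, one plausible shape is wrapping each model call in a span and recording the prompt and completion as attributes, so the traces can later be exported as fine-tuning data. The span name, attribute keys, and `fetchCompletion` helper below are hypothetical; only the `@opentelemetry/api` calls are real.

```typescript
import { trace } from '@opentelemetry/api';

// Assumes an OpenTelemetry SDK and exporter (e.g. into Grafana) are configured elsewhere.
const tracer = trace.getTracer('aiio');

// Hypothetical stand-in for the real GPT-4 call.
async function fetchCompletion(prompt: string): Promise<string> {
  return `echo: ${prompt}`;
}

// Wrap each model call in a span whose attributes double as training examples.
async function tracedCompletion(prompt: string): Promise<string> {
  return tracer.startActiveSpan('llm.completion', async (span) => {
    try {
      span.setAttribute('llm.prompt', prompt);
      const completion = await fetchCompletion(prompt);
      span.setAttribute('llm.completion', completion);
      return completion;
    } finally {
      span.end();
    }
  });
}
```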
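And for the end-to-end testing, Playwright can drive Storybook's isolated story iframe directly. The story id and port below are placeholder values, not this repo's actual configuration.

```typescript
import { test, expect } from '@playwright/test';

// Storybook serves each story in isolation at /iframe.html?id=<story-id>.
// 'button--primary' and port 6006 are example values for illustration.
test('Button story renders and is clickable', async ({ page }) => {
  await page.goto('http://localhost:6006/iframe.html?id=button--primary');
  const button = page.getByRole('button');
  await expect(button).toBeVisible();
  await button.click();
});
```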
How we built it
I have glued together several existing codebases and taken extensive notes on opportunities for further development. The code exists on branches of the repo I've linked to. This could all be recreated easily in a day using the scaffolding I've set up.
Please note that you can run most services using either `yarn workspace` or `docker compose up`. (Each branch of the repo has its own specific configuration.)
Challenges we ran into
The main tradeoff was between planning a realistic roadmap for further development of a wide range of features and building out a narrow set of features more completely.
Accomplishments that we're proud of
I feel I have glued together a wide range of tools in a creative way. If this project matured (and fully integrated with Unreal Engine), it would be able to generate a real-time AR/VR environment using your voice and body language as prompts. This is not to toot my own horn; rather, it speaks to the power of the technology and the wonders that lie over the horizon.
What we learned
Unreal Engine's Blueprint API is a great space to play in when it comes to generative AI and LLMs.
Also, it would have been good to work with some teammates!
What's next for 'AIIO
I've taken extensive notes and will continue to iterate on this project. Please see bookmarks.html.
Built With
- amazon-web-services
- chrome
- codesandbox
- docker
- grafana
- graphql
- jwk
- mediapipe
- opentelemetry
- playwright
- polly
- postgresql
- storybook
- voiceflow
- whisper