Inspiration

LLM chat UI / UX has seen only incremental improvements for almost 2 years (e.g. Claude Artifacts, OpenAI GPT-4o Canvas), yet the technology is now feasible enough to support Mixed Reality (e.g. Meta Quest 3, Apple Vision Pro).

What it does

A purely web-based 3D chat UI / UX world that supports multi-modal interaction (text / markdown / code, vision / images, audio, and in the future video) with multiple backend LLMs, for both single users and multi-user group chat, organizing conversations as Chain of Thought, Tree of Thought, and Graph of Thoughts.
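To illustrate how a conversation can go beyond a linear chain, here is a minimal sketch (hypothetical names, not the project's actual code): each message is a node with parent pointers, so one reply can branch into a tree, and a node with two parents merges branches into a graph of thoughts.

```python
from dataclasses import dataclass, field

@dataclass
class ThoughtNode:
    """One chat message; more than one parent turns the tree into a graph."""
    id: int
    role: str          # "user" or "assistant"
    content: str
    parents: list = field(default_factory=list)  # ThoughtNode references

def path_to_root(node):
    """Recover one chain of thought by following first parents to the root."""
    chain = []
    while node is not None:
        chain.append(node.content)
        node = node.parents[0] if node.parents else None
    return list(reversed(chain))

# Chain: root -> a; branching makes a tree (root -> b);
# the merge node c with two parents makes a graph.
root = ThoughtNode(0, "user", "question")
a = ThoughtNode(1, "assistant", "answer A", [root])
b = ThoughtNode(2, "assistant", "answer B", [root])
c = ThoughtNode(3, "user", "follow-up", [a, b])
```

Rendering each node as an object in the 3D scene and each parent link as an edge gives the chain / tree / graph views directly from this one structure.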

How I built it

Three.js (JavaScript / TypeScript) for the 3D front end, and the Mistral API called from a Python back end through the OpenAI-compatible API.
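As a sketch of the backend call, the Mistral API speaks the OpenAI chat-completions wire format, so a request can be built as a plain JSON payload. The model name, the exact image-part schema, and the `MISTRAL_API_KEY` environment variable are assumptions here, not the project's actual code:

```python
import json
import os
import urllib.request

# Mistral exposes an OpenAI-compatible chat-completions endpoint.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_payload(prompt, image_b64=None, model="pixtral-12b-2409"):
    """Text-only prompt, or text plus a base64 image for a vision model.

    The image part below uses the OpenAI-style {"url": ...} shape; the
    exact schema may differ slightly between clients, so treat this as
    a sketch rather than the definitive format.
    """
    if image_b64 is None:
        content = prompt
    else:
        content = [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ]
    return {"model": model, "messages": [{"role": "user", "content": content}]}

def chat(prompt, image_b64=None):
    """POST the payload; requires MISTRAL_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, image_b64)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same payload shape works with the OpenAI Python client by pointing its `base_url` at the Mistral endpoint.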

Challenges I ran into

Basic things like supporting multi-modal inputs (image upload, copy and paste) and getting a static HTML web page to talk to backend Python services.
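The static-page-to-backend bridge can be sketched with nothing but the Python standard library: a JSON POST endpoint the page can `fetch()` against, with a CORS header so a statically served page on a different origin is allowed to call it. This is a minimal sketch under those assumptions, not the project's actual server.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChatHandler(BaseHTTPRequestHandler):
    """Minimal JSON endpoint a static page can fetch() against."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        # Echo back; a real handler would forward body["prompt"] to the LLM.
        reply = {"reply": f"got: {body.get('prompt', '')}"}
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # CORS header so a page served from another origin can call us.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=8000):
    HTTPServer(("127.0.0.1", port), ChatHandler).serve_forever()
```

On the page side, `fetch(url, {method: "POST", body: JSON.stringify({prompt})})` completes the round trip.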

Accomplishments that I'm proud of

A working example of a Mistral / Pixtral API call wired through both the back end and the front end, and a working 3D front end that supports chat interaction.

What I learned

Visual Studio Code full stack debugging with multiple languages.
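One way to get that multi-language debugging is a compound launch configuration that starts the Python and browser debuggers together. The file names, URL, and port below are assumptions for illustration:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python backend",
      "type": "debugpy",
      "request": "launch",
      "program": "${workspaceFolder}/server.py"
    },
    {
      "name": "Browser frontend",
      "type": "chrome",
      "request": "launch",
      "url": "http://localhost:8000"
    }
  ],
  "compounds": [
    {
      "name": "Full stack",
      "configurations": ["Python backend", "Browser frontend"]
    }
  ]
}
```

With this in `.vscode/launch.json`, picking "Full stack" lets breakpoints hit in both the Python service and the Three.js code in the same session.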

What's next for Untitled

Continue improving the code to fully support the multi-modal and multi-user features described above.
