Inspiration
Orbis started from a simple personal frustration. Learning about the world online often feels fragmented, with a map in one tab, an article in another, and a video elsewhere. The sense of discovery gets lost in the process. I wanted to recreate the feeling of wandering, similar to the experience of spinning a physical globe to see where you land.
I drew inspiration from the spatial discovery of tools like Google Earth and Radio Garden, aiming to apply that same approach to history, culture, and science. For this hackathon, my goal was to see how AI could make a digital globe feel more interactive, turning it into something that can actively guide you and converse with you.
What it does
Orbis is an interactive 3D globe designed for exploring global places, historical events, and stories. Users can fly across the map, select locations, and read contextual write-ups about each place.
I integrated Amazon Nova on Bedrock to act as a live in-app guide. Users can have natural conversations with the assistant, and when a specific location is mentioned, the globe automatically moves the camera to that spot, connecting the dialogue directly to the spatial interface.
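A minimal sketch of the bridge between the assistant and the globe: a tool-call payload from the model is validated and translated into a camera command. The payload shape, the default height, and the `CameraCommand` type are illustrative assumptions, not the actual Orbis implementation.

```typescript
// Sketch: translating an assistant tool call into a globe camera command.
// The argument and command shapes here are hypothetical illustrations.

interface FlyToArgs {
  name: string;
  latitude: number;
  longitude: number;
  heightMeters?: number;
}

interface CameraCommand {
  // CesiumJS builds destinations from (longitude, latitude, height) via
  // Cartesian3.fromDegrees, so the command keeps that ordering explicit.
  destination: { longitude: number; latitude: number; height: number };
  label: string;
}

function toCameraCommand(args: FlyToArgs): CameraCommand {
  if (args.latitude < -90 || args.latitude > 90) {
    throw new Error(`invalid latitude: ${args.latitude}`);
  }
  if (args.longitude < -180 || args.longitude > 180) {
    throw new Error(`invalid longitude: ${args.longitude}`);
  }
  return {
    destination: {
      longitude: args.longitude,
      latitude: args.latitude,
      height: args.heightMeters ?? 1_500_000, // assumed default: country-scale view
    },
    label: args.name,
  };
}

// Example: the assistant mentions Kyoto and emits a fly-to tool call.
const cmd = toCameraCommand({ name: "Kyoto", latitude: 35.0116, longitude: 135.7681 });
```

Validating the model's output before moving the camera matters here, since a hallucinated coordinate would otherwise fly the user off the globe.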
I also used Amazon Nova to assist users in creating new landmark entries. By typing a name, the AI generates a description, finds the coordinates, classifies the landmark, and creates a structured entry in the database.
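Since the model's output feeds straight into the database, a validation step between the two is the natural seam. A sketch of what that could look like, where the field names, categories, and fallback behavior are assumptions rather than the actual Orbis schema:

```typescript
// Sketch: validating AI-generated landmark JSON before it reaches the
// database. Field names and categories are illustrative assumptions.

interface LandmarkDraft {
  name: string;
  description: string;
  latitude: number;
  longitude: number;
  category: string;
}

const CATEGORIES = new Set(["monument", "natural", "museum", "historic-site", "other"]);

function parseLandmark(raw: unknown): LandmarkDraft {
  const r = raw as Record<string, unknown>;
  if (typeof r?.name !== "string" || r.name.trim() === "") throw new Error("missing name");
  if (typeof r.description !== "string") throw new Error("missing description");
  if (typeof r.latitude !== "number" || Math.abs(r.latitude) > 90) throw new Error("bad latitude");
  if (typeof r.longitude !== "number" || Math.abs(r.longitude) > 180) throw new Error("bad longitude");
  // Fall back to "other" rather than rejecting an unfamiliar model label.
  const category =
    typeof r.category === "string" && CATEGORIES.has(r.category) ? r.category : "other";
  return {
    name: r.name.trim(),
    description: r.description,
    latitude: r.latitude,
    longitude: r.longitude,
    category,
  };
}
```

Coordinates get hard-rejected while an unknown category degrades gracefully, since a wrong position breaks the map but a loose label only affects filtering.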
Additionally, I implemented a 3D landmark pipeline where Nova Canvas generates a reference image, which is then sent to Tripo3D to create a 3D model viewable on the map. I also included a live news layer that ties headlines to specific geographic coordinates, allowing users to jump directly to the location of a current story.
How I built it
I developed the frontend using Next.js, React, TypeScript, and Tailwind CSS, with Prisma and PostgreSQL for the database. CesiumJS handles the 3D globe rendering and camera movements.
On the backend, Amazon Bedrock is the core of the project. I used the Nova Pro model for the conversational guide and the AI-assisted landmark creation. The assistant goes beyond text responses by using tool-calling to find coordinates and control the globe's camera, integrating the AI directly into the spatial navigation.
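The Bedrock Converse API accepts tool definitions as JSON schemas, which is what lets the model request coordinates and camera moves instead of only emitting text. A hedged sketch of what one such definition might look like for Orbis; the tool name, description, and fields are illustrative guesses:

```typescript
// Sketch: a tool definition in the shape the Bedrock Converse API expects
// (toolSpec with a JSON-schema inputSchema). The specific tool exposed
// here is an assumption about how Orbis wires camera control to Nova.
const flyToTool = {
  toolSpec: {
    name: "fly_to_location",
    description:
      "Move the globe camera to a named place. Call this whenever the " +
      "user asks about a specific location.",
    inputSchema: {
      json: {
        type: "object",
        properties: {
          name: { type: "string", description: "Display name of the place" },
          latitude: { type: "number", minimum: -90, maximum: 90 },
          longitude: { type: "number", minimum: -180, maximum: 180 },
        },
        required: ["name", "latitude", "longitude"],
      },
    },
  },
};
```

At runtime the backend would pass this inside `toolConfig.tools` on a Converse call, then watch the response for a `toolUse` block to dispatch to the globe.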
The assistant is also multimodal. Image uploads are routed through Amazon S3 using presigned URLs and processed by the Nova model on the backend. For audio, I implemented Amazon Polly to give the guide a voice.
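The presigned-URL flow keeps AWS credentials off the client: the browser asks the backend for a one-time upload URL, then PUTs the bytes directly to S3. A sketch of the client-side half, with the fetch function injected so it stays testable; the `/api/uploads/presign` route and its response shape are assumptions, not the actual Orbis API:

```typescript
// Sketch of a presigned-URL image upload. Route and response fields
// (`uploadUrl`, `objectUrl`) are illustrative assumptions.
type Fetch = (
  url: string,
  init?: { method?: string; body?: unknown; headers?: Record<string, string> },
) => Promise<{ ok: boolean; json(): Promise<any> }>;

async function uploadImage(
  fetchFn: Fetch,
  file: { name: string; type: string; bytes: Uint8Array },
): Promise<string> {
  // 1. Ask the backend for a presigned PUT URL scoped to this file.
  const presignRes = await fetchFn("/api/uploads/presign", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ filename: file.name, contentType: file.type }),
  });
  if (!presignRes.ok) throw new Error("presign failed");
  const { uploadUrl, objectUrl } = await presignRes.json();

  // 2. PUT the bytes straight to S3; the client never sees AWS credentials.
  const putRes = await fetchFn(uploadUrl, {
    method: "PUT",
    headers: { "content-type": file.type },
    body: file.bytes,
  });
  if (!putRes.ok) throw new Error("upload failed");

  // 3. Hand the stored object's URL to the backend for the model to process.
  return objectUrl;
}
```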
The 3D landmark generation starts with Nova Canvas creating an initial image, which is stored in S3 and passed to Tripo3D. The application then polls the generation state and renders the model once complete. Overall, AWS handles the intelligence, voice, media storage, and the initial phase of the 3D pipeline.
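The polling step above can be sketched as a small generic loop: the frontend repeatedly checks a status endpoint until the model is ready, failed, or the attempt budget runs out. The status values and intervals are illustrative assumptions about the Tripo3D task lifecycle:

```typescript
// Sketch: polling an async 3D-generation task until completion.
// State names and the check function are illustrative assumptions.
type TaskStatus =
  | { state: "queued" | "running" }
  | { state: "success"; modelUrl: string }
  | { state: "failed"; reason: string };

async function pollUntilReady(
  check: () => Promise<TaskStatus>,
  opts: { intervalMs: number; maxAttempts: number },
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<string> {
  for (let attempt = 0; attempt < opts.maxAttempts; attempt++) {
    const status = await check();
    if (status.state === "success") return status.modelUrl;
    if (status.state === "failed") throw new Error(`generation failed: ${status.reason}`);
    await sleep(opts.intervalMs); // still queued/running: wait and retry
  }
  throw new Error("timed out waiting for 3D model");
}
```

Injecting `sleep` keeps the loop unit-testable and makes it easy to swap in backoff later; the `maxAttempts` cap matters because a stuck generation task would otherwise poll forever.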
Challenges I ran into
One of the main technical hurdles was determining which country the user was currently looking at. I had to build a location boundary detection system so the app could highlight the centered country and display the relevant data in the heads-up display.
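The core test behind this kind of boundary detection is a point-in-polygon check. A sketch of the standard ray-casting (even-odd) rule on a single ring; real country borders are multipolygons with holes and edge cases at the antimeridian, which this deliberately leaves out:

```typescript
// Sketch: ray-casting point-in-polygon test, the basic building block of
// "which country is under the camera?" Real borders need multipolygon
// and antimeridian handling on top of this.
type LonLat = [number, number];

function pointInRing(point: LonLat, ring: LonLat[]): boolean {
  const [x, y] = point;
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    // Does a horizontal ray from `point` cross the edge (j -> i)?
    const crosses =
      yi > y !== yj > y &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Toy square "country" spanning (0,0) to (10,10):
const square: LonLat[] = [[0, 0], [10, 0], [10, 10], [0, 10]];
```

In practice this runs against simplified border geometry (e.g. from OpenStreetMap extracts), with a bounding-box prefilter so only a few candidate countries get the full ring test per camera move.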
Integrating the live news feed with accurate geo-locations also required significant refinement to ensure smooth transitions from a text headline to a specific coordinate on the globe.
The 3D model pipeline presented its own challenges due to the asynchronous steps involved. Coordinating Nova Canvas for image generation, S3 for storage, and Tripo3D for processing, while tracking the state on the frontend, required careful orchestration.
Lastly, making the AI conversation flow smoothly with tool-calling, spatial lookups, voice playback, and camera movements took several iterations to get right.
Accomplishments that I'm proud of
I am glad that Amazon Nova functions as a core part of the application rather than an isolated feature. By using it to drive the guide, build landmarks, control the camera, and initiate the 3D pipeline, the AI feels grounded in the user's workflow.
I am also pleased with how the different components, like the live news, the 3D models, and the AI guide, work together to support the central concept of exploring the world. The spatial nature of the AI, where it guides the user visually to the locations being discussed, is the outcome I am most satisfied with.
What I learned
Building Orbis reinforced that AI is more effective when given spatial context. The assistant works well here because its outputs are directly tied to coordinates and a visual interface.
I also learned a great deal about orchestrating multiple AWS services. Combining Bedrock for intelligence, Nova Canvas for image generation, Polly for voice, and S3 for media storage allowed me to see how these tools complement each other in a full-stack application.
Technically, this project improved my understanding of tool-calling, multimodal AI, async media pipelines, and geospatial application design.
What's next for Orbis
My next focus is to populate the globe with more landmarks, stories, and 3D models to make the environment denser and more informative.
I also plan to integrate live OSINT data feeds, such as real-time tracking for satellites, planes, and ships, to make the map serve as a live dashboard.
On the visual side, I am looking into upgrading the rendering with Google's photorealistic 3D tiles. For the AI, I want to continue refining Nova's contextual awareness so it functions even more seamlessly as a digital companion for map exploration.
Built With
- bedrock
- canvas
- cesiumjs
- clerk
- css
- html
- javascript
- next.js-app-router
- next.js-route-handlers
- nova
- open-meteo-forecast
- openstreetmap
- playwright
- polly
- postgresql
- prisma
- react
- s3
- sql
- tailwind-css
- tripo3d
- typescript