Inspiration
As disasters develop, it is hard for decision makers to separate truth from fiction and understand what is happening live on the ground. However, individuals near the disaster zone carry smartphones capable of recording and publishing information. We wanted to develop a tool that could bridge this gap.
What it does
Individuals can submit image-text pairs, which are analyzed using Google ML APIs. Submissions are automatically clustered into disaster events based on location, time, and image content. The images in an event are combined into a 3D model using photogrammetry, which can be viewed on a VR headset.
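As a rough illustration of the clustering idea (the field names `location`, `timestamp`, and `labels`, and the distance/time thresholds, are hypothetical, not our exact production logic), two submissions can be grouped into the same disaster event when they are close in space and time and share at least one detected label:

```python
# Minimal sketch of the event-clustering heuristic; names and thresholds are assumptions.
from datetime import timedelta
from math import radians, sin, cos, asin, sqrt

def _distance_km(a, b):
    """Great-circle distance between two (lat, lon) pairs in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def same_event(sub_a, sub_b, max_km=5.0, max_hours=6):
    """Same disaster event if nearby, close in time, and sharing an image label."""
    close_in_space = _distance_km(sub_a["location"], sub_b["location"]) <= max_km
    close_in_time = abs(sub_a["timestamp"] - sub_b["timestamp"]) <= timedelta(hours=max_hours)
    shared_labels = bool(set(sub_a["labels"]) & set(sub_b["labels"]))
    return close_in_space and close_in_time and shared_labels
```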
How we built it
Several Google Cloud Functions trigger automatically when a file is uploaded. They analyze the files and record the results in a Firestore document. Another Cloud Function, triggered by HTTP requests, scans the Firestore collection and gathers related images. The 3D models were created from stills taken of UF landmarks and combined in Meshroom. The 3D model is loaded into Unity to create an Oculus scene, which can be viewed on an Oculus Rift headset. We edited Unity SDK files to change the controls, among other things.
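A minimal sketch of what one of the storage-triggered functions looks like (1st-gen background function signature; the `submissions` collection name and stored fields are illustrative assumptions rather than our exact schema):

```python
# Sketch of a storage-triggered Cloud Function: label the uploaded image with
# the Vision API and record the analysis in Firestore.
from google.cloud import firestore, vision

vision_client = vision.ImageAnnotatorClient()
db = firestore.Client()

def analyze_upload(event, context):
    """Runs on each file upload: labels the image and records the result."""
    gcs_uri = f"gs://{event['bucket']}/{event['name']}"
    image = vision.Image(source=vision.ImageSource(image_uri=gcs_uri))

    # Ask the Vision API for content labels (e.g. "flood", "fire", "rubble").
    labels = vision_client.label_detection(image=image).label_annotations

    # Record the analysis so the HTTP-triggered function can later gather
    # related images into a single disaster event.
    db.collection("submissions").document(event["name"]).set({
        "gcs_uri": gcs_uri,
        "labels": [{"description": l.description, "score": l.score} for l in labels],
        "uploaded_at": firestore.SERVER_TIMESTAMP,
    })
```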
Challenges we ran into
We were inexperienced with a number of technologies: Google Cloud, photogrammetry (stitching images together into a 3D model), and the Oculus Rift.
Accomplishments that we're proud of
The 3D models, though not perfect, strongly resemble their real-world counterparts. The Cloud Functions daisy-chain smoothly and finish in about one second.
What we learned
We learned a lot about Google Cloud permissions (make everyone an owner!).
What's next for DizViz
Next steps are:
- moving the photogrammetry and 3D model creation to the cloud
- increasing the sophistication of our ML pipeline
Built With
- blender
- google-cloud
- ml
- oculus
- python
- serverless
- unity
- vr