Webapp to navigate through buildings
Three of our team members do not attend UTD and, as such, had difficulty finding our last team member inside the JSOM building. They wandered the building for quite some time, confused about exactly where room 12.214 was: the spot where the fourth member had set up our hacking station. They assumed it was on the twelfth floor, since the number before the decimal usually indicates which floor a room is on. We realized these problems could be solved, or at least mitigated, by an application: a path-finding app that locates rooms in buildings and displays directions to them.
What it does
The webapp takes in a floor plan image and processes it into an interactive interface. On a simple display, any user can click their current location and their destination on the floor plan. Using the processed data, the app generates an optimal path between the two.
How we built it
- Remove color. This can be done by passing the image through a quick grayscale conversion function.
- Sharpen image. This is done through an image convolution filter with a sharpening kernel.
- Resize image. This is done to speed up the text recognition algorithm.
- Remove text. This is done by using the optical character recognition package tesseract.js to identify where text lies within the image, then drawing white rectangles over those regions.
- Add weight to cells. For our path-finding algorithm, we use A* on the smaller version of the image. Our implementation prefers cells that are furthest from walls, avoiding narrow hallways and doorways where possible.
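The first two preprocessing steps above can be sketched in plain JavaScript. This is a minimal illustration rather than our exact code: it operates on pixel arrays like those returned by a canvas's `getImageData`, and the sharpening kernel shown is the common 3x3 one, which may differ from the exact matrix we used.

```javascript
// Convert a flat RGBA pixel array to grayscale in place.
// `pixels` is a Uint8ClampedArray like the one in canvas ImageData.
function toGrayscale(pixels) {
  for (let i = 0; i < pixels.length; i += 4) {
    // Standard luminance weights for R, G, B.
    const gray = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    pixels[i] = pixels[i + 1] = pixels[i + 2] = gray;
  }
  return pixels;
}

// Apply a 3x3 convolution kernel to a single-channel image
// stored row-major in `src` with the given width and height.
function convolve3x3(src, width, height, kernel) {
  const out = new Float32Array(src.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0;
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          // Clamp at the borders instead of wrapping around.
          const sy = Math.min(height - 1, Math.max(0, y + ky));
          const sx = Math.min(width - 1, Math.max(0, x + kx));
          sum += src[sy * width + sx] * kernel[(ky + 1) * 3 + (kx + 1)];
        }
      }
      out[y * width + x] = sum;
    }
  }
  return out;
}

// A common sharpening kernel (an assumption; our exact matrix may differ).
const SHARPEN = [0, -1, 0, -1, 5, -1, 0, -1, 0];
```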
Challenges we ran into
Deciding on a proper development environment was quite a challenge for us. The initial idea was well beyond our scope, so we had to pare it down. We started working in Unity for easy porting to web, iOS, and Android, but we quickly realized that Unity was too complex for what we wanted to do. After much debate, we decided to switch frameworks to Angular CLI and build our application as a webapp.
We tested multiple algorithms to process the image. First, we tried vectorizing the image, but that left too many artifacts. Next, we wrote a function that ranks every pixel by how far it is from a wall. Based on those ranks, we theorized that a Voronoi expansion from all local maxima would yield a pseudo-accurate partition into individual rooms. However, this approach detected too many extra rooms. An example of the second algorithm's output is shown below.
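The room-detection idea behind the second algorithm can be illustrated with a multi-source flood fill: each local maximum of the wall-distance ranking seeds a region, and every free cell takes the label of whichever seed's wavefront reaches it first. This is a sketch of the concept under an assumed grid encoding (1 = wall, 0 = free), not our original code.

```javascript
// Voronoi-style expansion: given seed points (local maxima of the
// wall-distance ranking), label every free cell with the seed whose
// BFS wavefront reaches it first.
function voronoiExpand(grid, seeds) {
  const h = grid.length, w = grid[0].length;
  const label = grid.map(row => row.map(() => -1));
  const queue = [];
  seeds.forEach(([x, y], i) => {
    label[y][x] = i;        // each seed starts its own region
    queue.push([x, y]);
  });
  // Breadth-first expansion from all seeds at once.
  for (let i = 0; i < queue.length; i++) {
    const [x, y] = queue[i];
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      if (nx >= 0 && ny >= 0 && nx < w && ny < h &&
          grid[ny][nx] === 0 && label[ny][nx] === -1) {
        label[ny][nx] = label[y][x]; // inherit the nearest seed's label
        queue.push([nx, ny]);
      }
    }
  }
  return label; // -1 remains on walls and unreachable cells
}
```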
For our third algorithm, we tried Gaussian-blurring the walls instead of manually ranking the cells. This turned out slightly better, but still was not viable.
For our fourth and final algorithm, we discarded the concept of rooms entirely and relied on the cells themselves. Every cell was given a weight similar to the second algorithm's ranking, and those weights were used directly in the A* cost. This approach worked best.
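The final approach can be sketched in two stages: a multi-source BFS that measures every free cell's distance to the nearest wall, and an A* search whose step cost grows as cells get closer to walls, so paths hug open space. This is a simplified sketch under an assumed grid encoding (1 = wall, 0 = free); the exact cost formula in our app may differ.

```javascript
// Multi-source BFS: each free cell's distance to the nearest wall.
function distanceToWalls(grid) {
  const h = grid.length, w = grid[0].length;
  const dist = grid.map(row => row.map(c => (c === 1 ? 0 : Infinity)));
  const queue = [];
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++)
      if (grid[y][x] === 1) queue.push([x, y]);
  for (let i = 0; i < queue.length; i++) {
    const [x, y] = queue[i];
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      if (nx >= 0 && ny >= 0 && nx < w && ny < h && dist[ny][nx] === Infinity) {
        dist[ny][nx] = dist[y][x] + 1;
        queue.push([nx, ny]);
      }
    }
  }
  return dist;
}

// A* that penalizes cells close to walls, so the path prefers
// the middle of corridors when possible.
function findPath(grid, start, goal) {
  const h = grid.length, w = grid[0].length;
  const dist = distanceToWalls(grid);
  const maxDist = Math.max(...dist.flat().filter(Number.isFinite));
  const key = ([x, y]) => y * w + x;
  const heuristic = ([x, y]) => Math.abs(x - goal[0]) + Math.abs(y - goal[1]);
  const gScore = new Map([[key(start), 0]]);
  const cameFrom = new Map();
  // Simple array-based open set; a binary heap would be faster.
  const open = [[heuristic(start), start]];
  while (open.length) {
    open.sort((a, b) => a[0] - b[0]);
    const [, cur] = open.shift();
    if (cur[0] === goal[0] && cur[1] === goal[1]) {
      const path = [cur];
      let k = key(cur);
      while (cameFrom.has(k)) {
        const prev = cameFrom.get(k);
        path.unshift(prev);
        k = key(prev);
      }
      return path;
    }
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const next = [cur[0] + dx, cur[1] + dy];
      const [nx, ny] = next;
      if (nx < 0 || ny < 0 || nx >= w || ny >= h || grid[ny][nx] === 1) continue;
      // Step cost grows near walls (an assumed formula, not our exact one).
      const cost = 1 + (maxDist - dist[ny][nx]);
      const g = gScore.get(key(cur)) + cost;
      if (g < (gScore.get(key(next)) ?? Infinity)) {
        gScore.set(key(next), g);
        cameFrom.set(key(next), cur);
        open.push([g + heuristic(next), next]);
      }
    }
  }
  return null; // no path exists
}
```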
Several environment issues came about during our development cycle. One of our team members accidentally broke the core dependencies for several of the necessary libraries and packages, then spent several hours reinstalling and repairing various versions of Visual Studio. Next time, we hope this team member manages his dependencies better.
Accomplishments that we're proud of
Learning about and implementing various algorithms is something we are all incredibly proud of. From image convolution filters to the Ramer–Douglas–Peucker algorithm, we are amazed that we could implement everything in only 24 hours.
Working on such an extensive project, with so many elements, for the first time was a thrillingly difficult challenge, and managing and putting together each aspect was deeply rewarding, especially when the project came together at the end.
What we learned
We all learned how to build a webapp using Angular 2 and discovered the great challenge of group projects: merge conflicts. We learned about the Ramer–Douglas–Peucker algorithm and how to implement it. We also discovered how to do image processing and the difficulties involved in such an undertaking.
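The Ramer–Douglas–Peucker algorithm simplifies a polyline by recursively keeping only the points that deviate more than a tolerance epsilon from the line between the endpoints. A minimal sketch of the idea (our actual implementation may differ):

```javascript
// Perpendicular distance from point p to the line through a and b.
function perpDist([px, py], [ax, ay], [bx, by]) {
  const dx = bx - ax, dy = by - ay;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(px - ax, py - ay);
  return Math.abs(dy * px - dx * py + bx * ay - by * ax) / len;
}

// Ramer–Douglas–Peucker: simplify `points` (array of [x, y]) so that
// no removed point is farther than `epsilon` from the simplified line.
function rdp(points, epsilon) {
  if (points.length < 3) return points.slice();
  let maxDist = 0, index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpDist(points[i], points[0], points[points.length - 1]);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  // Everything is close enough: keep only the endpoints.
  if (maxDist <= epsilon) return [points[0], points[points.length - 1]];
  // Otherwise keep the farthest point and recurse on both halves.
  const left = rdp(points.slice(0, index + 1), epsilon);
  const right = rdp(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right);
}
```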
What's next for FloorNav
Our original idea still stands. We hope to one day develop an AR system that shows the user's location on the map and provides live directions, much like a GPS or Google Maps. This functionality was entirely out of our project scope for this event, as we lacked both the hardware and the experience to implement AR.
Currently, images are uploaded and then analyzed, but we want to develop a feature that lets a user take a photograph directly from the application and analyze that. This would make the application more practical and easier to use.
This initial version of FloorNav was built for the web because of the ease of development. We hope to port the application to iOS and Android for much more practical use.