A thing that does cool stuff
By Zach Trefler, Atif Mahmud, Alex Kitaev, Andy Bao, and Jack Bishop
We discovered OpenMesh and CloudTrax, a hardware-software combination for building intelligent mesh networks with integrated presence analytics. We realized that this technology could power location-aware systems that enhance how users experience the real world.
What it does
Wifu is a location-aware system built on top of a mesh network. When a user connects to the mesh network, the system knows which access point they are connected to. This enables many different applications, such as location-specific notifications or messages. For instance, Wifu can be used to create an augmented-reality museum: by setting up Wifu in an existing museum, the museum gains a layer of user interactivity with its exhibits. At its core, Wifu monitors the positions of Wi-Fi-connected users relative to a mesh network. By identifying a user's position in a museum, we can tell what information is most relevant to their experience, then deliver that information in real time to their wireless device.
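The core idea above can be sketched in a few lines. This is a minimal, hypothetical illustration in TypeScript: the access-point MAC addresses and exhibit entries are made-up examples, not Wifu's real data.

```typescript
// Hypothetical sketch: map the access point a user is connected to
// onto nearby exhibit content. All MACs and exhibits are invented.
interface Exhibit {
  title: string;
  description: string;
}

// Assumed mapping from access-point MAC address to the nearest exhibit.
const exhibitsByAp: Record<string, Exhibit> = {
  "ac:86:74:00:00:01": {
    title: "Bronze Astrolabe",
    description: "A 14th-century navigation instrument.",
  },
  "ac:86:74:00:00:02": {
    title: "Steam Engine Model",
    description: "A working scale model from 1890.",
  },
};

// Decide what to push to a user's device based on their current AP.
function contentForUser(apMac: string): Exhibit | undefined {
  return exhibitsByAp[apMac.toLowerCase()];
}

console.log(contentForUser("AC:86:74:00:00:01")?.title); // "Bronze Astrolabe"
```

Everything else in the system (notifications, information pages) hangs off a lookup like this one.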
The system reaches its full potential with the image scanner. Museum coordinators give Wifu a list of the artworks and artifacts in the museum, along with corresponding images. Users then take a picture of an artifact, and the app identifies it. A page of information about that artifact automatically comes up, extending the real-world description with history, interesting facts, and relevant links.
The same technology, however, can be used for much more. User-location-aware decision making has a plethora of applications in education, healthcare, industry, and many other fields. Our technology could create intelligent schools, where announcements and notifications are shared with students based on where they are in the building. It could improve the efficiency of healthcare systems by making hospitals and clinics easier to navigate. It could support intelligent manufacturing, where wireless-capable robots coordinate and synchronize much more easily, both with each other and with humans. A key point to remember is that our technology is not simply a location-finding system; it is built on top of a scalable, robust wireless mesh network that provides reliable, fast wireless connectivity.
How we built it
We created a central controller server (using Sinatra) that receives and processes data on users' locations from CloudTrax's REST API. The controller processes movement events using generalized rules, which let it make decisions based on user movement. In turn, the controller exposes its own REST API, which lets clients query which access point they are connected to.
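The movement-rule idea can be sketched as follows. This is not our Sinatra code; it is an illustrative TypeScript sketch, and the event shape and callback are assumptions made for the example.

```typescript
// Illustrative sketch (not the actual Sinatra controller): turn a
// stream of CloudTrax-style presence events into movement decisions.
// The event fields below are assumed for illustration.
interface ProbeEvent {
  clientMac: string;
  apMac: string;
  timestamp: number; // seconds since epoch
}

// Controller state: the last event seen for each client.
type LastSeen = Map<string, ProbeEvent>;

// A generalized rule fires when a user moves from one AP to another.
function processEvent(
  state: LastSeen,
  event: ProbeEvent,
  onMove: (clientMac: string, fromAp: string, toAp: string) => void
): void {
  const prev = state.get(event.clientMac);
  if (prev && prev.apMac !== event.apMac) {
    onMove(event.clientMac, prev.apMac, event.apMac);
  }
  state.set(event.clientMac, event);
}
```

In the real system, the `onMove` hook is where location-specific notifications get triggered.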
Images are loaded from the user's camera via the front end, which bundles them into a POST request and sends them to a trained machine-vision model. The model returns a JSON structure with the relevant classifications, the most confident of which corresponds to the artifact's name. These results are used to automatically download and display information pages.
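The client-side step of picking the artifact from the model's response can be sketched like this. The response shape here is an assumption for illustration, not the model's real schema.

```typescript
// Hypothetical sketch: given the classifications in the model's JSON
// response, pick the most confident label as the artifact name.
// The `Classification` shape is assumed, not the real API schema.
interface Classification {
  label: string;
  confidence: number; // 0..1
}

function topArtifact(results: Classification[]): string | null {
  if (results.length === 0) return null;
  // The highest-confidence classification names the artifact.
  return results.reduce((best, c) => (c.confidence > best.confidence ? c : best))
    .label;
}

console.log(topArtifact([
  { label: "vase", confidence: 0.61 },
  { label: "amphora", confidence: 0.92 },
])); // "amphora"
```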
Challenges we ran into
One challenge we encountered was our interface with the CloudTrax API, where we had to handle the inherent unpredictability of wireless connections and the myriad scenarios that could arise. For instance, a user could be heard by multiple access points at the same time. Additionally, the API gave us no reliable way to determine whether a user had disconnected from the network. Nonetheless, we persevered and developed a flexible system that handles these unpredictable events gracefully.
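The defensive logic we needed boils down to two rules, sketched below with assumed field names and an assumed timeout: attribute a client heard by several APs to the strongest signal, and treat a client as disconnected once it has been silent for too long, since the API never reports disconnects explicitly.

```typescript
// Sketch of the defensive presence logic (names and threshold are
// assumptions, not CloudTrax's actual schema).
interface ApReport {
  apMac: string;
  rssi: number;     // signal strength in dBm; closer to 0 is stronger
  lastSeen: number; // seconds since epoch
}

// Assumed grace period after which a silent client counts as gone.
const DISCONNECT_TIMEOUT = 120;

function resolveLocation(reports: ApReport[], now: number): string | null {
  // Drop stale reports: no recent signal means presumed disconnected.
  const fresh = reports.filter((r) => now - r.lastSeen <= DISCONNECT_TIMEOUT);
  if (fresh.length === 0) return null;
  // Among simultaneous sightings, trust the strongest signal.
  return fresh.reduce((best, r) => (r.rssi > best.rssi ? r : best)).apMac;
}
```

Timeout-based expiry is a common workaround when an upstream API only reports presence, never absence.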
We also ran into the problem that our training set was far too small to produce a generalizable image classifier. While the classifier was built and trained correctly, the resulting model was clearly overfit: we were only able to take a couple of hundred pictures, and image classifiers work best with many thousands or even millions of images. To get around this, we integrated Google's machine-vision API to label images, without requiring the user to load external pages.
Yet another challenge, and perhaps the greatest one that faced us, was the dearth of caffeinated beverages that plagued us all. Without this potent elixir, our team’s productivity waned as the night went on. Nonetheless, we endured, drawing on reserves of stamina we never knew we had.
Accomplishments that we're proud of
We are proud of successfully navigating a difficult API environment and making a system capable of handling various unpredictable events.
We are proud of making several completely unrelated frameworks and systems work together as one cohesive unit.
We are proud of overcoming our disagreements with each other and working together as one strong team, so much stronger than the sum of its parts.
And finally, we are proud of surviving almost twenty-four hours of coding, with only a very limited supply of caffeinated beverages.
What we learned
We learned quite a lot about many different technologies: OpenMesh, CloudTrax, Sinatra, REST, Angular, Firebase Cloud Functions, Ionic, TensorFlow.js, TypeScript, et cetera.
However, JamHacks taught us things of much greater importance. We learned far more than we could imagine about the value of collaboration, teamwork, and communication; without those, we would have gotten nowhere. We learned about the importance of dedication, commitment, and perseverance, which are absolutely indispensable for success.
And finally, we learned the immeasurable importance of checking whether your customized sorting algorithm sorts in ascending or descending order. (I still can't believe we wasted half an hour on that. --Alex)
What's next for Wifu
The many possible use cases for Wifu’s technology mean that our concept definitely has a strong future. From education to healthcare to industry, the platform we developed will never find a shortage of applications.