Our team spent half an hour interviewing T-Mobile retail associates to find a detailed list of problems they faced on a daily basis. In addition, we swung by the T-Mobile store in Tech Square to see if there were any hardware improvements we could make within a hackathon timeframe. We filtered this list of improvements down to a few features that we could then implement given our time constraints.
What it does
We built two main features around helping retail associates become "super-reps" by augmenting their understanding of the customers they interact with. The first lets store employees see aggregated customer-review data for their own store and other T-Mobile stores in the area. This data is analyzed by sentiment, topic, and overall takeaways so that a clear picture emerges from the raw reviews. A T-Mobile store owner or employee can then see how customers feel about their store relative to similar stores, giving them an accurate gauge of their performance along with concrete areas for improvement.
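The aggregation step can be sketched in plain Python. This is a minimal illustration, not our production code: `summarize_reviews` and the sample scores are hypothetical, and in the app the per-review sentiment scores came from Azure's sentiment analysis rather than being hard-coded.

```python
from collections import defaultdict

def summarize_reviews(reviews):
    """Average per-store sentiment (0.0 = negative, 1.0 = positive) and the
    area-wide average, so one store can be compared against its neighbors.

    `reviews` is a list of (store_id, sentiment_score) pairs."""
    by_store = defaultdict(list)
    for store_id, score in reviews:
        by_store[store_id].append(score)
    per_store = {s: sum(v) / len(v) for s, v in by_store.items()}
    area_avg = sum(per_store.values()) / len(per_store)
    return per_store, area_avg

# Toy data standing in for Azure-scored reviews.
reviews = [("tech_square", 0.9), ("tech_square", 0.4),
           ("midtown", 0.7), ("midtown", 0.8)]
per_store, area_avg = summarize_reviews(reviews)
```

Comparing `per_store[store]` against `area_avg` is what lets the dashboard flag stores that are under- or over-performing their area.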
The second main feature helps minimize fruitless downtime in the store. When we spoke to the retail associates, they mentioned that it was particularly frustrating when customers would schedule appointments to work through an issue and then arrive late. The associate could not commit to other work while waiting, because the customer might show up at any moment. To combat this, we used Azure Maps to locate the customer and estimate their arrival time at the store; if the customer was going to be 10 minutes late, we notified the associate that they had a full 10 minutes before they needed to attend to that customer.
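The notification logic itself is simple once an ETA is available. A minimal sketch, assuming the ETA has already been computed (in the app it came from an Azure Maps route estimate; `minutes_of_slack` is a hypothetical helper, not part of the Maps API):

```python
from datetime import datetime

def minutes_of_slack(appointment_time, eta):
    """Minutes the associate can safely spend on other work.

    If the customer's estimated arrival (eta) is after the scheduled
    appointment, the difference is free time for the associate; if the
    customer is on time or early, there is no slack."""
    delay = (eta - appointment_time).total_seconds() / 60
    return max(0, round(delay))

appt = datetime(2023, 10, 21, 14, 0)
late_eta = datetime(2023, 10, 21, 14, 10)   # customer running 10 min late
early_eta = datetime(2023, 10, 21, 13, 55)  # customer arriving early
```

A notification would then fire whenever the slack crosses some threshold, telling the associate exactly how long they have.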
How we built it
We built these features with Azure's natural language processing libraries, along with Azure Maps and assorted Python libraries. The results are presented through a Flask app, with several supporting libraries handling organizational and technical details. Finally, we hosted the Flask app on Azure for all to view.
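The serving side can be sketched as a tiny Flask app. This is an illustrative shape only: the route path and the hard-coded summary values are hypothetical, and the real app pulled its numbers from the Azure-processed review data.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/store/<store_id>/summary")
def store_summary(store_id):
    # In the real app these values would be looked up from the
    # aggregated, Azure-analyzed review data for this store.
    return jsonify({
        "store": store_id,
        "avg_sentiment": 0.72,
        "top_topics": ["billing", "upgrades"],
    })
```

The front-end dashboard would then fetch this JSON and render the sentiment and topic visualizations.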
Challenges we ran into
When we first tried out natural language processing, all the resources we found online relied on a library called spaCy. We could not get spaCy to run with our Python configuration, so we had to find a workaround for every function that required it. Additionally, this was our first time using the Azure Maps API (or any maps API, for that matter), so we spent a lot of time learning how best to display maps with the highest possible information density. In general, though, the challenges we faced were the same kinds of issues one runs into in industry or on class projects; it was good to see that the debugging skills we learned in class were so directly applicable.
Accomplishments that we're proud of
We're quite happy with the analysis we were able to generate from our scraped data; we spent a long time cleaning the data and finding the right visualizations, and we feel it turned out well. Furthermore, we got to explore a lot of really cool natural language processing APIs, and learned about niche NLP techniques for topic modeling like LDA. We wish we could have fit in more features and complexity, but for the time we had we're very happy with our work. Finally, we were able to spend some time creating a clean UI; usually our hackathon projects are much more focused on the back end, but we wanted to take the time to display our work intuitively.
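For readers unfamiliar with LDA (latent Dirichlet allocation), a minimal topic-modeling sketch using scikit-learn is below. The toy reviews and the choice of two topics are illustrative stand-ins for our scraped corpus, not our actual data or settings.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy reviews standing in for scraped store reviews.
docs = [
    "slow service long wait in line",
    "great phone upgrade deal",
    "wait time was long, line out the door",
    "helpful staff found me a great deal",
]

# Bag-of-words counts, with English stop words removed.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit LDA with two topics; each row of doc_topics is that document's
# probability distribution over the topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)
```

Inspecting `lda.components_` against `vectorizer.get_feature_names_out()` then reveals the top words per topic, which is how topic labels like "wait times" or "deals" would emerge from raw reviews.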
What we learned
We learned a ton about natural language processing, as mentioned above. In addition, we learned a good amount about application organization from one of our teammates, and got to explore much of Azure's cloud computing, machine learning, and maps functionality. We've all definitely taken away new knowledge from this event - thank you for hosting it!
What's next for T-Mobile Super-Rep Insights
The next feature we wanted to add was a deeper analysis of news articles written about specific stores, T-Mobile, or related news events. Furthermore, we'd need more time to flesh out an intelligent recommendation system for how to act on the feedback generated by our analysis. Finally, our stretch goal was to create an auto-diagnoser for common problems customers face with their hardware or software, similar to the automatic troubleshooter on Windows.