Inspiration

The inspiration for this project stemmed from our own experience as customers and as students. On the consumer side, we believe that providing feedback is essential for building meaningful relationships with service providers like T-Mobile. From our research, we observed that T-Mobile lacks a robust system for collecting both negative and positive feedback, and we believe this disconnect leads to customer dissatisfaction through a lack of direct action. We see the same dissatisfaction as students with our university's service providers, such as the UTD parking system, room booking, and food courts.

It was equally important to us that a feedback system makes users feel their input is being heard and actively worked on. Studies have shown that, at least with university services, when students' feedback is received but never acknowledged, satisfaction actually drops, because students feel their feedback meant nothing. This matters especially when implementing a solution can take several months or more, so reassuring the user was a core goal of our project. We drew inspiration from existing features on other service providers' websites, such as Amazon's AI-powered review summaries, but we aimed to build a product that streamlines communication between the user and the service provider. Another main focus was delivering actionable insights for developers, enabling teams to efficiently interpret and act on feedback through a clear, data-driven solution. We also realized that our solution for T-Mobile's track could be expanded into other areas of our lives.

What it does

Our product allows users to submit direct, quantifiable feedback on any T-Mobile service, including billing, customer service, and network stability. Each submission becomes a ticket that is sent to developers for review, and users can track their ticket's progress through the review process over time, which prevents dissatisfaction by ensuring users know their feedback is being heard. They can also use the AI chatbot for any remaining questions they might have.

On the developer (or administrator) side, a dashboard provides several tools to help identify issues and build effective solutions. A Customer Happiness Index (CHI) is calculated from feedback forms and online discussions about T-Mobile services, allowing developers to track overall user satisfaction in real time. An Active Alerts tab highlights urgent user feedback or critical issues that require immediate attention, ensuring rapid response and continuous service improvement. The dashboard also visualizes CHI data in real time, displaying machine learning confidence and stability trends. Beneath the chart, developers can access an AI assistant that recommends ways to resolve active alerts, and a direct feedback section lets developers review user comments so issues are addressed quickly. Finally, a social stream feedback system uses web scraping to pull in live social media discussions about T-Mobile services, giving developers valuable insight into how specific features or updates are being received by the public.

How we built it

We built MagentaMind as a full-stack system that combines a Spring Boot backend with a Next.js frontend. The backend collects feedback from the feedback form and from social media, stores it in a PostgreSQL database, and processes it through keyword-based sentiment analysis to calculate an overall CHI score. Using WebSockets, the backend continuously streams live updates to the frontend dashboard, where developers can view CHI metrics, sentiment trends, and active alerts. The Next.js frontend features real-time visualizations, an AI analysis tool powered by the OpenAI API, and a social media stream.
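As a rough illustration of the keyword-based sentiment pass, a scorer can count positive and negative keyword hits per feedback item and map the average sentiment onto a 0-100 index. The keyword lists and the 0-100 scale below are illustrative assumptions, not MagentaMind's actual configuration:

```java
import java.util.List;
import java.util.Set;

// Minimal sketch of keyword-based sentiment scoring feeding a CHI value.
// The keyword sets and scaling are hypothetical stand-ins.
public class ChiScorer {
    private static final Set<String> POSITIVE = Set.of("great", "fast", "love", "reliable", "helpful");
    private static final Set<String> NEGATIVE = Set.of("slow", "outage", "dropped", "overcharged", "rude");

    // Score one feedback item in [-1, 1] from keyword hits.
    public static double sentiment(String text) {
        int pos = 0, neg = 0;
        for (String token : text.toLowerCase().split("\\W+")) {
            if (POSITIVE.contains(token)) pos++;
            if (NEGATIVE.contains(token)) neg++;
        }
        int total = pos + neg;
        return total == 0 ? 0.0 : (double) (pos - neg) / total;
    }

    // Map the average sentiment across all items onto a 0-100 CHI scale.
    public static double chi(List<String> items) {
        double avg = items.stream().mapToDouble(ChiScorer::sentiment).average().orElse(0.0);
        return (avg + 1.0) * 50.0;
    }
}
```

In the running system, a score like this would be recomputed as submissions arrive and the updated value pushed to the dashboard over the WebSocket connection.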

Challenges we ran into

One challenge we ran into was figuring out how to quantify user happiness. The solution had to be both meaningful for developers and accurate to overall user sentiment. We also wanted the data to show general satisfaction while still ensuring that individual users with low scores or negative experiences had their concerns met. To address this, we implemented the Customer Happiness Index (CHI), which combines direct feedback with online discourse, alongside a separate critical alert system for users with urgent feedback. Separating the two gives developers two distinct statistics that provide actionable insight into user sentiment.

Another challenge we faced was connecting with Supabase. Setting up a secure database and synchronizing our frontend and backend became a tedious process that took a lot of patience and troubleshooting. We also encountered issues with real-time updates and data consistency, particularly when handling simultaneous submissions. Finally, we struggled to get our graph visualizations to accurately represent CHI trends: our initial plan was a candlestick chart, but after several unsuccessful attempts to make it work effectively with live data, we ultimately switched to a line graph, which provided a clearer and more consistent representation of trends over time.
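The split between an aggregate index and per-user alerts could be sketched as follows. The 70/30 weighting between direct feedback and online discourse, and the alert threshold, are illustrative assumptions rather than the values MagentaMind uses:

```java
// Sketch of the two separate statistics described above: a blended CHI
// for aggregate trends, and a per-submission critical-alert check so a
// single unhappy user is not drowned out by a healthy average.
// Weights and threshold are hypothetical.
public class ChiCombiner {
    static final double FEEDBACK_WEIGHT = 0.7;
    static final double SOCIAL_WEIGHT = 0.3;
    static final double ALERT_THRESHOLD = 20.0; // scores (0-100) below this fire an alert

    // Blend direct-feedback CHI with social-discourse CHI.
    public static double combinedChi(double feedbackChi, double socialChi) {
        return FEEDBACK_WEIGHT * feedbackChi + SOCIAL_WEIGHT * socialChi;
    }

    // Evaluated per submission, independently of the aggregate CHI.
    public static boolean isCriticalAlert(double individualScore) {
        return individualScore < ALERT_THRESHOLD;
    }
}
```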

Accomplishments that we're proud of

One of the accomplishments we are most proud of is our integration of a linear regression algorithm that processes different inputs to produce the Customer Happiness Index. We are also proud of seamlessly integrating the frontend with the backend and creating a modern, clean design. Something we worked especially hard on was connecting the customer side with the admin side so that live feedback appears on the admin side as reviews come in; successfully achieving that made us very proud as well.
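For reference, the kind of regression involved can be as simple as a one-variable least-squares fit relating an input signal (say, average sentiment) to a happiness score. This is a generic sketch, not MagentaMind's actual model or features:

```java
// Ordinary least-squares fit of y = slope * x + intercept for a single
// predictor. Inputs and the fitted relationship are illustrative.
public class LinearRegression {
    public final double slope;
    public final double intercept;

    public LinearRegression(double[] x, double[] y) {
        double meanX = 0, meanY = 0;
        for (int i = 0; i < x.length; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= x.length;
        meanY /= y.length;
        // Closed-form least-squares estimates for one predictor.
        double num = 0, den = 0;
        for (int i = 0; i < x.length; i++) {
            num += (x[i] - meanX) * (y[i] - meanY);
            den += (x[i] - meanX) * (x[i] - meanX);
        }
        slope = num / den;
        intercept = meanY - slope * meanX;
    }

    public double predict(double x) {
        return slope * x + intercept;
    }
}
```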

What we learned

We definitely learned organizational skills: over the course of HackUTD, we realized that the places where we experienced the most errors or struggle were where we had not planned in enough detail. By taking a step back and planning things out in detail, we were able to resolve issues swiftly and better avoid them in future features. This is an important skill we will take away. We also gained much more experience using Supabase, a tool most of us had little to no experience with before HackUTD, but now feel confident using.

What's next for MagentaMind

Over the course of HackUTD, we worked on integrating live web scraping from various forums to find T-Mobile customer discussions about satisfaction across all areas of T-Mobile, such as connectivity, the app, and customer service. We successfully got web scraping working for YouTube, Reddit, and Twitter, but due to time constraints we were unable to integrate it into our final product; this is something we seek to implement in the future. We would also like to bring this live feedback system into physical T-Mobile locations, like pop-ups or stores. Finally, we want to add more authorization, including user login functionality, to validate user testimony and to filter out irrelevant or inappropriate responses that may come in via web scraping.
