Inspiration

We were inspired by the stories and reports from the manufacturing industry, where unplanned machine failures can lead to catastrophic consequences—production lines coming to a halt, delivery deadlines being missed, and repair costs skyrocketing. We imagined the stress and pressure faced by factory managers and workers when everything depends on the smooth operation of heavy machinery. This led us to think about how a proactive, AI-driven approach could make a difference. Our goal was to build a system that could predict failures before they happen, enabling teams to act in time and prevent chaos, ensuring efficiency and stability in critical production environments.

How we built it

We began by configuring an Event Hub in Microsoft Fabric to capture and ingest high-frequency telemetry data from CNC machines, then set up an Event Stream to process and filter the incoming data in real time. Using historical data stored in the Lakehouse, we trained a machine learning model in a Fabric Notebook; this model analyzed the real-time data to predict potential machine failures. The filtered and enriched data, along with the predictions, was loaded into a KQL Database, where we used KQL for advanced querying and analysis. On top of that data, we built a Real-Time Dashboard in Fabric to visualize machine health and prediction outcomes, and implemented alerts and maintenance triggers with Fabric's automation features and Logic Apps to notify users when a failure threshold was exceeded.
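
The failure-prediction step can be sketched as follows. This is a minimal, self-contained stand-in that uses scikit-learn instead of Fabric AutoML, with synthetic telemetry and an illustrative labeling rule; the feature names (spindle temperature, vibration, torque) and thresholds are assumptions, not our actual schema:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic CNC telemetry: spindle temperature (deg C), vibration (mm/s), torque (Nm).
temperature = rng.normal(70, 10, n)
vibration = rng.normal(2.0, 0.6, n)
torque = rng.normal(40, 8, n)

# Label a reading as failure-prone when temperature and vibration are both high.
# This rule stands in for the labels we actually derived from Lakehouse history.
failure = ((temperature > 80) & (vibration > 2.4)).astype(int)

X = np.column_stack([temperature, vibration, torque])
X_train, X_test, y_train, y_test = train_test_split(X, failure, random_state=0)

# Train a classifier and check it on held-out data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```

In the real pipeline this model scored the streaming data, and its predictions were written alongside the telemetry into the KQL Database.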

Challenges we ran into

  • Handling alerts: Creating a new Data Activator item for every alert proved difficult. We had to make sure each alert was processed efficiently without overwhelming the system.
  • KQL in Notebooks: Running KQL from notebooks, particularly in Spark mode, caused data-overwriting issues, which forced us to find alternative ways to manage data updates.
  • Real-Time Dashboard alerts: Setting and editing alerts through the Real-Time Dashboard was challenging. Alerts were not easily editable, and changing the email content was not straightforward, so we had to work around the limitations of the alert management system.
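
The idea behind not overwhelming the system with alerts can be sketched as a small cooldown filter: suppress repeat alerts for the same machine within a fixed window so one flapping sensor does not flood the notification channel. The class name, machine IDs, and the 10-minute window below are assumptions for illustration, not the actual Data Activator configuration:

```python
from datetime import datetime, timedelta

class AlertThrottle:
    """Drop repeat alerts for a machine that arrive within a cooldown window."""

    def __init__(self, cooldown=timedelta(minutes=10)):
        self.cooldown = cooldown
        self._last_sent = {}  # machine_id -> timestamp of the last alert sent

    def should_send(self, machine_id, now):
        last = self._last_sent.get(machine_id)
        if last is not None and now - last < self.cooldown:
            return False  # still inside the cooldown window: suppress this alert
        self._last_sent[machine_id] = now
        return True

throttle = AlertThrottle()
first = datetime(2024, 5, 1, 12, 0)
print(throttle.should_send("cnc-07", first))                         # True: first alert goes out
print(throttle.should_send("cnc-07", first + timedelta(minutes=5)))  # False: inside cooldown
```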

What we learned

Throughout this project, we deepened our understanding of Microsoft Fabric's Real-Time Analytics features. We explored how Event Hub efficiently handles streaming data from multiple sources and learned to optimize Kusto Query Language (KQL) for complex real-time data queries. We also gained experience in using Fabric AutoML to build and train machine learning models. By leveraging historical data stored in the Lakehouse, we trained our model to predict various failure types, enhancing our skills in both data engineering and predictive analytics.
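
As an illustration of the windowed aggregations we leaned on, here is a plain-Python equivalent of a KQL tumbling-window average (the pattern KQL expresses as `summarize avg(value) by machine_id, bin(timestamp, 1m)`); the record field names are assumptions:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def tumbling_avg(readings, window=timedelta(minutes=1)):
    """Average a sensor value per machine per fixed time window."""
    epoch = datetime(1970, 1, 1)
    buckets = defaultdict(list)  # (machine_id, window_start) -> values
    for r in readings:
        # Snap the timestamp down to the start of its window, like KQL's bin().
        window_start = epoch + ((r["timestamp"] - epoch) // window) * window
        buckets[(r["machine_id"], window_start)].append(r["value"])
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

readings = [
    {"machine_id": "cnc-07", "timestamp": datetime(2024, 5, 1, 12, 0, 10), "value": 2.0},
    {"machine_id": "cnc-07", "timestamp": datetime(2024, 5, 1, 12, 0, 40), "value": 4.0},
    {"machine_id": "cnc-07", "timestamp": datetime(2024, 5, 1, 12, 1, 5), "value": 6.0},
]
print(tumbling_avg(readings))  # two windows: 12:00 -> 3.0, 12:01 -> 6.0
```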

Built With

  • automl
  • kql
  • ml
  • pyspark
  • python
  • realtime
  • realtimedashboard