Inspiration
Global supply chains are more fragile than they appear. From the Ever Given cargo ship blocking the Suez Canal to geopolitical shifts in the Red Sea, a single disruption can cost billions. We realised that logistics managers don't just need a map - they need a crystal ball. We wanted to build a tool that doesn't just show a supply chain's current problems, but shows managers how best to organise their shipments to survive potential chaos.
What it does
Sentinel is an AI resilience engine that optimises supply chains against disruption. It analyses real-time data - weather, geopolitics, and port congestion - to generate multiple transport routes with calculated failure probabilities.
Instead of just finding the fastest path, Sentinel allows users to strategically diversify risk, suggesting exactly how to split supply across different routes to minimise the chance of total failure.
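Why splitting supply lowers the chance of total failure can be seen with a little probability: if routes fail independently, the probability that every route fails is the product of their individual failure probabilities. A minimal sketch (the risk figures are illustrative, not Sentinel's real estimates):

```python
def total_failure_probability(route_risks):
    """Probability that ALL chosen routes fail, assuming the routes
    fail independently of one another."""
    p = 1.0
    for risk in route_risks:
        p *= risk
    return p

# One route with a 10% failure chance: total failure risk is 10%.
single = total_failure_probability([0.10])   # 0.10

# Split across two routes (10% and 15%): both must fail together.
split = total_failure_probability([0.10, 0.15])   # 0.015, i.e. 1.5%
```

Diversifying from one route to two cuts the chance of losing everything from 10% to 1.5% in this toy example, which is the core intuition behind the split suggestions.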
How we built it
We started with extensive research into logistics bottlenecks and the specific pain points of supply chain managers. We then mapped out the necessary data ecosystem, identifying which APIs could provide reliable signals for weather, conflict, and maritime traffic. We spent significant time designing wireframes to ensure the complex data was visualised intuitively, and developing the routing algorithms that weigh cost against risk. The actual development was accelerated using Claude Code, which helped us iterate rapidly on our backend logic and frontend integration.
It pulls from a wide range of APIs, cleans the data, visualises it on an interactive globe, and answers user queries about the underlying data. By letting users adjust supply chain routes, it helps them choose the best option given a risk forecast.
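One way to weigh cost against risk in routing, as described above, is to score each leg as cost plus a tunable risk penalty and run a shortest-path search over that score. This is an illustrative sketch using Dijkstra's algorithm, not Sentinel's actual routing code; the port graph and risk values are made up:

```python
import heapq

def best_route(graph, start, goal, risk_weight=1.0):
    """Dijkstra's algorithm over edges scored as cost + risk_weight * risk.
    graph maps node -> list of (neighbour, cost, risk) tuples (illustrative)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        score, node, path = heapq.heappop(queue)
        if node == goal:
            return score, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost, risk in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(
                    queue, (score + cost + risk_weight * risk, nxt, path + [nxt])
                )
    return float("inf"), []

# Hypothetical network: the Suez leg is cheaper but riskier than the Cape route.
ports = {
    "Shanghai": [("Suez", 5.0, 0.3), ("Cape", 8.0, 0.05)],
    "Suez": [("Rotterdam", 3.0, 0.1)],
    "Cape": [("Rotterdam", 4.0, 0.05)],
}
```

With a low `risk_weight` the search favours the cheap Suez leg; raising it flips the recommendation to the safer Cape route, which is how a single parameter can trade cost against resilience.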
Challenges we ran into
Our biggest hurdle was data harmonisation. We were pulling live streams from disparate sources - weather APIs, maritime tracking, and news sentiment analysis for geopolitical risk. Normalising these distinct data types into a unified format that our risk algorithm could actually process was a massive engineering challenge that required strict data cleaning and standardisation pipelines.
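The harmonisation step above amounts to mapping each source's payload onto one shared schema with a normalised risk scale. A minimal sketch of that pattern, where the field names, scales, and payload shapes are hypothetical stand-ins for the real APIs:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RiskSignal:
    """Unified record every source is normalised into."""
    source: str
    region: str
    risk: float            # normalised to [0, 1]
    observed_at: datetime

def from_weather(raw):
    """Hypothetical weather payload: severity on a 0-5 scale, unix timestamp."""
    return RiskSignal(
        source="weather",
        region=raw["region"].strip().lower(),
        risk=min(raw["storm_severity"] / 5.0, 1.0),
        observed_at=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
    )

def from_news(raw):
    """Hypothetical news-sentiment payload: sentiment in [-1, 1], ISO date."""
    return RiskSignal(
        source="news",
        region=raw["location"].strip().lower(),
        risk=(1.0 - raw["sentiment"]) / 2.0,   # -1 (hostile) -> 1.0, +1 -> 0.0
        observed_at=datetime.fromisoformat(raw["published"]),
    )
```

Once every feed emits `RiskSignal` records on the same [0, 1] scale, the downstream risk algorithm only ever has to reason about one shape of data.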
Accomplishments that we're proud of
We're most proud of the predictive engine itself. It isn't just a static calculator; it successfully synthesises disparate variables to calculate a unified "failure probability" score. Getting the algorithm to dynamically suggest supply proportions based on that risk, rather than just showing a "bad route" warning, was a complex logic problem that feels amazing to have solved.
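One simple way to turn per-route risk scores into concrete supply proportions, in the spirit of the suggestion logic described above, is to allocate in proportion to each route's survival probability. This is an illustrative heuristic, not Sentinel's production algorithm:

```python
def suggest_split(route_risks):
    """Allocate supply proportionally to each route's survival probability
    (1 - risk). Riskier routes get a smaller share, but are not abandoned
    outright, preserving diversification."""
    survival = [1.0 - r for r in route_risks]
    total = sum(survival)
    return [s / total for s in survival]

# Two routes with 10% and 30% failure risk:
suggest_split([0.1, 0.3])   # [0.5625, 0.4375]
```

The point is that the output is actionable ("send 56% this way, 44% that way") rather than a binary "bad route" warning.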
What we learned
We learned that while the conceptual logic was straightforward - standard pathfinding algorithms are relatively simple - sourcing, cleaning, and processing the data was far more time-consuming than anticipated. We discovered that each external dataset had its own unique idiosyncrasies that required specific programmatic solutions to ensure the integrity of our risk models. This will shape how we allocate time on future projects.
What's next for Sentinel
At the moment, our prediction engine is powered by Large Language Models which explore our standardised data. We chose this approach for development speed during the hackathon and for the models' natural ability to provide interpretable descriptions of risk causes. However, since we designed the system with modularity in mind, our next step is to design and train bespoke interpretable statistical models. These would make predictions in a more robust, mathematically rigorous manner, while still allowing us to extract clear language descriptions from their insights.
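The modularity mentioned above can be expressed as a shared predictor interface, so a trained statistical model can later slot in where the LLM backend sits today. A sketch of that idea, with all names and the toy logistic model invented for illustration:

```python
import math
from abc import ABC, abstractmethod

class RiskPredictor(ABC):
    """Interface both the current LLM backend and a future statistical
    model could implement (hypothetical names, not Sentinel's real API)."""

    @abstractmethod
    def failure_probability(self, route_features: dict) -> float: ...

    @abstractmethod
    def explain(self, route_features: dict) -> str: ...

class LogisticModel(RiskPredictor):
    """Toy logistic-regression stand-in for a bespoke interpretable model."""

    def __init__(self, weights, bias=0.0):
        self.weights = weights
        self.bias = bias

    def failure_probability(self, route_features):
        z = self.bias + sum(
            self.weights.get(k, 0.0) * v for k, v in route_features.items()
        )
        return 1.0 / (1.0 + math.exp(-z))   # sigmoid -> probability in (0, 1)

    def explain(self, route_features):
        # Interpretability for free: rank features by weight * value.
        ranked = sorted(
            route_features,
            key=lambda k: abs(self.weights.get(k, 0.0) * route_features[k]),
            reverse=True,
        )
        return "Top risk drivers: " + ", ".join(ranked[:2])
```

Because both backends expose `failure_probability` and `explain`, the rest of the system, including the plain-language risk descriptions, stays unchanged when the model is swapped.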
Built With
- javascript
- langchain
- python
- supabase
- typescript