Inspiration
Our inspiration came directly from a conversation we had a while back with a senior executive at SAP, the company behind Ariba. While existing procurement systems have matured significantly in managing supplier relationships, he pointed out a critical blind spot: companies still lack visibility beyond their direct (tier-1) suppliers. No existing software offered that capability, and that conversation showed us exactly what we wanted to build.
The real risk, however, lies deeper: within tier-2 and tier-3 suppliers. Disruptions at these levels often cascade upward, yet organizations have little to no structured way of identifying or quantifying this risk. This gap between visibility and impact became the core problem we set out to solve.

What it does
How we built it
Our data analysis began with identifying which data sources to pull from; we needed sources that were freely and readily accessible. Once we had chosen them, we rapidly built pipelines to pull data from each source, using AI agents to draft the code, favoring quantity over quality and then keeping only the useful sources. Next, we implemented a null-filling strategy that would not bias the trained machine learning model while still allowing us to apply robust mathematical methods in our analysis. Finally, for risk prediction, we evaluated several forecasting architectures, including Echo State Networks, LSTMs, Transformers, and N-HiTS, before settling on our final choice, TSMixer.
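The null-filling idea above can be sketched roughly like this (column names and data are hypothetical, and our actual strategy was more involved): fill each numeric column with its own median and record an indicator column, so the model can learn from the missingness pattern instead of being skewed by the imputed values.

```python
import numpy as np
import pandas as pd

def fill_nulls(df: pd.DataFrame) -> pd.DataFrame:
    """Median-fill numeric columns and flag which rows were imputed.

    Median filling keeps the center of each distribution intact, and the
    *_was_null indicator lets a downstream model treat imputed values
    differently from observed ones.
    """
    out = df.copy()
    for col in out.select_dtypes(include=np.number).columns:
        mask = out[col].isna()
        if mask.any():
            out[f"{col}_was_null"] = mask.astype(int)
            out[col] = out[col].fillna(out[col].median())
    return out

# Hypothetical shipment-delay series with one gap
raw = pd.DataFrame({"delay_days": [1.0, 3.0, None, 5.0]})
filled = fill_nulls(raw)
```

Here the gap is filled with the column median (3.0) and `delay_days_was_null` marks the imputed row.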
With the forecasting model in place, we wrote a script that uses it to generate a report for a specified company. The report is served by the Flask backend to the React frontend, where it is split across four tabs.
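A minimal sketch of how a report like this might be served; the route, payload shape, and `build_report` stub are illustrative assumptions, not our production code.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def build_report(company: str) -> dict:
    # Hypothetical report builder; the real version would query the
    # forecasting model and the data layer for the named company.
    return {
        "overview": {"company": company, "risk_score": 0.42},
        "suppliers": [],   # tier-2/3 supplier risk breakdown
        "forecast": [],    # forecasted risk series
        "actions": [],     # suggested mitigations
    }

@app.route("/api/report/<company>")
def report(company: str):
    # One JSON payload; the frontend splits it into four tabs.
    return jsonify(build_report(company))
```

Serving the whole report as a single payload keeps the frontend tabs in sync from one fetch.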
Challenges we ran into
One of the biggest challenges was identifying and validating such a niche but high-impact problem. Understanding procurement at a deep level, especially beyond tier-1 suppliers, required extensive research and domain exploration.
From a technical standpoint, data was a major hurdle. We had to work with fragmented and inconsistent datasets, handle missing values (null handling), and ensure that the signals we used remained meaningful and coherent.
We also faced limitations while working with AI tools:
- Running out of tokens during iterative development
- Agents occasionally losing context or overwriting data
- Frequent manual rewrites to maintain consistency across components
On the infrastructure side:
- Deployment and configuration issues slowed down iteration cycles
- Transitioning from local development to cloud (Supabase) required restructuring parts of the system
These challenges forced us to constantly refine both our technical approach and our product thinking.
Accomplishments that we're proud of
We’re most proud of taking a problem that typically requires deep enterprise data access and building a system that can still generate meaningful insights without it.
Instead of relying on exact supplier mapping, which is often unavailable, we created a heuristic-driven approach that infers upstream risk using external signals. This allowed us to make the invisible visible.
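A toy version of that heuristic idea, with made-up signal names and weights (not our tuned values): combine whatever external signals are available into a single upstream-risk score, even when the actual supplier graph is unknown.

```python
# Hypothetical external signals for a tier-2/3 region or sector;
# the names and weights below are illustrative only.
SIGNAL_WEIGHTS = {
    "port_congestion": 0.4,
    "news_sentiment": 0.3,
    "commodity_volatility": 0.3,
}

def upstream_risk(signals: dict) -> float:
    """Weighted average of normalized signals, each in [0, 1].

    Missing signals are skipped and the weights renormalized, so a
    sparse view of the world still yields a usable (if coarser) score.
    """
    total, weight = 0.0, 0.0
    for name, w in SIGNAL_WEIGHTS.items():
        if name in signals:
            total += w * signals[name]
            weight += w
    return total / weight if weight else 0.0
```

Skipping absent signals rather than defaulting them to zero is what keeps the score meaningful when data is fragmentary.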
We also built a highly polished, intuitive frontend that translates complex supply chain dynamics into clear, actionable insights. The product doesn't just display data; it tells a story about risk.
Finally, we were able to structure the system into a coherent flow:
- Identify risk
- Understand it
- Forecast it
- Act on it
What we learned
We learned that in complex systems like supply chains, perfect data is not always necessary to generate valuable insights. With the right abstractions and signals, it is possible to model and reason about uncertainty effectively.
We also learned the importance of clarity in product design. Translating deeply technical and abstract concepts into intuitive visuals and narratives is just as challenging as building the underlying system.
Additionally, working with AI-assisted development taught us how to manage context, guide generation effectively, and intervene when automation doesn't deliver. Most importantly, it was about keeping the team together and shipping something incredible within 24 hours.
What's next for Procuris
Going forward, we aim to strengthen our data layer by integrating real-time trade, logistics, and disruption signals to improve the accuracy of our risk estimations.
We are also planning to actively reach out to professionals in the supply chain and procurement industry to validate our assumptions, refine our models, and better understand real-world workflows. These conversations will help us ensure that Procuris aligns with how decisions are actually made in enterprise environments.
In parallel, we plan to:
- Incorporate more granular forecasting models
- Expand vendor comparison capabilities
- Introduce scenario-based analysis to enhance decision-making
Long term, Procuris can evolve into a decision intelligence platform that helps enterprises not just understand supply chain risk but actively navigate and optimize it.
Built With
- antigravity
- css
- datagrip
- dataspell
- docker-compose
- flask
- flax
- html5
- javascript
- jax
- neuralforecast
- openxla
- pandas
- plotly
- podman
- podman-compose
- polars
- polygon
- postgresql
- python
- pytorch
- react
- render
- sqlalchemy
- supabase
- tailwindcss
- uv
- vercel
- vite
- zed