Inspiration

Understanding underwater environments is extremely challenging because acoustic signals are complex, noisy, and difficult for humans to interpret. Traditional machine learning systems can classify targets, but they rarely explain their reasoning or communicate uncertainty. We wanted to build a system that not only predicts what is happening underwater but also helps operators trust and understand those predictions. That vision led to HydroScope AI.

What it does

HydroScope AI converts raw underwater acoustic inputs into structured, explainable intelligence.

  • Accepts sonar/audio recordings.
  • Predicts the most likely target category.
  • Estimates confidence and reliability.
  • Uses Gemini to translate model outputs into human-readable reasoning.
  • Provides safe fallback responses when certainty is low.
  • Enables decision support rather than just classification.
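The confidence-and-fallback behavior above can be sketched as a simple gating function. This is a minimal illustration, not the project's actual code; the threshold value, field names, and message wording are all assumptions:

```python
# Minimal sketch of confidence gating with a safe fallback response.
# The threshold and response fields are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for a "reliable" prediction


@dataclass
class ClassificationResult:
    label: str         # most likely target category
    confidence: float  # top-class probability from the classifier


def build_response(result: ClassificationResult) -> dict:
    """Return a structured response, withholding the label when certainty is low."""
    if result.confidence < CONFIDENCE_THRESHOLD:
        return {
            "status": "uncertain",
            "label": None,
            "confidence": result.confidence,
            "message": "Signal is ambiguous; classification withheld.",
        }
    return {
        "status": "ok",
        "label": result.label,
        "confidence": result.confidence,
        "message": f"Most likely target: {result.label}",
    }
```

Returning a structured dictionary in both branches is what keeps downstream consumers (logging, the Gemini explanation step, the API layer) simple: they always receive the same shape regardless of certainty.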

How we built it

We designed HydroScope AI as a modular intelligence pipeline.

Audio data is passed through preprocessing and a CNN-based classification stage to generate predictions and confidence estimates. These outputs are then sent to Gemini, which produces natural-language explanations and operational insights. The backend was built using FastAPI, enabling real-time interaction and easy testing via APIs. We also structured responses so they can be logged, audited, and extended in future deployments.
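The pipeline stages above can be outlined as plain functions that a FastAPI route would wrap. The stage names and dummy bodies below are stand-ins (the real preprocessing, CNN, and Gemini calls are not shown in this write-up); only the overall data flow reflects the description:

```python
# Sketch of the HydroScope AI pipeline flow. Each stage function is an
# assumed stand-in, not the project's real implementation.

def preprocess(audio: bytes) -> list[float]:
    """Stand-in for feature extraction (e.g. computing a spectrogram)."""
    return [b / 255 for b in audio[:8]]  # dummy normalized features


def classify(features: list[float]) -> tuple[str, float]:
    """Stand-in for the CNN stage: returns (label, confidence estimate)."""
    score = sum(features) / max(len(features), 1)
    return ("vessel" if score > 0.5 else "biologic"), score


def explain(label: str, confidence: float) -> str:
    """Stand-in for the Gemini call that produces a human-readable summary."""
    return f"Detected {label} with {confidence:.0%} confidence."


def run_pipeline(audio: bytes) -> dict:
    """Orchestrate preprocessing, classification, and explanation into one
    structured response that can be logged and audited."""
    features = preprocess(audio)
    label, confidence = classify(features)
    return {
        "label": label,
        "confidence": confidence,
        "explanation": explain(label, confidence),
    }
```

In the real backend, `run_pipeline` would sit behind a FastAPI endpoint that accepts an uploaded recording and returns this dictionary as JSON, which is what makes the responses easy to log and test via standard HTTP tooling.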

Challenges we ran into

One of the biggest challenges was bridging numerical model outputs with meaningful human interpretation. Acoustic environments are highly variable, and low-confidence situations are common. We had to design the system to remain stable even when predictions are uncertain. Integrating multiple components — signal processing, inference logic, and LLM reasoning — into a reliable pipeline within hackathon time constraints was another major hurdle.
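One common way to detect the low-confidence situations described above is to measure the entropy of the classifier's softmax output: a flat probability distribution signals that the model cannot distinguish between categories. This is an illustrative technique, not necessarily the uncertainty logic HydroScope AI uses:

```python
# Entropy-based uncertainty check over classifier logits (illustrative).
import math


def softmax(logits: list[float]) -> list[float]:
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def normalized_entropy(probs: list[float]) -> float:
    """Shannon entropy scaled to [0, 1]; 1 means maximally uncertain."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))


def is_uncertain(logits: list[float], threshold: float = 0.8) -> bool:
    """Flag predictions whose distribution is too flat to trust."""
    return normalized_entropy(softmax(logits)) > threshold
```

A gate like this lets the system route ambiguous signals to the safe-fallback path instead of emitting a misleadingly specific label.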

Accomplishments that we're proud of

We successfully built a working end-to-end prototype that demonstrates how explainable AI can enhance underwater target classification. Instead of stopping at probabilities, HydroScope AI communicates reliability, provides understandable summaries, and degrades gracefully by design rather than crashing. This transforms a research-style model into something closer to a deployable decision-support tool.

What we learned

We learned that accuracy alone is not enough. Trust, interpretability, and resilience are equally important in real-world AI systems. Large language models like Gemini are powerful collaborators that can convert raw outputs into knowledge that humans can act upon.

What's next for HydroScope AI

Next, we plan to incorporate real-time streaming from live sensors, richer environmental context, and interactive what-if simulations. We also aim to improve adaptive uncertainty modeling and create operator dashboards that visualize historical decisions. Our long-term goal is to build a full AI-powered maritime intelligence assistant.
