Inspiration
We built CyberSentinel AI because many real security incidents begin with small mistakes that are easy to overlook, like an exposed API key, a leaked password, or a sensitive snippet pushed to a public repository. As students, we were especially interested in the gap between simple pattern matching and actual threat triage. A lot of tools can find suspicious strings, but understanding whether they are truly dangerous is a much harder problem. That motivated us to build a system that not only collects public OSINT signals, but also helps interpret and prioritize them in a practical way.
What it does
CyberSentinel AI is an automated OSINT threat hunting prototype that searches public sources for potentially leaked secrets and sensitive data, analyzes the findings, and turns the most important ones into actionable alerts. In our current MVP, the main workflow focuses on GitHub code search, analysis, and optional alerting. The goal is to reduce noise, highlight higher-risk findings, and help security teams respond before a public leak turns into a real incident.
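As a rough illustration of the GitHub code-search step, a collector can form queries against the public GitHub REST API code-search endpoint. The specific query patterns below are illustrative examples, not the project's actual configuration:

```python
from urllib.parse import urlencode

# Illustrative search patterns for common secret shapes; the real query
# set used by the collector may differ.
SECRET_QUERIES = {
    "aws_access_key": 'AKIA in:file',
    "private_key": '"BEGIN RSA PRIVATE KEY" in:file',
    "dotenv_file": 'filename:.env DB_PASSWORD',
}

def build_search_url(query: str, per_page: int = 30) -> str:
    """Build a GitHub code-search URL; callers attach an Authorization
    header with a personal access token when making the request."""
    params = urlencode({"q": query, "per_page": per_page})
    return f"https://api.github.com/search/code?{params}"
```

The code-search endpoint is rate-limited and requires authentication, so a real collector would also need token handling and backoff between pages.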
How we built it
We built the project as a modular pipeline so each part has a clear responsibility. The backend is organized around a collector, analyzer, notifier, models, pipeline, and handler, which made the system easier to reason about and extend. We also added a demo-ready frontend so the flow is easier to test, explain, and present.
On the implementation side, we used Python for the backend logic and React with TypeScript and Vite for the web interface. The collector gathers candidate content, the analyzer classifies whether it looks like a real threat, and the alert layer can surface important findings. We also tried to keep the architecture realistic for a student-built MVP: simple enough to demo, but structured enough to grow into something more production-ready.
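The collector-analyzer-notifier flow described above could be wired together roughly like this. This is a hypothetical sketch; the actual module interfaces, field names, and threshold in the repository may differ:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Finding:
    source: str         # e.g. a repository path returned by the collector
    snippet: str        # the candidate content to analyze
    score: float = 0.0  # risk score assigned by the analyzer

def run_pipeline(
    collect: Callable[[], Iterable[Finding]],
    analyze: Callable[[Finding], Finding],
    notify: Callable[[Finding], None],
    threshold: float = 0.7,
) -> List[Finding]:
    """Run each stage in order and alert only on high-scoring findings."""
    analyzed = [analyze(f) for f in collect()]
    alerts = [f for f in analyzed if f.score >= threshold]
    for finding in alerts:
        notify(finding)
    return alerts
```

Keeping each stage behind a simple callable interface like this is one way such a pipeline stays easy to extend, since a new OSINT source only needs to provide another `collect` function.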
Challenges we ran into
One of our biggest challenges was reducing false positives. Public repositories often contain sample credentials, placeholders, or tutorial snippets that look dangerous even when they are not. So the real difficulty was not only finding suspicious patterns, but also deciding what actually matters.
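One common way to cut this kind of noise is to combine a placeholder blocklist with a Shannon-entropy check, since real secrets tend to be long and high-entropy while samples like `YOUR_API_KEY` are not. The patterns and thresholds below are a sketch under assumed values, not the heuristics CyberSentinel AI actually ships with:

```python
import math
import re
from collections import Counter

# Illustrative placeholder patterns; a real blocklist would be longer.
PLACEHOLDER_PATTERNS = [
    r"your_?(api_?)?key", r"<[^>]+>", r"x{4,}", r"example", r"dummy", r"changeme",
]

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random tokens score noticeably higher."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_real_secret(candidate: str, min_len: int = 16,
                           min_entropy: float = 3.5) -> bool:
    """Reject obvious placeholders, then require both length and entropy."""
    if any(re.search(p, candidate, re.IGNORECASE) for p in PLACEHOLDER_PATTERNS):
        return False
    return len(candidate) >= min_len and shannon_entropy(candidate) >= min_entropy
```

Heuristics like these still miss context (a "real-looking" string in a tutorial repo is probably harmless), which is why the analyzer step matters beyond pattern matching.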
Another challenge was balancing speed and structure. Since this started as a hackathon-style project, we had to move quickly, but we also wanted the codebase to stay modular and understandable. On top of that, coordinating frontend and backend work at the same time was harder than expected, especially when we were trying to keep the UI, APIs, and deployment flow aligned.
Accomplishments that we're proud of
We are proud that CyberSentinel AI is more than just a concept or mockup. It already demonstrates a real end-to-end workflow: collecting public signals, analyzing them, and surfacing alerts in a usable way. We are also proud that the project includes both backend logic and a demo UI, which makes the system easier to present to judges and easier for users to understand.
Another thing we are proud of is the project structure itself. Even though this is still an MVP, the repository already reflects a clean separation of concerns, which gives us a solid base for future improvements instead of forcing us to rebuild everything from scratch later.
What we learned
We learned that building security tooling is not only about detection, but also about context, prioritization, and usability. A system can find many suspicious things, but if it cannot explain or organize them well, it becomes much less useful in practice.
We also learned a lot about collaboration and engineering discipline. As students building under time pressure, we had to think carefully about architecture, contracts between frontend and backend, and how to make fast progress without losing consistency. That was probably one of the most valuable parts of the entire experience.
What's next for CyberSentinel AI
Our next step is to expand beyond a single public source and support more OSINT channels, such as forums, paste sites, and other open sources where sensitive information might appear. We also want to improve contextual scoring so the system can better explain why a finding is risky and how teams should respond.
On the product side, we want to strengthen deployment, improve alert workflows, and make the platform more polished as a real security assistant rather than only a demo. In the long term, we see CyberSentinel AI becoming a practical tool that helps smaller security teams detect exposed secrets earlier and react faster.
Built With
- docker
- github-rest-api
- python
- react
- react-router
- render
- tailwind-css
- typescript
- vite