Inspiration

As Computer Science students ourselves, we faced a recurring problem in our learning journey. Whenever we wanted to understand a concept or visualise how it works, our options were extremely limited: approach our ever-busy professors, find YouTube videos that may be related to the topic but are rigid and don't let us plug in our own variables and tinker around, or resort to books, which fail to provide an engaging visual aid. As our frustration grew and we spoke to our peers, we realised that almost every student faces this issue, and no all-in-one fix exists for it. So we built Viz-Lens: an end-to-end visualisation and concept-checker platform that visualises your question, lets you test your own variables and parameters, presents a quiz to check your understanding of the concept, and provides an integrated IDE where you can code the solution and have it judged by an AI code checker.

What it does

VIZ-LENS is an AI-powered visualization engine that transforms abstract inputs into interactive understanding:

- Code becomes step-by-step execution flows
- Mathematical concepts become dynamic simulations
- CSV datasets become intelligent dashboards
- Repositories become explorable architecture diagrams

What sets it apart is active learning: users must interact with visualizations, answer concept-based quizzes, and test their own solutions before unlocking the final answer. It's not about faster answers, it's about deeper understanding.
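The unlock-on-understanding idea above could be gated roughly like this. This is a minimal illustrative sketch, not the actual VIZ-LENS implementation: the type, function names, and the 70% quiz threshold are all assumptions.

```typescript
// Hypothetical sketch of the active-learning gate: the final answer
// stays locked until the user has interacted with the visualization,
// passed the concept quiz, and submitted their own solution.
// All names and thresholds here are illustrative assumptions.

interface SessionProgress {
  interactedWithViz: boolean; // touched the visualization at least once
  quizScore: number;          // fraction of quiz questions correct, 0..1
  solutionSubmitted: boolean; // attempted their own solution in the IDE
}

const QUIZ_PASS_THRESHOLD = 0.7; // assumed pass mark, not from the source

function isAnswerUnlocked(p: SessionProgress): boolean {
  return (
    p.interactedWithViz &&
    p.quizScore >= QUIZ_PASS_THRESHOLD &&
    p.solutionSubmitted
  );
}
```

For example, a user who interacted and submitted a solution but scored 0.5 on the quiz would remain locked, which is the "no shortcut to the answer" behaviour described above.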

How we built it

We built VIZ-LENS as a full-stack AI system combining visualization, reasoning, and evaluation:

- Frontend: Next.js + Tailwind + Framer Motion for a dynamic, interactive UI
- Backend: Node.js + Express for orchestration and APIs
- AI Layer: Amazon Bedrock (Claude 3.5 Sonnet) for reasoning, quiz generation, and evaluation
- Visualization: Chart.js + HTML5 Canvas for rendering dynamic flows
- IDE: Monaco Editor for in-app coding and validation
- Storage: DynamoDB for user state, S3 for dataset handling
- Deployment: AWS Amplify (frontend) + AWS App Runner (backend)

The system follows a pipeline: Parse → Structure → Visualize → Interact → Evaluate → Unlock.
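The Parse → Structure → Visualize → Interact → Evaluate → Unlock pipeline can be pictured as a chain of named stages run in order. The stage names come from the text above; the types and placeholder handlers are purely illustrative, not the real orchestration code.

```typescript
// Illustrative sketch of the six-stage pipeline as an ordered list of
// stages, each transforming the output of the previous one. In the
// real system each stage would call out to the AI layer, renderer,
// quiz engine, etc.; here the handlers are placeholders.

type Stage = { name: string; run: (input: string) => string };

const pipeline: Stage[] = [
  { name: "Parse",     run: (s) => `parsed(${s})` },
  { name: "Structure", run: (s) => `structured(${s})` },
  { name: "Visualize", run: (s) => `visualized(${s})` },
  { name: "Interact",  run: (s) => `interacted(${s})` },
  { name: "Evaluate",  run: (s) => `evaluated(${s})` },
  { name: "Unlock",    run: (s) => `unlocked(${s})` },
];

// Fold the input through every stage in order.
function runPipeline(input: string): string {
  return pipeline.reduce((acc, stage) => stage.run(acc), input);
}
```

Modeling the flow as data (a stage list) rather than hard-coded calls makes it easy to insert, skip, or reorder stages per input type, which is one plausible way a single engine could serve code, math, and CSV inputs alike.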

Challenges we ran into

- Designing a system that prioritizes understanding over answers without frustrating users
- Converting diverse inputs (code, math, data) into meaningful, accurate visual representations
- Building a real-time feedback loop for quizzes and code evaluation
- Ensuring the UI remains intuitive despite the system's complexity

Accomplishments that we're proud of

- Successfully built a unified engine that works across code, data, and concepts
- Implemented a true active-learning loop with enforced understanding
- Enabled real-time execution visualization, not just static diagrams
- Created a system that aligns AI with cognitive learning principles
- Delivered a product that is both technically deep and highly usable

What we learned

- The biggest problem in AI today is not access to answers, but lack of intuition
- AI becomes significantly more impactful when used as an orchestrator, not just a generator
- Enforcing interaction (instead of instant answers) leads to better learning outcomes
- Balancing power and simplicity in UX is critical for adoption

What's next for Viz-Lens

- Real-time adaptive teaching using live interaction (voice/video integration)
- Deeper repository intelligence (full-system reasoning and debugging flows)
- Personalized learning paths based on user behavior and performance
- Expanded domain coverage (system design, ML concepts, business workflows)
- Collaborative learning features (shared sessions, team understanding tools)

Our vision is to make VIZ-LENS the default interface for understanding complex systems, from classrooms to engineering teams.
