Inspiration
As developers, we spend significant time reviewing code, whether for quality, bugs, or licensing concerns. Manual reviews are slow, inconsistent, and often subjective. With the increasing capabilities of large language models, especially lightweight and precise ones like Perplexity’s Sonar, we saw an opportunity to automate code reviews in a reliable, fast, and developer-friendly way.
The Perplexity Hackathon offered the perfect chance to explore this idea and bring it to life as an end-to-end tool: a code review assistant powered by the Sonar API.
What it does
Code Review with Sonar is a full-stack web application that lets developers:
- Paste or write code into a rich frontend.
- Choose one or more tasks: Code Review (bug detection, suggestions, and diffs), Bill of Materials (third-party library listing), and License Check (detect licenses used).
- Select the inference model, with Sonar as the default.
- Submit and receive a detailed multipart response, parsed and color-coded in the UI for clarity.
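The steps above can be sketched as a single backend endpoint. The route name, JSON field names, and prompt templates here are illustrative assumptions, not the exact implementation; the real prompts went through several rounds of tuning:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Illustrative prompt templates, one per selectable task.
PROMPT_TEMPLATES = {
    "review": "Review this code for bugs and suggest fixes with a diff:\n{code}",
    "bom": "List every third-party library this code depends on:\n{code}",
    "license": "Identify the licenses of the libraries this code uses:\n{code}",
}

@app.route("/review", methods=["POST"])  # endpoint name is an assumption
def review():
    data = request.get_json()
    tasks = data.get("tasks", ["review"])   # checkboxes from the frontend
    model = data.get("model", "sonar")      # Sonar is the default model
    # Build one structured prompt per selected task, keyed by task name
    # so the frontend can render each section of the response separately.
    prompts = {t: PROMPT_TEMPLATES[t].format(code=data["code"])
               for t in tasks if t in PROMPT_TEMPLATES}
    # The prompts would be dispatched to the Sonar API here; this sketch
    # just echoes the structured request so the request shape is visible.
    return jsonify({"model": model, "tasks": sorted(prompts)})
```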
How we built it
- Frontend: Built with ReactJS, featuring a clean UI, prompt-selection checkboxes, a model dropdown, and formatted response rendering.
- Backend: Developed using Flask (Python) to handle API calls, form structured prompts, and process results. Flask is lightweight and gives the developer full control over the implementation.
- Sonar API: Used for all AI inference and reasoning tasks. Requests are dispatched in parallel for efficiency using Python's ThreadPoolExecutor.
- Inference abstraction: The Sonar model is abstracted into its own module, allowing flexible switching or multi-model comparisons if required.
- UX: Special formatting for sections like “Improved Code” makes reviews easier to read and apply.
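A minimal sketch of how the abstraction and parallel dispatch described above might fit together. The class and function names are assumptions for illustration; the Perplexity endpoint URL and chat-completions payload follow the public Sonar API shape:

```python
from abc import ABC, abstractmethod
from concurrent.futures import ThreadPoolExecutor

class InferenceModel(ABC):
    """Interface each model module implements, so backends can be swapped."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class SonarModel(InferenceModel):
    """Calls Perplexity's chat-completions endpoint with a Sonar model."""
    API_URL = "https://api.perplexity.ai/chat/completions"

    def __init__(self, api_key: str, model: str = "sonar"):
        self.api_key = api_key
        self.model = model

    def complete(self, prompt: str) -> str:
        import requests  # third-party; imported lazily inside the module
        resp = requests.post(
            self.API_URL,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model,
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

def run_tasks(model: InferenceModel, prompts: dict) -> dict:
    """Dispatch one request per selected task in parallel, keyed by task name."""
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        futures = {task: pool.submit(model.complete, p)
                   for task, p in prompts.items()}
        # .result() blocks until each worker finishes (or re-raises its error)
        return {task: fut.result() for task, fut in futures.items()}
```

Because `run_tasks` only depends on the `InferenceModel` interface, swapping Sonar for another Perplexity model (or a different provider entirely) only requires a new subclass.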
Challenges we ran into
- Concurrent task processing: Handling multiple prompts in parallel while managing API latency and preserving task mapping required careful threading and error handling.
- Prompt tuning: Getting consistently high-quality responses for different review types took several rounds of prompt refinement.
- Frontend formatting: Making the response both readable and visually intuitive, especially with multi-part responses, required thoughtful design choices.
- Testing edge cases: Long or incomplete code snippets sometimes needed special handling to avoid confusing output.
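The task-mapping and error-handling concern above can be addressed by keying each future back to its task name and catching per-task failures, so one slow or failing prompt does not take down the whole response. A sketch, with `call_model` standing in for whatever callable performs the inference:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_with_error_handling(call_model, prompts, timeout=120):
    """Run one model call per task; return a (status, payload) pair per task
    so a single failure surfaces as an error entry instead of crashing the
    whole multi-task request."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        # Map each future back to its task name so responses keep their labels
        # even though as_completed yields them in arbitrary order.
        future_to_task = {pool.submit(call_model, p): t
                          for t, p in prompts.items()}
        for fut in as_completed(future_to_task, timeout=timeout):
            task = future_to_task[fut]
            try:
                results[task] = ("ok", fut.result())
            except Exception as exc:
                results[task] = ("error", str(exc))
    return results
```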
Accomplishments that we're proud of
- Successfully integrated and scaled task execution using the Sonar API in parallel threads.
- Built a working product with real-time, multi-task AI review that feels fast and intuitive.
- Designed a model-agnostic architecture that can easily be extended beyond Sonar to other Perplexity models.
- Created a developer-friendly UI that’s clean, minimal, and easy to use.
What we learned
- How to design and build multi-threaded AI workflows that handle structured prompts and responses.
- A deeper understanding of prompt engineering for focused AI behavior in coding tasks.
- The strengths and tradeoffs of Perplexity’s Sonar model compared to more general-purpose LLMs.
- Effective API integration techniques and secure handling of API keys.
What's next for Code Review with Sonar
- Add support for uploading files or GitHub links instead of pasting raw code, including multi-file and zipped/archived uploads.
- Stream responses live as they are generated (especially useful for longer analyses).
- Export review reports to Markdown or PDF for team collaboration.
- Integrate with CI/CD pipelines to run Sonar-powered reviews on pull requests.
- Enhance prompt engineering further to enable security scanning and code-style enforcement.
Built With
- flask
- javascript
- mongodb
- perplexity-sonar-api
- python
- react


