Inspiration
In a world saturated with AI document processing apps, the Morgan & Morgan challenge highlighted the demand for context-specific solutions. We seized this opportunity to innovate in the Legal Tech realm, leveraging existing AI capabilities to address real-life legal challenges. Our project aims to shape the landscape of legal research, tailoring technology to specific needs and changing the way legal professionals work.
What it does
It handles PDF, DOCX, and XLSX files, running OCR where needed so that scanned PDFs remain readable. Using Pinecone's vector database, we preprocess documents, embedding and indexing them for fast querying. The index is then queried with a retrieval-augmented generation (RAG) approach to extract relevant laws and vital facts. The retrieved text chunks are passed to OpenAI together with our tailored prompts, generating insightful, contextually rich answers. These insights are stored in MongoDB and presented in the frontend, transforming how legal professionals access and comprehend case data.
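The retrieval step above can be sketched with a toy stand-in for the vector search a database like Pinecone performs. The bag-of-words "embedding" and the sample chunks here are purely illustrative; a real pipeline would call an embedding model and query the hosted index instead:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model (e.g. via the OpenAI API) and get a dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank indexed chunks by similarity to the query, as a vector
    # database would, and return the top-k matches for the prompt.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "The statute of limitations for negligence claims is two years.",
    "The plaintiff filed the complaint in district court.",
    "Damages include medical expenses and lost wages.",
]
top = retrieve("what is the statute of limitations", chunks, k=1)
```

The top-ranked chunks are what would be spliced into the tailored prompt sent to the language model.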
How we built it
We used Flask, OpenAI, pinecone-client, and Tesseract on the backend, with a Next.js frontend.
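The Flask side boils down to a small JSON API that the frontend calls. A minimal sketch, with a hypothetical `/ask` route and a stubbed answer in place of the real Pinecone query and OpenAI call:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical route name for illustration; the actual routes may differ.
@app.route("/ask", methods=["POST"])
def ask():
    payload = request.get_json(force=True)
    question = payload.get("question", "")
    # In the real pipeline, this is where the Pinecone query and the
    # OpenAI completion call would happen; here we return a stub.
    return jsonify({"question": question, "answer": "(stub answer)"})
```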
Challenges we ran into
Figuring out how to preprocess data so it was suited for vector indexing, laying out the structure of the project, and coordinating many APIs and processes together.
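One of those preprocessing problems, splitting documents into overlapping chunks sized for embedding, can be sketched as follows (the sizes are illustrative; real limits depend on the embedding model's token window):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks for embedding and indexing.

    The overlap keeps context that straddles a chunk boundary
    retrievable from both neighboring chunks.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Each chunk is then embedded and upserted into the vector index along with metadata pointing back to its source document.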
Accomplishments that we're proud of
We built a basic implementation of what we envisioned: a system that can process documents and answer questions about them via OpenAI.
What we learned
What's next for M&M Copilot
Built With
- flask
- next.js
- openai
- pinecone