Inspiration

I was inspired by my sister, a lawyer at a constitutional law nonprofit in Washington, D.C. She describes the biggest obstacle in her casework as the time and energy spent poring over past briefs to find information relevant to her arguments. By using LLMs, lawyers can save hundreds of hours of document review that could be better spent defending the constitutional rights of citizens.

What it does

The app uses retrieval-augmented generation (RAG): it retrieves Supreme Court cases relevant to a user's query and feeds them into an LLM, which gives the user summaries of those cases and the key patterns it finds across them.
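The generation half of that pipeline comes down to assembling retrieved case excerpts into a single prompt for the LLM. A minimal sketch of that step, with hypothetical field names (`name`, `excerpt`) and prompt wording that may differ from the app's actual prompt:

```python
def build_prompt(query: str, cases: list[dict]) -> str:
    """Assemble a RAG prompt from retrieved Supreme Court case excerpts.

    Each case dict is assumed to carry a 'name' and an 'excerpt' field
    (hypothetical names, for illustration only).
    """
    context = "\n\n".join(f"[{c['name']}]\n{c['excerpt']}" for c in cases)
    return (
        "Using only the case excerpts below, summarize each case and "
        "note any recurring legal patterns relevant to the query.\n\n"
        f"Query: {query}\n\nExcerpts:\n{context}"
    )
```

The resulting string would then be sent to the LLM as the user message of a chat-completion request.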

How we built it

We populated an InterSystems database with sections of Supreme Court cases, along with vector embeddings generated by Legal-BERT. We then ran vector search to find the portions of cases most relevant to a query, and used the OpenAI API to generate responses for the user grounded in the retrieved documents.
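In the app, the similarity search ran inside the InterSystems database; as a self-contained stand-in, the ranking step amounts to scoring stored chunk embeddings against the query embedding by cosine similarity and keeping the top matches:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], chunks: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the k chunk texts most similar to the query embedding.

    chunks is a list of (text, embedding) pairs, as would be stored
    alongside each case section in the database.
    """
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

A production vector database replaces this linear scan with an index, but the scoring logic is the same.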

Challenges we ran into

The biggest challenge we ran into was building a useful legal database. This involved scraping a Supreme Court website for relevant PDFs, using third-party software (Chunkr) to extract blocks of text from the PDFs, and then using Legal-BERT to generate a vector embedding for each chunk. Time and cost efficiency were two of our biggest constraints, as we had to build a database of hundreds of documents quickly enough to still have time to build an app around it.
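Chunkr handled the PDF extraction itself; between extraction and embedding, each document still has to be split into pieces small enough for the embedding model. A simplified stand-in for that chunking step, using a sliding word window with overlap (the window sizes here are illustrative, not the values we used):

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 20) -> list[str]:
    """Split extracted text into overlapping word-window chunks.

    Overlap keeps sentences that straddle a boundary visible in
    both neighboring chunks, so retrieval doesn't miss them.
    """
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + max_words])
        if chunk:
            chunks.append(chunk)
        if start + max_words >= len(words):
            break
    return chunks
```

Each returned chunk would then be passed through Legal-BERT and stored with its embedding.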

Accomplishments that we're proud of

We are proud of the sleek user interface, the backend and database we were able to build, and the quality of the model's outputs when prompted with relevant court cases and information.

What we learned

We learned more about the law, about building both a front end and a back end efficiently from the ground up, about delegating and making the best use of our time, and about working with large databases and AI models to accomplish our goals.

What's next for Paralegal

The immediate next step is expanding our database to cover more years of Supreme Court cases and diversifying into other kinds of legal documents. We were limited by time and budget for this project, but a larger database will make the model a more effective assistant.
