Inspiration
The legal system is confusing. The language is complicated, the documents are dense, and even for someone like me (who’s taken a few law classes), it often feels impossible to understand. Now imagine facing that confusion in a language you don’t fully understand.
In today’s world, especially with ICE raids and rising fear among immigrant communities, that confusion can lead to people being detained, denied rights, or making decisions they don’t understand. Court-appointed translators are overwhelmed. Legal help is hard to find. And for many, the stakes are life-changing.
LexAI was born out of that gap: a tool that helps non-English speakers, as well as the general public, scan a legal document straight from their phones and finally understand what it means, simply and clearly, in whatever language they prefer. Whether it’s a landlord notice, a court summons, or a government form, LexAI breaks it down in plain language, instantly.
It’s meant to simplify one of the most complex systems people face daily so they can act with clarity, not confusion.
What it does
LexAI is a mobile-first AI tool that empowers non-English speakers and individuals with low legal literacy to understand complex legal documents easily. By scanning or uploading a file, users receive simplified, translated explanations in their preferred language, along with 24/7 access to an intelligent AI legal assistant.
The app uses Amazon Bedrock for document translation and summarization, Amazon Q for conversational legal guidance, and AWS Wavelength to keep data processing at the network edge for privacy. LexAI breaks down legal jargon into clear, actionable insights, putting clarity, access, and dignity back in the hands of those who need it most.
How we built it
I built this project from a UX-first perspective, using Figma for the design. My focus was accessibility: this app is for people in high-stress situations, possibly in legal danger, and they need something fast, clear, and trustworthy.
The UI prioritizes icons over text, clear button placement, and a simple language selection flow. The translation results include both an AI-powered simplified summary and a word-for-word breakdown. There's also a 24/7 AI chat assistant for follow-up questions.
Every screen was designed with empathy, thinking about people who might be scared, overwhelmed, or simply don’t speak the language. My goal was to make an app that feels like a calm, helpful hand when you need it most.
Disclaimer: This app is not yet coded, but here is how I would go about it and how I would use AWS systems and incorporate connectivity.
First, here is how I would incorporate AWS systems.
I would use Amazon Bedrock to power the main translation and document simplification features, leveraging its foundation models to extract, translate, and then summarize legal content from scanned or uploaded documents.
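Since the app isn’t coded yet, here is a rough sketch of what that Bedrock call might look like in Python with boto3. The model ID, prompt wording, and helper names are placeholders I chose for illustration, not settled choices:

```python
# Placeholder model ID; any Bedrock-hosted model could be swapped in.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"


def build_request(document_text: str, target_language: str) -> dict:
    """Build a Bedrock Converse request asking for a plain-language
    explanation of a legal document. Prompt wording is illustrative."""
    prompt = (
        f"Explain the following legal document in simple {target_language}. "
        "Avoid legal jargon and keep the summary short.\n\n"
        f"{document_text}"
    )
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }


def simplify_document(document_text: str, target_language: str) -> str:
    """Send the request to Bedrock and return the model's reply.
    Requires AWS credentials and Bedrock model access to actually run."""
    import boto3  # AWS SDK for Python; imported here so build_request stays dependency-free

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_request(document_text, target_language))
    return response["output"]["message"]["content"][0]["text"]
```

Keeping the prompt-building separate from the network call means the request shape can be checked (and unit-tested) before any AWS account is involved.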
Amazon Q would serve as the AI legal assistant, translating complex legal language into plain, easy-to-understand responses for user questions.
For privacy-sensitive use cases like immigration or court-related documents, AWS Wavelength would provide low-latency edge computing while keeping data processing secure and localized, helping ensure trust and privacy.
Now, for actually coding this, I would use React Native or Lovable for the front-end to create a mobile-first experience. For the backend, I would connect to AWS services through API Gateway, Lambda, and S3 for scalable storage and request handling. While I’m still learning to code, I would start by expanding on my wireframes and user flows in Figma, then integrate the AWS SDKs using basic Python or JavaScript (with the help of AI-based coding assistants).
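As a sketch of that backend wiring, here is roughly what a Lambda handler behind API Gateway might look like. The request fields (`documentText`, `language`) and the `simplify` stand-in are hypothetical names I made up for this sketch; the real version would call Amazon Bedrock and store uploads in S3:

```python
import json


def simplify(document_text: str, language: str) -> str:
    """Stand-in for the AI step; the deployed version would invoke
    Amazon Bedrock here instead of returning a placeholder string."""
    return f"[{language} plain-language summary of {len(document_text)} characters]"


def lambda_handler(event, context):
    """API Gateway proxy handler: validate the JSON request body and
    return a simplified explanation of the submitted document."""
    try:
        body = json.loads(event.get("body") or "{}")
        text = body["documentText"]
        language = body.get("language", "English")
    except (json.JSONDecodeError, KeyError):
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "documentText is required"}),
        }

    summary = simplify(text, language)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"summary": summary, "language": language}),
    }
```

One nice property of this shape is that the handler can be exercised locally with a fake event dictionary before anything is deployed behind API Gateway.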
Challenges we ran into
The biggest challenge? Time. I discovered this hackathon three days before the deadline. As I mentioned earlier, I had never worked with generative AI or connectivity infrastructure before, so the learning curve was steep. I spent hours researching, watching videos, and taking notes before I even started designing.
The time limit made it difficult to learn coding from scratch. With little technical background, I had to quickly grasp how AWS services integrate into an app, and figuring out where to begin felt overwhelming at first. Learning the architecture, terminology, and potential use cases under a tight deadline pushed me far beyond my comfort zone, but it also showed me how much I could pick up quickly with the right mindset and tools.
While I didn’t have time to develop a fully coded version, I was able to create a complete UX concept grounded in real AWS capabilities. For me, this wasn’t just about building an app. It was about proving to myself that I could push past being overwhelmed, learn completely new technologies, and still deliver something meaningful and impactful under pressure.
Accomplishments that we're proud of
I'm proud that I built/designed LexAI from concept to polished UX prototype in just three days, despite starting with zero knowledge of generative AI or AWS tools. In that time, I learned the fundamentals of edge computing, built user-centered wireframes from scratch, researched real-world legal pain points, and designed an app with scalable potential and social impact. Most of all, I'm proud that it was built not just as a project, but as a meaningful solution for people who are often overlooked.
What we learned
Honestly, I learned so much. I dove into this project head-on with zero experience in generative AI or edge computing—my background is in UX design, not engineering. I had only used ChatGPT casually and had never even heard of services like Amazon Bedrock or AWS Wavelength before. But after a few intense days of continuously watching YouTube tutorials and reading articles on AI, I learned how different AI tools actually serve different roles, and how they can be combined to build real-world solutions like LexAI.
By reading through some use cases, I learned how Amazon Bedrock excels in document summarization and can efficiently search for specific information across large text collections, and how Amazon Q works as a great AI assistant that can simplify complex questions into easy-to-understand answers. I also discovered AWS Wavelength, which allows data to be processed closer to the user for better privacy and speed, which makes it ideal for sensitive legal use cases. (These three were absolutely a perfect combination for what I needed in this project.)
What's next for LexAI: An AI-Powered Translation App for Legal Documents
Next, I'm focused on turning LexAI from a concept into a working product. That means learning the technical implementation, starting with AWS Bedrock and Amazon Q integrations, while getting user feedback and beta testing the product with real users like legal aid nonprofits, translators, and immigrants. The goal: build a trusted, multilingual legal tool that can serve people across languages, borders, and barriers. Long term, I hope to expand LexAI’s capabilities to include voice input and AI-powered legal referrals.
Built With
- figma