Inspiration

Imagine you're 22, just starting out. You're juggling rent, a new job, maybe student loans. Someone mentions a housing assistance program that could have helped you six months ago, but the application window closed. You didn't know it existed. Nobody told you. Or you're running a small restaurant. You've been doing everything right, or so you thought. Then a labor law changed quietly last quarter, and now you're facing a fine you couldn't have predicted because you didn't know to look.

These aren't stories about bad luck. They're stories about an information gap, one that sits between the laws that govern our lives and the people those laws are supposed to serve. Every year, thousands of bills are introduced, amended, and passed at the federal, state, and local levels. They're written in dense legal language, buried in government websites, and covered, if at all, by headlines that move on the next day. The people most affected are often the last to find out: young people stepping into a system no one explained, immigrants navigating rules written in a language not their own, small business owners without legal teams, communities that have always been the last to know and the first to feel the consequences.

The cost of that gap isn't abstract. It's a missed benefit. A preventable fine. A right you didn't know you had.

We built Does It Affect Me? because we believe that gap is finally closable. AI can do what no newsroom, government agency, or nonprofit has the bandwidth to do: read everything, understand it, and make it personal. Not a firehose of legislative updates. Not a 40-page PDF. Just a clear, simple answer to the question every person deserves to ask: does this affect me? Because staying informed about the laws that shape your life shouldn't be a privilege. It should be a given.

What it does

Does It Affect Me? starts with a simple sign-in. You log in with the social media profile of your choice, and that's it. No lengthy forms, no questionnaires. Your profile does the talking, building a picture of who you are, where you live, what you do, and what matters to you. From there, our personalization algorithm gets to work, matching incoming bills and legislative changes against your profile to determine what's actually relevant to you.

When something matters, you hear about it. Not through an app you have to remember to open, not buried in a newsletter, but as a text message, timed to when the bill drops, written in plain language, and specific to your situation. Don't find it useful? Thumb it down and we learn. Think it's relevant? Thumb it up and we get sharper. Every reaction makes the next message more accurate. And when a two-line summary isn't enough, you can just reply. A natural conversation opens up so you can ask follow-up questions, dig into the details, and actually understand what a bill means for your life before it takes effect. The more you use it, the better it knows you. That's the point.

How we built it

We built Does It Affect Me? as a full-stack web application. The intake experience is a simple frontend where users authenticate via social login, pulling in profile data that seeds their civic persona. On the backend, we ingest live legislative data from public government sources, running each bill through an LLM pipeline that summarizes it, classifies it, and scores its relevance against user profiles. When a match crosses our relevance threshold, we trigger an SMS notification via Twilio, timed to when the bill is active. User reactions, thumbs up or down, feed back into the personalization layer, continuously refining what gets surfaced and how it's framed. When a user replies to a message, they're dropped into a conversational AI thread that can answer follow-up questions, go deeper on implications, and explain the bill in the context of their specific situation.
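The matching and feedback steps can be sketched roughly like this. Everything here is illustrative, not our production code: the tag weights, the 0.5 threshold, the 0.1 feedback step, and function names like `score_bill` are all assumptions for the sake of the example.

```python
# Rough sketch of relevance scoring plus the thumbs-up/down feedback loop.
# Tag weights, the 0.5 threshold, and the 0.1 step are illustrative values.

THRESHOLD = 0.5

def score_bill(profile_weights, bill_tags):
    """Average the user's interest weight over the topics a bill touches."""
    if not bill_tags:
        return 0.0
    return sum(profile_weights.get(tag, 0.0) for tag in bill_tags) / len(bill_tags)

def apply_feedback(profile_weights, bill_tags, thumbs_up, step=0.1):
    """Nudge weights toward (thumbs up) or away from (thumbs down) a bill's topics."""
    delta = step if thumbs_up else -step
    for tag in bill_tags:
        updated = profile_weights.get(tag, 0.0) + delta
        profile_weights[tag] = min(1.0, max(0.0, updated))  # clamp to [0, 1]

profile = {"housing": 0.8, "small_business": 0.3}
bill_tags = ["housing"]

if score_bill(profile, bill_tags) >= THRESHOLD:
    print("send SMS: this bill crosses the relevance threshold")

apply_feedback(profile, bill_tags, thumbs_up=True)  # housing weight rises to 0.9
```

In a real deployment the scoring would be done by the LLM pipeline rather than a weighted average, but the shape of the loop, score, threshold, notify, learn from the reaction, is the same.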

Challenges we ran into

The hardest problems weren't technical; they were judgment calls. Getting an LLM to produce summaries of legislation that are accurate, plain, and genuinely neutral required more iteration than expected. Legislative language is dense by design, and the line between simplifying and editorializing is thin. Personalization from a social profile gives us signals, not certainty. Deciding what is relevant to someone from limited data, without over-reaching, was a constant balancing act. So was calibrating how often to send alerts: too many and people tune out, too few and the product loses its value. Underneath all of it, public legislative data is messy, inconsistent, and slow to update. Building a reliable pipeline on top of that was a challenge we underestimated early on.
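One simple way to frame the alert-frequency problem is a per-user cap over a sliding window. This is a hypothetical sketch, and the settings shown (`max_alerts=3` per 7-day window) are assumed values, not tuned ones:

```python
from collections import deque
from datetime import datetime, timedelta

class AlertThrottle:
    """Cap SMS alerts per user in a sliding time window (illustrative sketch)."""

    def __init__(self, max_alerts=3, window_days=7):
        self.max_alerts = max_alerts
        self.window = timedelta(days=window_days)
        self.sent = deque()  # timestamps of recently sent alerts

    def allow(self, now):
        # Drop sends that have aged out of the window.
        while self.sent and now - self.sent[0] > self.window:
            self.sent.popleft()
        if len(self.sent) < self.max_alerts:
            self.sent.append(now)
            return True
        return False

throttle = AlertThrottle()
t0 = datetime(2024, 1, 1)
# First three alerts go out; the fourth waits for the window to roll over.
decisions = [throttle.allow(t0 + timedelta(hours=h)) for h in (0, 1, 2, 3)]
```

A cap like this pairs naturally with relevance ranking: when the budget is tight, only the highest-scoring bills spend it.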

What we believe the world would look like if Does It Affect Me? works

The knowledge gap between the powerful and the everyday person starts to close. A first-generation student finds out about new scholarship legislation before the deadline. A small business owner adjusts her payroll the week a wage amendment passes, not three months after. An immigrant family understands what a new housing policy means for them before they sign another lease. People stop being surprised by the systems that govern their lives. Laws stop being something that happens to people and start being something people can engage with, push back on, and use in their favor. And over time, that awareness compounds. People who understand how policy connects to their daily lives show up differently at the ballot box: not pushed toward a position, but equipped with context. Communities that were historically disengaged, not out of apathy but out of exclusion, start having a seat at the table. A democracy works better when more of the people it serves actually understand it.

Ethical considerations, or how this can go wrong

Staying neutral. How a bill is summarized, which details are emphasized, what language is used: all of it can nudge a person toward a particular view. We are not here to tell people what to think. Our job is to improve access to knowledge and let people draw their own conclusions.

Data privacy. The personalization that makes this powerful also makes it sensitive. We will not sell user data, we will not use it beyond the purpose people signed up for, and we will be transparent about what we collect and why.

Exploit vs. explore. If someone signs up as a freelancer, do we only send them gig economy bills? Or do we occasionally surface something outside their world that could matter to them: a healthcare amendment, a grant they qualify for, an education bill they didn't know existed? The risk is real on both sides. Staying too narrow limits what people discover. Deciding what people should care about beyond what they asked for is a form of influence we take seriously. We want to open doors, not redirect futures. The goal is an informed citizen, not a curated one.
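The exploit/explore tension maps onto a classic bandit-style heuristic. A minimal epsilon-greedy sketch, assuming bills are represented as lists of topic tags and `epsilon` is a tunable exploration rate (both assumptions for illustration):

```python
import random

def topic_score(bill_tags, profile_weights):
    """Mean interest weight across a bill's topics."""
    if not bill_tags:
        return 0.0
    return sum(profile_weights.get(t, 0.0) for t in bill_tags) / len(bill_tags)

def pick_bill(candidates, profile_weights, epsilon=0.1, rng=random):
    """Epsilon-greedy: usually exploit the best match, occasionally explore."""
    if rng.random() < epsilon:
        return rng.choice(candidates)  # explore outside known interests
    return max(candidates, key=lambda b: topic_score(b, profile_weights))

profile = {"gig_economy": 0.9}
candidates = [["gig_economy"], ["healthcare"], ["education"]]
choice = pick_bill(candidates, profile, epsilon=0.1)
```

With `epsilon=0.1`, roughly one in ten alerts would come from outside the user's known interests, which keeps discovery alive without turning the feed into someone else's agenda.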

What's next for Does It Affect Me?

As AI agents become more capable, we see an opportunity to move from informing people to actively helping them act. A few examples of what that could look like:

Benefits enrollment: a new assistance program opens up that matches your profile. Instead of just telling you about it, we help you fill out and submit the application.

Business compliance: a labor law changes in your state. Instead of a summary, we generate the updated policy document your business needs and walk you through implementing it.

Voter registration: an election is coming up and new voting legislation just passed in your area. We confirm you're registered, surface what's on the ballot that affects you, and help you find your polling place.

Legal protections: a new tenant rights law passes in your city. We draft the letter to your landlord asserting those rights on your behalf.

The vision is simple. Today we answer "does this affect me?" Tomorrow we answer "consider it handled."
