Inspiration

We saw AI exploding in bioinformatics but noticed a dangerous gap: research is moving at light speed, while the ethical frameworks meant to protect patient data are barely crawling.

What it does

Ethics in LLMs maps the friction between AI power and data safety. It identifies high-risk areas like "Black Box" algorithms and data leakage, providing a blueprint for responsible genomic research.

How we built it

Using Canva, we translated complex data pipelines into a visual narrative. We analyzed how biological data travels from the lab to the model, highlighting specific failure points where privacy is most vulnerable.

Challenges we ran into

Recording the video and editing it down to five minutes was harder than we expected.

Accomplishments that we're proud of

We successfully simplified abstract AI risks into an actionable "Ethics Mind Map." We’re proud of creating a guide that speaks to both data scientists and medical professionals.

What we learned

Bias is mathematical. If your training data isn't diverse, your results won't be either. We learned that ethics must be a "hard constraint" in the model's initial design, not an afterthought.
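The point that bias is mathematical can be sketched with a toy example (all data here is invented): a trivial "majority label" model fit on a dataset where one group is overrepresented looks accurate overall, yet fails completely on the underrepresented group.

```python
from collections import Counter

# Hypothetical skewed training data: (group, label) pairs,
# 90 samples from group A (label 1), only 10 from group B (label 0).
train = [("A", 1)] * 90 + [("B", 0)] * 10

# "Training": predict whichever label is most common overall.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

# Evaluate on a balanced test set, broken down per group.
test = [("A", 1)] * 50 + [("B", 0)] * 50
accuracy = {}
for group in ("A", "B"):
    hits = sum(1 for g, label in test if g == group and label == majority_label)
    total = sum(1 for g, _ in test if g == group)
    accuracy[group] = hits / total

print(accuracy)  # group A: 1.0, group B: 0.0
```

Overall accuracy is 50%, but the split tells the real story: the model is perfect on the well-represented group and useless on the other, which is exactly the failure mode non-diverse genomic training data produces.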

What's next

We’re looking toward Explainable AI (XAI). The goal is to move from just identifying risks to building "auditable" AI that can explain its medical decisions in real-time.

Built With

  • canva