Inspiration
I came up with Workout InjuryIntel after a frustrating cycle of gym injuries and late-night internet searches. Whenever I felt a weird click in my shoulder or an ache in my knee, I'd scour forums and fitness blogs for answers—only to find contradictory advice from random strangers. This real-life confusion sparked the idea: what if an AI could act like a smart workout buddy (or even a virtual physio), giving me credible, personalized injury advice on the spot? I wanted something faster and more trustworthy than wading through countless websites. The inspiration was simple: no more guesswork or "bro-science" when it comes to workout injuries. Instead, I envisioned a friendly AI system that could understand plain-English descriptions of pain (like "my right shoulder clicks during overhead press") and respond with clear insights and a plan. That motivation set the tone for the project—make it fast, make it smart, and make it medically credible.
What it does
Workout InjuryIntel is essentially a conversational AI doctor for fitness folks. You open a clean web chat interface and just talk about your issue: for example, "I feel a pain in my knees." Under the hood, the system parses that description and goes through a reasoning process to figure out what might be wrong. Finally, it gives you a practical week-long action plan tailored to your situation. The response isn't just generic tips—it can include a probable diagnosis (in non-scary terms), guidance on whether you should rest or see a real doctor, and specific rehab exercises or stretches to do over the next week. I made sure the tone of the advice feels supportive and informative, almost like having a personal trainer and physiotherapist chatting with you.
One of the coolest parts is that the AI doesn't just make things up off the top of its head. It actually performs a mini research step: for example, checking reputable sources or a knowledge base about "lower back pain deadlifts" before finalizing its answer. This means the advice comes with a bit of evidence or at least aligns with known medical info (no more old wives' tales about how "just do more crunches" will fix your back). The end result is a snappy conversation where you describe your pain and get a rich, structured answer: what's likely going on, why it's happening, and what to do about it in the coming days. It's like having an expert that not only diagnoses you, but also hands you a recovery game-plan.
How I built it
Figure attached: high-level system architecture of Workout InjuryIntel's multi-agent pipeline. The system is built as a web application: a chat UI on the front-end and a Flask backend server. When a user types in their injury description, it goes to the Flask backend (the "brain" of the app, shown as the central box in the diagram). A Dynamic Orchestrator then decides which expert AI agents need to be involved for that query; the design treats the AI as a team of specialized agents rather than one monolithic model. For example, one agent focuses just on parsing the user's input (understanding which body part, what kind of pain, and what motion is involved). Another agent analyzes the workout form or movement described (e.g., overhead press form affecting the shoulder). A diagnosis agent then suggests what injury or issue is most likely, given the parsed info. Finally, a "prescription" agent composes the week-long action plan for the user.
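To make the agent-team idea concrete, here is a minimal sketch of such a pipeline. Everything here is illustrative: the agent functions, the `Context` dataclass, and the canned outputs are placeholders for what would really be LLM calls, not the project's actual code.

```python
# Hypothetical sketch of a specialized-agent pipeline. In the real system each
# agent would call the LLM; here they return canned values for illustration.
from dataclasses import dataclass, field

@dataclass
class Context:
    """Shared state that flows from one agent to the next."""
    user_text: str
    parsed: dict = field(default_factory=dict)
    form_analysis: str = ""
    diagnosis: str = ""
    plan: str = ""

def input_parser(ctx: Context) -> Context:
    # Would extract body part, pain type, and movement from free text via LLM.
    ctx.parsed = {"body_part": "shoulder", "pain": "click",
                  "movement": "overhead press"}
    return ctx

def form_analyzer(ctx: Context) -> Context:
    # Would reason about the described movement mechanics.
    ctx.form_analysis = f"Check {ctx.parsed['movement']} bar path and scapular position"
    return ctx

def diagnoser(ctx: Context) -> Context:
    # Would propose the most likely issue given the parsed info.
    ctx.diagnosis = "Possible shoulder impingement (illustrative output)"
    return ctx

def prescriber(ctx: Context) -> Context:
    # Would compose the week-long action plan.
    ctx.plan = "Week-long rehab plan based on: " + ctx.diagnosis
    return ctx

PIPELINE = [input_parser, form_analyzer, diagnoser, prescriber]

def run(user_text: str) -> Context:
    ctx = Context(user_text)
    for agent in PIPELINE:
        ctx = agent(ctx)
    return ctx
```

The key design point is that each agent reads from and writes to one shared context object, so agents can be added, removed, or skipped without changing each other's code.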
Crucially, the Dynamic Orchestrator agent acts like the conductor of this orchestra: it doesn't always run every single agent blindly. It looks at the user's query and can skip certain steps if they're not needed, making the whole process more efficient. (In early prototypes, the pipeline was fixed and always ran all agents, which was overkill for simple questions.) I also built in a safety net for unclear queries: a Conversation Manager agent. If the user's input is ambiguous or lacking details, this agent politely asks clarifying questions (just like a real doctor might say, "Wait, can you show me exactly where it hurts?"). This addition made the system feel much more interactive and realistic during testing.
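The skip logic can be as simple as a planner function that maps a query to a list of agent names. This is a hedged sketch under my own assumptions (the word-count threshold, keyword list, and agent names are made up; the real orchestrator presumably uses the LLM itself to plan):

```python
# Illustrative planner: decide which agents to run for a given query.
# Thresholds, keywords, and agent names are hypothetical placeholders.
def plan_steps(query: str) -> list[str]:
    # Very short queries lack detail -> route to the Conversation Manager
    # so it can ask a clarifying question instead of guessing.
    if len(query.split()) < 5:
        return ["conversation_manager"]
    steps = ["input_parser"]
    # Only run the form analyzer when an exercise is actually mentioned.
    if any(w in query.lower() for w in ("press", "squat", "deadlift", "curl")):
        steps.append("form_analyzer")
    steps += ["diagnoser", "prescriber"]
    return steps
```

A vague query like "knee hurts" would be routed to the Conversation Manager alone, while a detailed one triggers the full diagnostic chain.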
All these agents communicate and pass data to each other through the Flask backend. The heavy lifting is done by a large language model, essentially the AI "brain" powering each agent's reasoning. I deployed NVIDIA's 8-billion-parameter LLM to handle the natural language understanding and generation. Originally, I attempted to host this model on AWS SageMaker, thinking it would simplify scaling; I set up an API Gateway and even tried some Lambda functions to call the model. However, I ran into a lot of headaches with that approach (everything from long cold-start times to permission nightmares with AWS roles). Halfway through, I pivoted and moved the whole AI backend to AWS EKS (Elastic Kubernetes Service). This was a game-changer: on EKS, the Flask app orchestrates calls between the agents and the core LLM (now running on a GPU in the cluster), and the front-end continuously polls for the analysis results to update the chat.
On the research side, I implemented a simple retrieval mechanism so the AI can search our knowledge base and even the web. For instance, when the diagnosis agent suspects an ACL sprain, the prescription agent might call a "KnowledgeBaseTool" or "WebSearchTool" (as shown in the diagram) to find recommended rehab exercises or recent studies about ACL injuries. The agent then weaves that info into the advice, often providing a reference (I had a sidebar in the UI for "References" where any sources or articles would be listed). I felt this was important for credibility: the system isn't just making stuff up; it tries to back its suggestions with facts or at least check them against external data.
Challenges I ran into
Building Workout InjuryIntel in a short timespan was not smooth sailing. The major hurdle was dynamic orchestration. Initially, the pipeline was hard-coded: the user input went through Agent A, then B, then C, every single time. I realized this was both slow and sometimes unnecessary. Implementing the Planner (Dynamic Orchestrator) to conditionally run agents was challenging because it introduced a lot of branching logic. I had to ensure that if an agent was skipped (say, the user's query was already clear, so no Conversation Manager was needed), the rest of the system still flowed correctly. Debugging that logic at 3 AM was… fun, to put it mildly.
AWS deployment was a whole saga in itself: the SageMaker route consumed too much time, so I switched to EKS.
On the front-end side, a challenge was designing a clean UI/UX for a rather complex system. I didn't want the user to feel that complexity. The user just sees a chat box and some status indicators for the agent pipeline.
Accomplishments that I'm proud of
I'm proud to have built a system that doesn't just talk like an AI — it reasons like a human expert. Workout InjuryIntel can take a plain-English description like "I feel a sharp pain in my right knee after squats" and turn it into a structured, evidence-backed diagnosis with a personalized recovery plan — all in seconds.
From a technical standpoint, the biggest achievement was completing an end-to-end multi-agent pipeline — from natural language parsing to reasoning, retrieval, diagnosis, and action planning — fully orchestrated in real time.
Getting the Planner Agent to dynamically decide when to skip or re-run agents was another proud milestone, transforming the system from a rigid sequence to a context-aware reasoning engine.
I'm also proud of the practical impact this app could have: providing early guidance to athletes and fitness enthusiasts who don't always have access to a physical therapist.
What I learned
This project taught me that building an agentic AI system is not just about chaining models together — it's about orchestrating intelligence.
I learned:
How to balance autonomy and structure across multiple reasoning agents.
How to deploy large language models efficiently on AWS EKS using NVIDIA NIM microservices.
The importance of retrieval grounding, combining local PDFs and web search for medical validation.
How conversational flow (via the Conversation Manager) shapes user trust — a clear reminder that good UX is as important as good AI.
It was also a lesson in cloud deployment reality — dealing with IAM restrictions, endpoint permissions, and quota limits required as much problem-solving as coding.
What's next for Workout InjuryIntel: AI-Powered Injury Diagnosis Assistant
The next evolution of Workout InjuryIntel will focus on visual intelligence and personalization. I'm working on integrating computer vision to analyze workout form from video clips, allowing the AI to detect movement issues (like valgus knee collapse or shoulder instability) automatically.
I also plan to:
Expand the knowledge base with verified sports medicine research papers and physiotherapy guides.
Add voice input for hands-free use during workouts.
Create a mobile-friendly version for instant access at the gym.
Fix starting a new chat within the same session, which is currently non-functional; this will be addressed in version 2.0.
My long-term vision is simple: To make AI a trusted companion in injury prevention and recovery — helping both patients and physical therapists make smarter, faster, and safer decisions.