Inspiration
Growing up in Nigeria, I was struck by how many communities, especially in rural areas, lack access to even basic diagnostic tools. According to Nigeria Health Watch, only about 30% of health facilities can perform the most fundamental lab tests, and over 70% of Nigerians rely on out-of-pocket payments for care, pushing millions into poverty. This inequity fueled Aidnox’s origin: empowering underserved populations with AI-driven diagnostics on mobile devices, so that even without labs or reliable internet, communities can receive timely medical guidance.
What it does
Aidnox is a mobile and web-based diagnostic assistant that uses lightweight AI and LLMs to interpret health symptoms and medical images (e.g., skin conditions, cough scans, X-rays) and provide evidence-based suggestions. It works offline-first, using on-device TFLite models optimized for minimal compute and power, but gracefully syncs with cloud AI when a connection is available, ensuring accuracy, continuity, and scalability. It supports multi-language prompts, a simple UI for low-literacy users, and structured output that clinicians or community health workers can act on.
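The offline-first routing described above can be sketched roughly as follows. This is a minimal illustration, not Aidnox's actual code: `local_infer` and `cloud_infer` are hypothetical stand-ins for the on-device TFLite model and the cloud LLM endpoint, and the connectivity probe is a simple best-effort check.

```python
import socket

def is_online(host="8.8.8.8", port=53, timeout=1.5):
    """Best-effort connectivity probe (TCP to a public DNS resolver)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def local_infer(symptoms):
    # Placeholder for the on-device TFLite model.
    return {"source": "on-device", "suggestion": f"triage for: {symptoms}"}

def cloud_infer(symptoms):
    # Placeholder for the cloud LLM service.
    return {"source": "cloud", "suggestion": f"detailed guidance for: {symptoms}"}

def diagnose(symptoms):
    """Prefer the richer cloud model, but always fall back to on-device."""
    if is_online():
        try:
            return cloud_infer(symptoms)
        except Exception:
            pass  # Connection dropped mid-request: fall through to local.
    return local_infer(symptoms)
```

The key design point is that the on-device path is the default and the cloud is an enhancement, so the app degrades gracefully rather than failing when the network disappears.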
How I built it
From the outset, I designed Aidnox with a modular architecture: separate repos for mobile (React Native), web frontend (React), backend APIs (FastAPI), and AI pipelines. I'm currently focusing on the desktop build to assess feasibility before moving to edge devices. The on-device LLM is quantized and pruned for performance, while cloud-based LLM services handle more complex inference when online. This edge-and-cloud hybrid ensures fast, low-power inference with reliable cloud fallback. In Nigeria, where network and power are unreliable, this approach aligns with ADTC’s resource-constrained computing mission.
Challenges I ran into
Pushing the boundaries with LLMs on mobile was both exciting and grueling. Aidnox is among the first few attempts, especially in Africa, to embed LLMs directly into desktop applications and later mobile applications (via .tflite). I experimented with quantization, model distillation, and edge deployment. Optimizing these models to run reliably on low-end desktop devices with <4GB RAM, without crashing or overheating, was a breakthrough but required weeks of profiling and tuning. I'm still working on making this more reliable.
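One practical tactic for the low-RAM constraint above is to ship several quantized variants of the model and pick the largest one that fits the device. The sketch below is illustrative only: the variant names, file sizes, and the 3x RAM-headroom rule of thumb are assumptions, not measured Aidnox numbers.

```python
# Hypothetical model variants and their approximate on-disk sizes (MB),
# ordered from highest to lowest fidelity.
MODEL_VARIANTS = [
    ("llm-int8.tflite", 480),  # 8-bit quantized
    ("llm-int4.tflite", 260),  # 4-bit quantized, lower fidelity
]

def pick_model(available_ram_mb, headroom=3.0):
    """Choose the largest variant whose runtime footprint fits.

    Assumes peak RAM use is roughly `headroom` times the file size;
    this multiplier is a rule of thumb, not a measured constant.
    """
    for name, size_mb in MODEL_VARIANTS:
        if size_mb * headroom <= available_ram_mb:
            return name
    return None  # No variant fits; defer to cloud inference.
```

On a 2 GB device this would select the 8-bit variant, while very constrained devices fall through to cloud inference rather than risking an out-of-memory crash.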
Framework dilemma: Choosing between Flutter and React Native was a tough call, and I'm now rigorously testing the desktop implementation. Flutter offered speed and powerful UI tooling, but had compatibility issues with some native model interpreters. React Native had broader ecosystem support for native modules, but integrating custom TFLite + ONNX runtimes posed build-time bottlenecks. I eventually opted for a hybrid approach to benchmark both before committing fully to Android.
App build and UI rendering failures: I ran into persistent APK build failures, especially when bundling the LLM interpreters and native plugins. Compilation issues broke UI rendering or caused runtime crashes, which delayed testing and made debugging harder. At one point, UI elements would simply not render, especially in low-memory emulators, due to rendering engine deadlocks. I had to revert to desktop application development to test my LLM, though I also had to reduce the asset footprint and tweak memory constraints manually.
Connectivity variability: Users in rural areas move between full blackout and intermittent 2G/3G zones. I will have to design a robust sync engine with offline caching, conflict resolution, and user feedback loops, all while keeping sensitive medical data secure and stored locally.
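A minimal sketch of the offline cache this sync engine would need, using only the standard library's `sqlite3`: records queue locally until a sync succeeds, with last-write-wins as a simple conflict-resolution policy. This is an assumption-laden illustration; a real deployment would add encryption at rest and clinician review for conflicting edits.

```python
import json
import sqlite3
import time

class OfflineQueue:
    """Local store that queues diagnostic records until a sync succeeds."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS records ("
            " id TEXT PRIMARY KEY, payload TEXT,"
            " updated_at REAL, synced INTEGER DEFAULT 0)"
        )

    def save(self, record_id, payload):
        now = time.time()
        row = self.db.execute(
            "SELECT updated_at FROM records WHERE id = ?", (record_id,)
        ).fetchone()
        if row is None or now >= row[0]:  # last-write-wins policy
            self.db.execute(
                "INSERT OR REPLACE INTO records VALUES (?, ?, ?, 0)",
                (record_id, json.dumps(payload), now),
            )
            self.db.commit()

    def pending(self):
        """Records not yet pushed to the cloud."""
        return [
            (rid, json.loads(p))
            for rid, p in self.db.execute(
                "SELECT id, payload FROM records WHERE synced = 0"
            )
        ]

    def mark_synced(self, record_id):
        self.db.execute(
            "UPDATE records SET synced = 1 WHERE id = ?", (record_id,)
        )
        self.db.commit()
```

When connectivity returns, the sync loop drains `pending()`, posts each record to the backend, and calls `mark_synced` on success, so a dropped connection mid-sync simply leaves the remainder queued.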
Data and trust: In the absence of centralized clinical datasets, I'm currently sourcing multiple open-access medical datasets and building preprocessing pipelines to validate and standardize them. To ensure real-world relevance, I will be consulting with local clinicians, who can help test the AI outputs and flag inconsistencies. Building trust in AI for health remains an ongoing challenge.
Digital literacy gaps: Many users in Nigeria are not familiar with mobile health tools or medical input prompts. I responded by designing visual-first user flows, simple language prompts, and educational cues to guide user interactions effectively.
Accomplishments that I'm proud of
While the full mobile application is still in development, I've made significant strides that confirm Aidnox's technical and practical feasibility through the desktop application and local server inference:
Successfully prototyped and validated the AI backend (image & text-based diagnostic logic) using LLMs and TFLite models, confirming that on-device inference is achievable within tight resource constraints (under 524 MB and usable on 2 GB RAM devices).
Designed and implemented the multilingual architecture and localization system for key interfaces (targeting Hausa, Yoruba, Igbo, and English) to make the application accessible to a broader user base across Nigeria and West Africa.
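The localization fallback at the heart of that architecture can be sketched in a few lines. This is an illustrative shape only: the catalog keys and the Hausa entry are labeled placeholders, since real strings would come from native-speaker, clinician-reviewed resource files.

```python
# Illustrative message catalog; "<Hausa greeting>" is a placeholder,
# not a real translation.
MESSAGES = {
    "en": {
        "greeting": "How are you feeling today?",
        "submit": "Send symptoms",
    },
    "ha": {"greeting": "<Hausa greeting>"},
}

def t(key, lang="en"):
    """Look up a UI string, falling back to English, then to the key itself."""
    return MESSAGES.get(lang, {}).get(key) or MESSAGES["en"].get(key, key)
```

The two-level fallback means a partially translated language (say, Hausa with only some strings done) still yields a usable interface instead of blank labels.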
Developed a scalable, modular architecture across mobile, web, SDK, API, and AI layers, ensuring the solution can grow flexibly as features mature and deployment contexts change. This reflects the full scope of Aidnox I'm working toward.
What I learned
Resource constraints can spark creativity rather than impose limitations. Designing for low-spec devices forced elegant code, clever sync logic, and performance-first design.
Participatory design is indispensable: meaningful adoption in Nigeria hinges on trust and a context-aware UI, reflected in both cultural sensitivity and linguistic appropriateness.
Modular architecture pays off: separating the AI, SDK, mobile, API, and web layers allowed me to work iteratively and support scalable deployment paths. Now I'm focusing on the desktop architecture.
What's next for Aidnox
Expand pilot scope: partner with NGOs and state health authorities to run larger field tests across multiple states.
Add more diagnostic modules: beyond skin and respiratory symptoms, extend to malaria, anemia, maternal health, and triage workflows.
Introduce clinician dashboard: synchronizing patient history, results, and trend visualizations for community health supervisors.
Engage regulatory and research partners: work with national health bodies to build toward accreditation and more trusted clinical use.
Refine power‑management performance: aim for sub‑threshold compute patterns and adaptive inference scheduling to align with IHS power‑management guidelines.
Built With
- amazon-web-services
- fastapi
- flask
- llms
- postgresql
- pyqt5
- python
- react
- react-native
- supabase
- tailwind
- tensorflow
- vercel