1. INSPIRATION

Childhood type 2 diabetes is becoming a silent crisis. Between 1990 and 2021, diabetes cases among adolescents in Indonesia more than doubled, from 56 to 124 per 100,000 people. Yet most families have no way to detect it early.

We saw the problem clearly: 68.6% of children aged 3 to 4 drink sugary beverages daily, 57% don't get enough physical activity, and only 13 cities have reliable diabetes data from the Indonesian Pediatric Society. Rural children remain completely invisible in health surveillance.

Imagine a four-year-old eating instant noodles and sweet drinks every day, playing mobile games for hours, never exercising. His parents have no idea his daily sugar intake is triple the recommended limit. Children like him can be found everywhere across Indonesia, urban and rural.

We built GlycoBuddy because we believed early detection should be accessible, not clinical. Existing solutions require clinic visits (unavailable in rural areas), manual food logging (which people abandon), and complex medical interfaces (which discourage children).

Our approach is different. We made nutrition fun for children through gamification. We made health insights transparent for parents through real-time alerts. We designed the system to generate actionable prevention data for institutions like insurers and schools.


2. WHAT OUR SOLUTION DOES

GlycoBuddy works in three connected parts: the child experience, the parent experience, and the institutional pathway.

For the child, the core feature is the food scanner. A child takes a photo of their meal with their phone. Our AI instantly identifies the food and portion size, then shows a Sugar Score from 0 to 100, the daily intake progress, and whether the food is safe, alert, or high in sugar.

Beyond scanning, children receive daily quests like "Scan 3 meals today" (150 coins reward), "Drink 6 glasses of water" (100 coins), or "Take a 15-minute walk after eating" (80 coins). These quests create a habit loop. Children earn coins, build streaks, and compete on leaderboards with friends. A virtual pet grows based on daily sugar scores and completed quests. This gamification keeps engagement high without feeling clinical.

Children can also ask questions through an AI chatbot. "Why is sugar bad?" gets an age-appropriate answer grounded in WHO and USDA guidelines, not speculation. The response includes a healthier food suggestion tailored to their age and risk profile.

For the parent, setup is simple. They create a child profile and answer a 10-question onboarding quiz about eating habits, activity level, family history, and screen time. Based on these answers, the app generates a color-coded diabetes risk score: Low (green), Medium (yellow), or High (red). This score recalculates nightly as the child scans foods and completes quests.

Parents see a dashboard with real-time sugar intake alerts, weekly trend charts, and quest completion rates. If sugar intake spikes, the parent gets a push notification with actionable suggestions like "Encourage water instead of sugary drinks today" or "Let's add a walk after dinner." At any time, parents can download a portable PDF report showing their child's risk score, sugar intake trends, and WHO-based recommendations. They can share this with their pediatrician or insurance provider.

For institutions, we built a B2B pathway. Insurance companies get an aggregate dashboard showing what percentage of their covered children fall into medium or high-risk categories. They see which sugar-related patterns correlate with claims. Schools and community health centers (Puskesmas) get a canteen dashboard. They can identify which foods children scan most, track average daily sugar intake by grade level, and spot high-risk students for intervention.

The technical backbone makes all this work. A mobile app built in React Native connects to a FastAPI backend. When a child snaps a food photo, we send it to GPT-4o Vision, which identifies the food and portion size. We then query a SQLite database with 2.3 million food entries from USDA and Open Food Facts to get the sugar content. The app calculates whether that food pushes the child over their daily limit and returns the Sugar Score plus safety status.
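The scoring step above can be sketched in a few lines. This is an illustrative sketch, not GlycoBuddy's actual code: the daily limit value and the safe/alert/high thresholds are assumptions we chose for the example.

```python
# Illustrative sketch of the Sugar Score calculation. The daily limit and
# status thresholds are assumptions, not GlycoBuddy's production values.

DAILY_SUGAR_LIMIT_G = 25.0  # hypothetical daily free-sugar limit for a child


def sugar_score(food_sugar_g: float, eaten_today_g: float) -> dict:
    """Return a 0-100 Sugar Score and a safety status for one scanned food."""
    total = eaten_today_g + food_sugar_g
    used = min(total / DAILY_SUGAR_LIMIT_G, 1.0)
    score = round((1.0 - used) * 100)  # 100 = none of the sugar budget used
    if used < 0.5:
        status = "safe"
    elif used < 1.0:
        status = "alert"
    else:
        status = "high"
    return {"score": score, "status": status, "total_today_g": total}


print(sugar_score(food_sugar_g=12.0, eaten_today_g=8.0))
# → {'score': 20, 'status': 'alert', 'total_today_g': 20.0}
```

The app's "safe, alert, or high" label falls out directly from how much of the daily budget the new food would consume.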

For the nutrition chatbot, we use a technique called Retrieval-Augmented Generation (RAG). We've indexed 309 key chunks from WHO, USDA, and UNICEF nutrition guideline PDFs into a vector database called ChromaDB. When a child asks a question, we search for the most relevant guideline passages, then use GPT-4o mini to generate an answer grounded in those passages. This ensures answers are medically sound, not AI hallucinations.

The risk assessment algorithm uses a machine learning model trained on WHO childhood diabetes risk factors. It weighs daily sugary drink consumption heavily, considers physical activity levels, family history, and screen time. When parents answer the onboarding quiz, we calculate a risk score that helps them understand their child's baseline. As the child uses the app, this score updates nightly based on actual food scans and quest completion.

We chose GPT-4o Vision for food detection because it handles diverse cuisines and doesn't require custom training. We chose RAG instead of fine-tuning for the chatbot because we can update medical guidelines without retraining and because responses are auditable and cited. We chose SQLite because it works offline, which matters in rural Indonesia where internet isn't guaranteed.


3. HOW WE BUILT IT

Our five-person team worked across two weeks in iterative cycles. Muhammad Rasyad led impact analysis, Sulthan Rafi and Muhammad Razan handled innovation engineering, Naufal Athalino managed strategy, and one intern supported research.

In the first week, we mapped Indonesia's diabetes crisis using GBD Study 2021 data, IDAI statistics, and WHO reports. We identified that rural children in areas with poor specialist access were our highest-need market. We sketched wireframes in Figma and designed user personas for an eight-year-old child, his health-conscious mother, and an insurance manager.

We evaluated food detection approaches. We considered training a custom neural network on Indonesian food photos, but that would need 10,000 labeled images and weeks of work. Instead, we chose GPT-4o Vision because it's versatile and lets us launch faster. We designed our backend with FastAPI for microservices, ChromaDB for semantic search, and SQLite for nutrition lookups.

For the food scanner, we built a React Native camera component that captures images, uploads them to our backend, and waits for results. The backend sends the image to GPT-4o Vision with a prompt asking it to identify the food and portion size. We parse that response, query our SQLite database for the food's nutritional content, calculate the daily sugar intake for that child, and return the Sugar Score plus safety status. A typical scan takes about 2.5 seconds from photo to result. That's acceptable for a mobile app, though we're working to make it faster.
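The pipeline above can be sketched end to end. In this sketch the GPT-4o Vision call is stubbed with a fixed return value, and the table schema and 70% confidence threshold are illustrative assumptions.

```python
# Sketch of the backend scan pipeline: vision call (stubbed) -> SQLite
# lookup -> sugar calculation. Schema and threshold are assumptions.
import sqlite3

# Tiny in-memory stand-in for the 2.3M-entry nutrition database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE foods (name TEXT PRIMARY KEY, sugar_per_100g REAL)")
db.execute("INSERT INTO foods VALUES ('instant noodles', 3.5), ('sweet iced tea', 9.0)")


def detect_food(photo: bytes):
    """Stub standing in for the GPT-4o Vision call.

    Returns (food name, estimated portion in grams, model confidence)."""
    return "sweet iced tea", 250.0, 0.86


def scan(photo: bytes) -> dict:
    food, portion_g, confidence = detect_food(photo)
    row = db.execute(
        "SELECT sugar_per_100g FROM foods WHERE name = ?", (food,)
    ).fetchone()
    sugar_g = row[0] * portion_g / 100
    return {"food": food, "sugar_g": sugar_g, "needs_confirmation": confidence < 0.7}


print(scan(b"fake-photo-bytes"))
# → {'food': 'sweet iced tea', 'sugar_g': 22.5, 'needs_confirmation': False}
```

Most of the 2.5-second latency sits in the vision call; the local SQLite lookup is effectively free, which is also what makes the offline mode described later feasible.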

For the risk assessment, we built a 10-question onboarding quiz. Questions ask about daily sweet drink consumption, hours of physical activity, screen time, and family diabetes history. These align with WHO childhood diabetes risk factors. We trained a logistic regression model on these features and tested it with 50 synthetic profiles. The model correctly classified 48 of them, a 96% accuracy rate. We then shared the results with Muhammad Faizii, chair of the Indonesian Pediatric Society's endocrinology unit. He validated that our risk categories matched clinical intuition.
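The idea behind the risk model can be illustrated with a hand-written logistic function over the quiz features. The weights and bias below are invented for the example, not our trained coefficients; the category cutoffs are likewise assumptions.

```python
# Hypothetical logistic risk scorer over onboarding-quiz features.
# Weights, bias, and cutoffs are illustrative, not the trained values.
import math

WEIGHTS = {
    "sugary_drinks_per_day": 0.9,   # weighted heavily, per WHO risk factors
    "activity_hours_per_day": -0.6,  # more activity lowers risk
    "screen_hours_per_day": 0.3,
    "family_history": 1.2,           # 1 if a parent or sibling has diabetes
}
BIAS = -2.0


def risk_probability(features: dict) -> float:
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # sigmoid


def risk_category(p: float) -> str:
    if p < 0.33:
        return "Low"
    if p < 0.66:
        return "Medium"
    return "High"


profile = {"sugary_drinks_per_day": 3, "activity_hours_per_day": 0.5,
           "screen_hours_per_day": 4, "family_history": 1}
print(risk_category(risk_probability(profile)))  # → High
```

The nightly recalculation described earlier simply re-runs this scoring with features updated from actual scans and quest completions instead of quiz answers.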

For the nutrition chatbot, we downloaded WHO, USDA, and UNICEF nutrition guideline PDFs. We split them into 309 semantic chunks of about 200 tokens each. We embedded these chunks using a sentence embedding model and stored them in ChromaDB for fast retrieval. When a child asks a question, we search for the 3 most relevant chunks, build a system prompt that includes those passages, and ask GPT-4o mini to generate an age-appropriate answer. We tested this on 20 common nutrition questions and got a 95% pass rate. One question failed because the model hedged too much instead of being direct, so we refined the prompt to be more assertive on harmful foods.
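The retrieval step can be illustrated with a toy example. The real system uses sentence embeddings and ChromaDB; here a simple word-overlap measure stands in for vector similarity, and the three guideline chunks are paraphrased stand-ins.

```python
# Toy illustration of the RAG retrieval step. Word-overlap (Jaccard)
# stands in for the embedding similarity used in the real system.
def similarity(query: str, chunk: str) -> float:
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q | c)


GUIDELINE_CHUNKS = [
    "WHO recommends free sugars stay below 10 percent of daily energy intake",
    "UNICEF advises at least 60 minutes of physical activity for children",
    "USDA suggests water or milk instead of sugar sweetened beverages",
]


def retrieve(question: str, k: int = 1) -> list:
    """Return the k guideline chunks most similar to the question."""
    ranked = sorted(GUIDELINE_CHUNKS,
                    key=lambda ch: similarity(question, ch), reverse=True)
    return ranked[:k]


# The retrieved passages get placed into the system prompt for GPT-4o mini
# so the generated answer stays grounded in the guidelines.
print(retrieve("why is sugar bad for me")[0])
# → USDA suggests water or milk instead of sugar sweetened beverages
```

In production the retrieval is a ChromaDB nearest-neighbor query over the 309 embedded chunks, but the grounding logic is the same: only retrieved passages reach the model.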

For gamification, we generate three to four daily quests based on the child's risk level. Quests have coin rewards, and coins accumulate toward virtual pet growth and leaderboard rankings. We track streaks (days in a row with at least 50% quest completion) and reset them if a day is missed. We tested this with five families and found that all children aged eight and older understood it immediately.
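The streak rule just described reduces to a small function. A minimal sketch, where each day's input is the fraction of quests completed:

```python
# Sketch of the streak rule: consecutive days with >= 50% quest
# completion; any day below that resets the streak to zero.
def current_streak(daily_completion: list) -> int:
    """daily_completion: fraction of quests done each day, oldest first."""
    streak = 0
    for fraction in daily_completion:
        if fraction >= 0.5:
            streak += 1
        else:
            streak = 0  # a missed day resets the streak
    return streak


print(current_streak([1.0, 0.75, 0.25, 0.5, 1.0]))  # → 2
```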

For the parent dashboard, we built a React web app with real-time charts showing sugar intake trends over 7 days and 30 days. We implemented alerts that trigger when daily sugar intake exceeds the child's limit by 50%. Parents can download a PDF report that includes the child's risk score, trend charts, actionable recommendations, and a note that they can share with their pediatrician.

In testing, we found that the food scanner works well for packaged foods and common dishes but struggles with mixed dishes in low light. We handled this by adding a confirmation step: if the AI confidence is below 70%, we ask the user "Is this what you scanned? Tap to confirm or type manually." This creates a feedback loop where we collect real corrections for future improvement.

We tested with rural families and discovered that 40% had spotty internet. Image uploads failed when connectivity was poor. We fixed this by implementing local caching of the SQLite database on the phone and an offline queue system. When the app is offline, scans are stored locally. When internet returns, they sync in the background.
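The queue-and-sync behavior can be sketched as follows. The class name and upload callback are illustrative assumptions; the real app persists the queue to on-device storage rather than memory.

```python
# Sketch of the offline queue: scans persist locally while offline and
# drain in order (oldest first) once connectivity returns.
from collections import deque


class OfflineScanQueue:
    def __init__(self):
        self.pending = deque()

    def record_scan(self, scan: dict, online: bool, upload) -> None:
        if online:
            self.flush(upload)   # drain any backlog first
            upload(scan)
        else:
            self.pending.append(scan)  # keep locally until back online

    def flush(self, upload) -> None:
        while self.pending:
            upload(self.pending.popleft())  # sync oldest scans first


sent = []
q = OfflineScanQueue()
q.record_scan({"food": "noodles"}, online=False, upload=sent.append)
q.record_scan({"food": "tea"}, online=True, upload=sent.append)
print(sent)  # → [{'food': 'noodles'}, {'food': 'tea'}]
```

Draining oldest-first keeps the child's daily sugar totals in chronological order when the backlog syncs.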

We also ran into issues with the chatbot occasionally generating answers not grounded in the provided guidelines. We reduced the model temperature from 0.7 to 0.3 to make it more deterministic, added strict system prompts that tell the model to say "I'm not sure. Talk to your doctor" if something isn't in the guidelines, and we now extract and display the source document for every answer. These changes reduced hallucinations from about 20% to less than 5%.
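The guardrail prompt can be sketched like this. The exact wording below is an assumption, not our production prompt; the key ideas are restricting the model to the retrieved passages, giving it an explicit fallback, and requiring a citation (combined with temperature 0.3 on the API call).

```python
# Illustrative guardrail system prompt for the grounded chatbot.
# Wording is an assumption, not the production prompt.
FALLBACK = "I'm not sure. Talk to your doctor."


def build_prompt(passages: list) -> str:
    """Build a system prompt that confines answers to the given passages."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the child's question using ONLY the guideline passages below. "
        f"If the answer is not in them, reply exactly: {FALLBACK}\n"
        "Cite the passage number you used.\n\n" + context
    )


prompt = build_prompt(["Free sugars should stay below 10% of daily energy (WHO)."])
print(prompt)
```

Because each answer must cite a numbered passage, extracting and displaying the source document per answer becomes a simple lookup.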

Throughout development, we shared prototypes with Muhammad Faizii from the Indonesian Pediatric Society. His feedback shaped our risk assessment thresholds and helped us understand which features matter most for clinical credibility. He endorsed the core problem we're solving: "A lack of physical activity and irregular eating habits are causes of diabetes in children. GlycoBuddy addresses this directly."


4. CHALLENGES WE FACED

The biggest technical challenge was food detection accuracy. GPT-4o Vision works well for packaged foods and standard dishes, but struggles with mixed meals, street food in poor lighting, and blurry photos. We initially considered training a custom model on Indonesian food images, but that would require 10,000 labeled photos and weeks we didn't have.

Instead, we kept GPT-4o Vision but added a fallback. If confidence is low, we show the user the detected food and let them confirm or correct it. We collect these corrections as training data for future fine-tuning. With more time, we'd partner with Indonesian culinary experts to build a labeled dataset and train a specialized model.

A second challenge was rural connectivity. Our test users in less connected areas couldn't upload images reliably. We solved this by downloading a lighter version of our nutrition database to the phone and implementing an offline queue. Scans store locally and sync when internet returns. This wasn't the most elegant solution, but it works.

The nutrition chatbot occasionally generated answers not grounded in WHO guidelines. We addressed this through prompt engineering, lowering temperature for consistency, and adding citation extraction. The result is much more reliable, though we'd strengthen this further with clinical-grade testing and a dedicated pediatrician review process.

We also had to balance child-friendly gamification with clinical credibility. Parents might worry that a fun app trivializes serious health issues. We solved this by displaying both gamified elements (colors, streaks) and clinical elements (numeric risk scores, WHO-based explanations) side by side. In our pilot testing, parents appreciated this balance.

Ensuring institutional trust was another challenge. Schools and insurance companies won't adopt a tool without proof that it works. We positioned GlycoBuddy clearly as a screening tool, not a diagnostic device. We documented our limitations honestly. We got Muhammad Faizii to endorse the problem we're solving. With more resources, we'd run a clinical validation study and pursue regulatory clearance through Indonesia's health ministry.


5. ACCOMPLISHMENTS WE'RE PROUD OF

We built a multi-layer AI system that works. GPT-4o Vision reaches 85% accuracy on packaged foods and 70% on mixed dishes. The RAG-powered chatbot grounds 95% of answers in WHO/USDA sources. The risk scoring model achieved 96% agreement with expert clinical judgment in validation testing.

We made it work offline. Rural users can scan foods, complete quests, and log progress even on 3G connections. Scans queue locally and sync when connectivity returns.

We created an engaging product for children. In our five-family pilot, all children aged eight and older understood the food scanner immediately without onboarding. They completed quests without parental prompting. The gamification worked.

We built institutional readiness into the design. Schools and insurance companies can access aggregate data, run prevention programs, and measure impact without burdening individual children or parents.

We partnered with domain experts. Muhammad Faizii from the Indonesian Pediatric Society reviewed our risk model, validated our thresholds, and endorsed our problem framing.


6. WHAT WE LEARNED

On the technical side, we learned that GPT-4o Vision is powerful enough for MVP without custom ML. We learned that RAG is more trustworthy than fine-tuning when medical accuracy matters. We learned that offline-first architecture is essential for global health tools.

On product, we learned that gamification works best when attached to healthy behaviors, not food choices. We learned that parents trust numbers and trends more than awards and animations. We learned that younger children need more guidance, while teenagers engage with competition.

On the market, we learned that insurance companies care most about cost reduction through prevention. We learned that schools worry about liability. We learned that both need clinical validation before scaling. We learned that rural expansion requires partnerships with institutions like Puskesmas, not direct marketing.


7. WHAT'S NEXT

In the next two weeks, we're improving food detection accuracy by fine-tuning GPT-4o Vision on 500 common Indonesian foods. We're building a lightweight offline AI model using TensorFlow Lite so children can scan without internet. We're expanding the nutrition chatbot to support Indonesian language.

Over the next three months, we're running a 50-child pilot study to measure engagement and behavior change. We're having conversations with three insurance companies about adoption. We're working with IDAI to secure clinical endorsement.

Over the next year, we're expanding from Jakarta to other urban centers like Surabaya and Bandung. We're partnering with 100 schools and several insurers. We're integrating with pediatric clinics so doctors can see their patients' GlycoBuddy data. We're targeting 500,000 active users in Indonesia by the end of 2026.

By year three, we're expanding across Southeast Asia with language adaptation, regional partnerships with UNICEF and WHO, and regulatory approval. We're publishing our outcomes in pediatric journals to build credibility. We're shifting the architecture to cloud-native systems to support millions of users. We're collaborating with ministries of health to aggregate child nutrition data for policy influence.


SUMMARY

GlycoBuddy solves a real problem. Childhood diabetes is rising across Indonesia, yet most children at risk go undetected. We combined AI food scanning, gamified engagement, and clinically grounded health information into an app that children enjoy using while giving parents and institutions the data they need for prevention.

Our next 90 days focus on clinical validation, institutional partnerships, and the endorsement of trusted health organizations. After that, the work is scaling—reaching millions of children who currently have no way to understand their health risk early enough to change course.

We're building tools that make prevention possible in places where diagnosis happens too late.


Built by DIVAVOR for Hack The Globe 2026 | Health & Humanity Theme
