Inspiration

I spent three weeks deploying an AI robot that should have taken two hours. The model was trained. The code compiled. The robot didn't move. I had no way to know whether the failure was the model, the firmware, or the board. After interviewing TAs and professors at Penn, I realized this wasn't just my problem. It was every engineering student's problem.
Moonshot adds a visualization layer across the full AI-to-hardware deployment pipeline. Before you flash, you see how your trained model will behave on your specific hardware. During execution, you get a live view of what the code is doing and what the model told it to do. When something breaks, you can pinpoint whether the failure is in the model, the firmware, or the board in seconds, not across lab sessions.
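To make the triage idea concrete, here is a minimal sketch in plain JavaScript, with invented signal names, of how a failure could be attributed to one layer once each layer reports a simple health signal. It illustrates the concept, not Moonshot's actual implementation.

```javascript
// Hypothetical sketch: given one health signal per pipeline layer,
// attribute the failure to the lowest layer that stopped responding.
function attributeFailure({ boardResponding, firmwareHeartbeat, modelOutputValid }) {
  if (!boardResponding) return "board";      // no serial/power response at all
  if (!firmwareHeartbeat) return "firmware"; // board is up, runtime never started
  if (!modelOutputValid) return "model";     // runtime is up, inference misbehaves
  return "ok";
}

// Example: board answers, firmware runs, but the model emits garbage.
console.log(attributeFailure({
  boardResponding: true,
  firmwareHeartbeat: true,
  modelOutputValid: false,
})); // -> "model"
```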
Interactive simulation prototypes built in vanilla HTML/CSS with no frameworks. UI design system in Figma with a custom brand identity. Financial model stress-tested against real course enrollment data from Penn's ESE 3600 TinyML course. Customer discovery conducted through 25+ interviews with faculty, TAs, and students across Penn Engineering.
The hardest problem was defining the right layer of abstraction: specific enough to be genuinely useful for embedded AI debugging, but hardware-agnostic enough to work across Arduino, ESP32, Nicla Vision, and Jetson without locking into one ecosystem.
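One way to picture that abstraction, as a hedged sketch with illustrative board profiles (the names and RAM figures below are rough stand-ins, not Moonshot's data): each target contributes a small capability profile, and every debugging check is written against the profile rather than against one vendor's toolchain.

```javascript
// Hypothetical board profiles; RAM figures are illustrative only.
const boardProfiles = {
  arduinoNano33: { ramKB: 256 },
  esp32:         { ramKB: 520 },
  niclaVision:   { ramKB: 1024 },
  jetsonNano:    { ramKB: 4 * 1024 * 1024 },
};

// One generic, board-agnostic check: will this model fit in RAM at all?
function fitsInRam(modelSizeKB, boardName) {
  return modelSizeKB <= boardProfiles[boardName].ramKB;
}

console.log(fitsInRam(600, "esp32"));      // -> false: caught before flashing
console.log(fitsInRam(600, "jetsonNano")); // -> true
```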
Received a PWIF Discover Award. Built interactive prototypes that simulate real deployment failures, including out-of-distribution (OOD) model inputs, sensor edge cases, and firmware mismatches, before any hardware is touched.
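As one concrete example of the kind of mismatch those prototypes surface, here is a small hypothetical sketch (invented specs, not the prototypes' code): a model exported for RGB input paired with firmware whose camera driver emits grayscale frames, caught by comparing the two specs before anything is flashed.

```javascript
// Hypothetical specs: what the firmware's camera driver emits
// versus what the trained model was exported to expect.
const sensorSpec = { width: 96, height: 96, channels: 1 }; // firmware side
const modelInput = { width: 96, height: 96, channels: 3 }; // model side

// Compare the two specs field by field and report any mismatches.
function checkInputMatch(sensor, model) {
  const mismatches = [];
  for (const key of ["width", "height", "channels"]) {
    if (sensor[key] !== model[key]) {
      mismatches.push(`${key}: sensor=${sensor[key]}, model=${model[key]}`);
    }
  }
  return mismatches;
}

console.log(checkInputMatch(sensorSpec, modelInput));
// -> ["channels: sensor=1, model=3"], flagged in seconds, no hardware needed
```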
Run the Fall 2026 pilot in ESE 3600. Measure time-to-first-deploy. Publish the outcome data. Use the results to close 3–6 paying courses in Spring 2027. Raise a $50K SAFE and ship beta software. Then scale through faculty referrals and demos at universities and conferences.