Inspiration

Many real-world decisions today rely on delayed reports, static uploads, or second-hand signals. But when something is happening right now in the physical world, these approaches often fail. TapLive was inspired by a simple but difficult question: how can a system understand whether a real-world event is actually happening right now, and whether it can be trusted?

We wanted to explore a new perspective: treating presence itself as real-time data, not just media content. This led us to experiment with AI reasoning over live signals such as timing consistency, session behavior, location changes, and streaming metadata, rather than relying solely on post-hoc analysis.

What it does

TapLive is a prototype for Real-World Presence Intelligence. It processes live streaming sessions as continuous signal streams and applies AI-assisted reasoning to observe:

  • Whether a session is happening in real time
  • Whether the behavior patterns are internally consistent
  • Whether location and timing signals align with expected physical constraints

Instead of focusing on content consumption, TapLive focuses on observability and trust signals around live presence, enabling systems to reason about what is happening, when, and how reliably.

For this hackathon, we specifically demonstrate:

  • Live session signal ingestion
  • Real-time AI analysis over streaming metadata
  • Structured presence observability outputs for downstream systems

How we built it

TapLive is built as a modular, cloud-native prototype using Google Cloud AI services.
Key components include:

  • A real-time streaming and session layer for live presence signals
  • Signal aggregation for timing, session continuity, and basic location metadata
  • AI inference pipelines using Google Cloud AI to reason over live signal patterns
  • An observability layer that produces structured, interpretable presence insights

The architecture is designed to be:

  • Event-driven
  • Stream-first
  • AI-assisted rather than rule-only

This lets the reasoning evolve as signal complexity increases, without tightly coupling logic to any single application.

Challenges we ran into

  • Reasoning over live signals instead of static data required a different mindset than traditional AI pipelines
  • Balancing real-time performance with explainable AI outputs
  • Avoiding overfitting logic to one specific use case while still delivering a concrete demo
  • Designing trust signals that remain useful even with limited or noisy data

We intentionally avoided heavy assumptions and focused on signal consistency rather than absolute truth claims.

Accomplishments that we're proud of

  • Successfully modeled presence as a stream of signals, not just media
  • Integrated AI reasoning into a real-time pipeline instead of post-processing
  • Produced interpretable observability outputs rather than opaque scores
  • Built a foundation that can support multiple real-world scenarios without redesign

Most importantly, we demonstrated that AI can assist in observing reality in motion, not just in analyzing stored data.

What we learned

  • Real-time AI systems benefit more from consistency reasoning than from single-signal accuracy
  • Observability is just as important as prediction in trust-sensitive systems
  • Designing AI outputs for human interpretation changes how models are used and evaluated
  • Cloud-native AI tooling makes rapid experimentation with live data feasible

This project reinforced the idea that AI systems should help humans understand what's happening, not just classify it.
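As a rough illustration of the signal-consistency idea described above, the sketch below checks a live session's heartbeat signals for timing gaps and physically implausible movement, and emits an interpretable findings list rather than an opaque score. All names, thresholds, and the heartbeat format here are hypothetical, for illustration only, and are not TapLive's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class SessionSignal:
    """One hypothetical heartbeat from a live session."""
    timestamp: float   # seconds since session start, as reported by the client
    lat: float         # latitude reported with this heartbeat
    lon: float         # longitude reported with this heartbeat

def presence_report(signals: list[SessionSignal],
                    expected_interval: float = 1.0,
                    max_speed_mps: float = 50.0) -> dict:
    """Return a structured, human-readable observability output."""
    findings = []
    for prev, cur in zip(signals, signals[1:]):
        dt = cur.timestamp - prev.timestamp
        if dt <= 0:
            findings.append(f"non-monotonic timestamp at t={cur.timestamp}")
            continue
        if dt > 3 * expected_interval:
            findings.append(f"gap of {dt:.1f}s between heartbeats")
        # Rough physical-plausibility check: ~111 km per degree.
        dist_m = ((cur.lat - prev.lat) ** 2 +
                  (cur.lon - prev.lon) ** 2) ** 0.5 * 111_000
        if dist_m / dt > max_speed_mps:
            findings.append(f"implausible movement ({dist_m / dt:.0f} m/s)")
    return {"consistent": not findings, "findings": findings}
```

A session whose heartbeats arrive on time and stay physically plausible yields `{"consistent": True, "findings": []}`; any anomaly shows up as a plain-language finding that downstream systems (or humans) can inspect, which mirrors the interpretable-output goal described above.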
What's next for TapLive Real-World Presence Intelligence and AI Observability

Next steps focus on deepening observability rather than expanding surface features:

  • Richer multi-signal reasoning (time, motion, session behavior)
  • Improved explainability of AI-derived presence insights
  • Extending observability outputs for integration with external systems
  • Exploring additional real-world scenarios where live trust matters

TapLive is not about replacing human judgment; it's about giving systems and people better tools to reason about reality as it unfolds.

Built With

  • modular
  • timing