What's next for OmniGuide: making it production-ready, and I need collaborators. Right now, OmniGuide works as a proof of concept. I'm pushing it to production, and I can't do it alone.

The technical roadmap:
- Sub-100ms latency with edge inference (WebRTC optimization, model quantization)
- Persistent memory architecture using vector embeddings
- AR integration (Vision Pro, Quest passthrough)
- Real-time spatial mapping for precision overlays
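To make the persistent-memory roadmap item concrete, here is a minimal, dependency-free sketch of embedding-based recall: store text snippets as vectors, then retrieve the closest one to a query by cosine similarity. Everything here is illustrative, not OmniGuide's actual implementation — the `MemoryStore` class, the bag-of-characters `embed()` stand-in, and the example texts are assumptions; a production build would use a real embedding model and a vector database.

```python
import math

class MemoryStore:
    """Toy persistent-memory sketch: store (text, embedding) pairs and
    recall the closest stored text to a query via cosine similarity."""

    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    @staticmethod
    def embed(text):
        # Stand-in embedding: a character-frequency vector over a-z.
        # A real system would call an embedding model here.
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    @staticmethod
    def cosine(a, b):
        # Cosine similarity: dot product over the product of norms.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def add(self, text):
        self.items.append((text, self.embed(text)))

    def recall(self, query):
        # Return the stored text whose embedding is nearest the query's.
        q = self.embed(query)
        return max(self.items, key=lambda item: self.cosine(q, item[1]))[0]

store = MemoryStore()
store.add("user's car battery was replaced last month")
store.add("router firmware updated to v2.1")
best = store.recall("car won't start, battery issue?")
print(best)
```

Swapping the toy `embed()` for a learned model and the in-memory list for a vector index is what turns this sketch into the persistent memory the roadmap describes.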
Looking for engineers who want to build something real:
- Frontend wizards (WebRTC, Canvas, AR frameworks)
- ML/AI engineers (Gemini API optimization, multi-modal fusion)
- Backend architects (low-latency streaming, distributed systems)
- Computer vision specialists (real-time object detection, 3D reconstruction)
The vision: most AI today is a chat interface. We're building the co-pilot for physical tasks: the thing you actually reach for when your car won't start or you're troubleshooting hardware at 2 AM. If you've shipped real products and want to build the future of human-AI interaction, let's talk. This is v1; the platform potential is massive. DM me on Devpost if you liked this project and want to collaborate :]