Inspiration

Modern solar farms span thousands of panels, yet tiny faults—cell hotspots, micro-cracks, wiring degradation—can silently reduce output. Manual inspections are slow and expensive, and pre-programmed drone routes can’t adapt to real-world conditions. We wanted to answer one question:

What if solar farms could monitor themselves and request help from drones automatically?

That vision inspired Helios AI—a drone-based autonomous inspection agent that learns from every mission and optimizes operations continuously.

What we learned

  1. We learned how PX4 exposes MAVLink interfaces for offboard control, mission planning, and telemetry. Understanding failsafes, arming checks, and flight-mode switching helped us reliably command the drone.
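The arming checks we gated missions on can be sketched as a simple boolean gate. The field names and battery threshold below are illustrative stand-ins, not PX4's actual parameters; in practice these values come from MAVSDK telemetry streams.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    """Illustrative snapshot of the state we checked before arming."""
    gps_fix: bool
    home_position_set: bool
    battery_pct: float

def ready_to_arm(t: Telemetry, min_battery_pct: float = 30.0) -> bool:
    """Gate arming on a GPS lock, a set home position, and a battery floor,
    mirroring the kind of pre-arm checks PX4 enforces."""
    return t.gps_fix and t.home_position_set and t.battery_pct >= min_battery_pct
```

Commanding a mode switch or offboard setpoint only after this gate passes avoided most of the rejected-arming surprises we hit early on.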

  2. We explored lighting, terrain, sensor plugins, camera pipelines, and asset creation to simulate a realistic solar farm. This taught us the importance of simulation fidelity for AI-driven inspection.

  3. We built agentic reasoning loops where Grok:

a. Interprets panel metadata
b. Detects anomalies from live imagery
c. Updates its memory (mem0)
d. Connects to the underlying custom data storage layer

This taught us how large models can control long-horizon robotics tasks—very different from traditional rule-based autonomy.
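The reasoning loop above can be sketched as a plain perceive-reason-remember cycle. `call_grok`, the anomaly labels, and the memory schema here are illustrative stand-ins, not the real Grok or mem0 APIs.

```python
from typing import Callable

def inspection_loop(panels: list, call_grok: Callable[[dict], dict], memory: list) -> list:
    """For each panel: ask the model to interpret metadata plus recent history,
    record any anomaly it flags, and append the insight to memory so later
    missions can build on it."""
    anomalies = []
    for panel in panels:
        verdict = call_grok({"metadata": panel, "history": memory[-5:]})
        if verdict.get("anomaly"):
            anomalies.append({"panel_id": panel["id"], "type": verdict["anomaly"]})
        memory.append({"panel_id": panel["id"], "verdict": verdict})
    return anomalies
```

Keeping the model call behind a single function made it easy to swap in a stub during simulation runs.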

  4. We learned how to stream mission state live, render panel-health maps, and visualize flight paths with WebSockets and server actions.
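Each live frame we pushed over the WebSocket boiled down to one serialized mission-state message. The exact schema below is an illustrative sketch, not our production payload.

```python
import json
import time

def mission_state_message(mission_id: str, panel_health: dict, path: list) -> str:
    """Serialize one mission-state frame as the JSON payload the dashboard
    consumes to update the panel-health map and flight-path overlay."""
    return json.dumps({
        "mission_id": mission_id,
        "ts": time.time(),                # server-side timestamp for ordering
        "panel_health": panel_health,     # e.g. {"A1": "hotspot", "A2": "ok"}
        "flight_path": path,              # [lat, lon] waypoints flown so far
    })
```

Sending small self-describing frames like this kept the frontend stateless: every frame is enough to redraw the map.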

How we built it

  1. Simulation + Drone Control

a. PX4 SITL for flight physics
b. Gazebo/Ignition with a custom solar-farm scene
c. MAVSDK Python client for sending missions, waypoints, and camera triggers
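Mission generation over the panel rows can be sketched as a serpentine (lawnmower) sweep whose waypoints are then handed to the drone client. The grid helper below is our own illustration, not a MAVSDK API.

```python
def lawnmower_waypoints(origin_lat: float, origin_lon: float,
                        rows: int, cols: int,
                        d_lat: float = 1e-4, d_lon: float = 1e-4) -> list:
    """Serpentine sweep over a rows x cols panel grid: fly along each row,
    reversing direction on alternate rows so passes stay contiguous."""
    waypoints = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            waypoints.append((origin_lat + r * d_lat, origin_lon + c * d_lon))
    return waypoints
```

Each (lat, lon) pair then becomes a mission item with a fixed inspection altitude and a camera trigger at the waypoint.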

  2. Helios AI (Drone Mission Planning, Execution & Memory Layer). It aggregates:

a. Solar farm metric analysis
b. Drone inspection planning and execution
c. Panel metadata
d. Historical missions
e. Panel anomaly detections
f. Real-time analytics and streaming
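One way the aggregation layer folds anomaly detections into actionable state is a per-panel rollup like the sketch below; the threshold and field names are illustrative, not our exact schema.

```python
from collections import Counter

def panel_health_summary(anomaly_events: list) -> dict:
    """Fold a stream of anomaly detections into a per-panel status:
    panels with repeat detections get flagged for priority re-inspection,
    single detections stay on a watch list."""
    counts = Counter(e["panel_id"] for e in anomaly_events)
    return {pid: ("priority" if n >= 2 else "watch") for pid, n in counts.items()}
```

Summaries like this are what feed the panel-health map and let the planner bias the next mission toward flagged rows.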

Challenges we ran into

AI models have been trained on the vast amounts of data that already exist in digital form, and the gap shows as soon as you apply them to real-world problems. Our simulators are weak and don't represent the real world faithfully, and the variations and corner cases make even the best agents struggle, since the visual-spatial understanding of LLMs is limited. Tuning memory granularity so the agent remembered essential insights was also challenging.

We walked away with a deep appreciation for how robotics, simulation, and LLMs can combine to create intelligent systems that continually improve in the real world.

