Inspiration

When floods and landslides strike, every minute matters. Yet most rescue operations still rely on manual scanning of aerial images, a slow and error-prone process. At the same time, farmers struggle with inefficient fruit harvesting and pesticide use because they lack real-time terrain data. We wanted to build a single computer vision foundation model that could serve both worlds, helping drones and robots see, decide, and act autonomously.

What it does

Concave Sweep is a computer vision–driven sweep-path planning system that converts aerial or terrain images into robotic navigation commands. It:

  • Detects clusters of human presence or anomalies in disaster zones.

  • Identifies ripe fruits, soil health, and vegetation stress in agriculture.

  • Generates optimized sweep paths and GPS coordinates for robots to act in real time.

The system runs locally or through a REST API, allowing any drone or robot to integrate it within minutes.
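To make the integration story concrete, here is a minimal sketch of what consuming the API's output could look like. The field names and payload shape are assumptions for illustration, not the project's documented schema:

```python
import json

# Hypothetical response body from the Concave Sweep REST API
# (endpoint schema and field names are assumed, not documented).
response_body = json.dumps({
    "detections": [
        {"label": "human_cluster", "confidence": 0.91, "pixel": [412, 230]},
        {"label": "ripe_fruit", "confidence": 0.87, "pixel": [150, 388]},
    ],
    # Ordered GPS waypoints forming the generated sweep path.
    "sweep_path": [
        {"lat": 13.0827, "lon": 80.2707},
        {"lat": 13.0829, "lon": 80.2710},
    ],
})

def next_waypoint(body: str) -> dict:
    """Parse the API response and return the first waypoint to drive toward."""
    data = json.loads(body)
    return data["sweep_path"][0]

print(next_waypoint(response_body))  # {'lat': 13.0827, 'lon': 80.2707}
```

A drone or robot controller would poll such an endpoint and feed each waypoint to its motion stack, which is what keeps integration down to minutes rather than days.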

How we built it

We designed Concave Sweep as an 8-algorithm pipeline combining:

  • Semantic segmentation for soil vs. vegetation mapping

  • Clustering algorithms for human/fruit detection

  • Path coverage optimization using concave boundary recognition

  • Real-time coordinate generation via a motion planner

The model was implemented in Python with OpenCV, NumPy, and custom path-planning logic. We then linked it to a Jetson-controlled robot for live tabletop testing, where the robot autonomously navigated to target zones based on generated coordinates.
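As a simplified sketch of the coverage-planning stage, the snippet below generates a boustrophedon (back-and-forth) sweep over the bounding box of detected targets. It is a stand-in for the actual concave-boundary planner, which trims the sweep to the region's true shape rather than a rectangle:

```python
import numpy as np

def sweep_path(points: np.ndarray, row_spacing: float = 1.0):
    """Generate back-and-forth sweep waypoints covering the axis-aligned
    bounding box of the detected target points (a simplified stand-in for
    concave-boundary coverage planning)."""
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    path = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        # Alternate sweep direction on each row to avoid dead travel.
        xs = (x_min, x_max) if left_to_right else (x_max, x_min)
        path.append((xs[0], y))
        path.append((xs[1], y))
        left_to_right = not left_to_right
        y += row_spacing
    return path

# Illustrative detected-target coordinates (e.g. fruit or human clusters).
targets = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [1.0, 2.0]])
print(sweep_path(targets, row_spacing=1.0))
```

In the real pipeline these waypoints would then pass through the motion planner, which converts them into timed commands for the Jetson-controlled robot.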

Challenges we ran into

Balancing speed and accuracy: Real-time inference required optimizing the sweep algorithms without losing precision.

Mapping pixels to real-world coordinates: Translating 2D imagery into actionable motion paths took extensive calibration.

Hardware constraints: Edge computing on Jetson hardware forced us to compress our CV models for latency improvements.

Data generalization: Ensuring our pipeline worked across both disaster and agricultural terrains required domain adaptation.
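To illustrate the pixel-to-world mapping challenge above, a minimal version of the calibration can be fit as a 2D affine transform from a few pixel/world correspondences. The point pairs and scale below are illustrative, not our measured calibration values:

```python
import numpy as np

def fit_affine(pixels: np.ndarray, world: np.ndarray) -> np.ndarray:
    """Fit a 2D affine map (world ≈ [px, py, 1] @ T) from at least three
    pixel/world point correspondences via least squares."""
    n = len(pixels)
    design = np.hstack([pixels, np.ones((n, 1))])        # (n, 3)
    T, *_ = np.linalg.lstsq(design, world, rcond=None)   # (3, 2)
    return T

# Illustrative calibration: 1 pixel = 2 cm, origin offset (10 cm, 5 cm).
pixels = np.array([[0, 0], [100, 0], [0, 100]], dtype=float)
world = np.array([[10, 5], [210, 5], [10, 205]], dtype=float)
T = fit_affine(pixels, world)

def pixel_to_world(px: float, py: float) -> np.ndarray:
    """Map an image pixel to a world coordinate using the fitted transform."""
    return np.array([px, py, 1.0]) @ T

print(pixel_to_world(50, 50))  # ≈ [110. 105.]
```

A plain affine fit like this assumes a flat scene and a roughly overhead camera; an angled camera would need a full homography, which is part of why the calibration took extensive iteration in practice.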

Accomplishments that we're proud of

Built a working real-time sweep algorithm that generates optimized robot paths from image input.

Created a robot demo where the robot physically moves toward detected zones on a tabletop map.

Developed a modular API architecture for future integration with drones and autonomous systems.

Designed a multi-domain foundation model capable of adapting between rescue and farming environments.

What we learned

How to integrate computer vision with motion control under tight resource constraints.

The importance of algorithm modularity, allowing reusability across domains.

That bridging CV and robotics isn’t just about detection—it’s about turning insights into movement.

How small latency improvements (even seconds) can scale into massive real-world impact during rescue operations.

What’s next for Concave

Expand testing with real drone imagery and field-scale simulations.

Integrate thermal and LiDAR data for better victim detection in rescue use cases.

Deploy edge-optimized versions for disaster relief organizations and agricultural partners.

Develop a web dashboard for real-time monitoring, path visualization, and team coordination.
