What it does

The robot detects its current location, asks the user for the object's latitude and longitude, and then turns to face that location. The math involved is:

The bearing β from point A to point B can be calculated as

β = atan2(X, Y)

where X and Y are:

X = cos θb * sin ∆L

Y = cos θa * sin θb − sin θa * cos θb * cos ∆L

Here θa and θb are the latitudes of A and B, and ∆L is the difference between their longitudes (B minus A).
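As a sketch, the bearing formula translates directly into Python with `math.atan2` (the function name and degree-based interface are my own choices, not from the original script):

```python
import math

def bearing(lat_a, lon_a, lat_b, lon_b):
    """Initial bearing from point A to point B, in degrees.
    0 = north, 90 = east, measured clockwise."""
    theta_a = math.radians(lat_a)
    theta_b = math.radians(lat_b)
    delta_l = math.radians(lon_b - lon_a)

    # X = cos(theta_b) * sin(dL)
    x = math.cos(theta_b) * math.sin(delta_l)
    # Y = cos(theta_a) * sin(theta_b) - sin(theta_a) * cos(theta_b) * cos(dL)
    y = (math.cos(theta_a) * math.sin(theta_b)
         - math.sin(theta_a) * math.cos(theta_b) * math.cos(delta_l))

    # atan2 returns (-180, 180]; normalize to [0, 360)
    return math.degrees(math.atan2(x, y)) % 360
```

For example, from the origin to a point due east on the equator, the bearing comes out as 90 degrees.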

How I built it

I wrote the script entirely in Python. The two libraries I needed were gps3 (to receive coordinates from the GPS receiver) and pantilthat (to actually turn and face the object). I first developed the script locally on my laptop, then SSHed into the Raspberry Pi for testing and kept iterating on the design until the results were more accurate.
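The glue between the two libraries might look roughly like the sketch below. The gps3 calls (`GPSDSocket`, `DataStream`, `TPV`) and `pantilthat.pan` are the libraries' real APIs, but the function names, the starting-heading parameter, and the overall structure are my own assumptions about how the script could be organized, not the author's actual code:

```python
import math

def pan_angle(target_bearing, start_heading):
    """Servo pan angle (degrees) needed to face target_bearing, given an
    assumed starting compass heading for the robot. Clamped to the
    pan-tilt HAT's roughly -90..90 degree pan range."""
    # Signed difference in [-180, 180)
    relative = ((target_bearing - start_heading + 180) % 360) - 180
    return max(-90, min(90, relative))

def face_object(target_lat, target_lon, start_heading=0.0):
    """Read the robot's current fix from gpsd, then pan toward the target.
    Hardware-only sketch: needs a running gpsd plus the gps3 and
    pantilthat libraries, so the imports live inside the function."""
    from gps3 import gps3
    import pantilthat

    socket = gps3.GPSDSocket()
    stream = gps3.DataStream()
    socket.connect()
    socket.watch()
    for raw in socket:
        if not raw:
            continue
        stream.unpack(raw)
        lat, lon = stream.TPV['lat'], stream.TPV['lon']
        if lat == 'n/a':  # no GPS fix yet
            continue
        # Bearing formula from the section above, inlined
        theta_a = math.radians(float(lat))
        theta_b = math.radians(target_lat)
        delta_l = math.radians(target_lon - float(lon))
        x = math.cos(theta_b) * math.sin(delta_l)
        y = (math.cos(theta_a) * math.sin(theta_b)
             - math.sin(theta_a) * math.cos(theta_b) * math.cos(delta_l))
        b = math.degrees(math.atan2(x, y)) % 360
        pantilthat.pan(pan_angle(b, start_heading))
        break
```

The `pan_angle` helper is where the calibration assumption lives: without a compass, `start_heading` has to be supplied (or measured by hand) rather than sensed.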

Challenges I ran into

There were hardware problems involving weak GPS signal, weak router signal, and so on. The biggest software problem I faced was calibrating the robot. Since there was no compass module, it was hard to determine the "starting angle" and then tell the robot exactly where to move. This took a lot of trial and error to get more accurate results.

What I learned

While I have worked on robotics projects before, this was one of the first times I have had the opportunity to work on a more mathematical project involving GPS coordinates.

What's next for Object Locator with GPS input

I believe the next big step for this project would be to implement object detection using YOLOv5. This would allow the robot to keep following an object once it has locked its sight on it.

Built With

python, gps3, pantilthat, raspberry-pi
