Two members of our group had taken astronomy classes last term, and one of their research proposals was about deep-space navigation; thus the idea was born.
The algorithm we developed determines the camera's location in 3D space using true-range multilateration. The camera's distance from each round body (a planet, or in our demonstration, a colored sphere) is computed with optics formulas based on the camera's focal length and sensor size. Once we know the camera's distance from each of the three bodies, we can find the point in 3D space where the surfaces of the three corresponding range spheres intersect. The camera's coordinates can then be given relative to one of the bodies; in our case we return coordinates relative to the Earth.
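The intersection step can be sketched with the standard true-range multilateration construction (this is a minimal illustration of the technique, not the team's actual code; the function name and coordinate setup are ours):

```python
import math
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Return the two candidate points whose distances to the three
    sphere centers p1, p2, p3 are r1, r2, r3 (true-range multilateration)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    # Build a local frame: ex along p1->p2, ey in the p1-p2-p3 plane.
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = np.dot(ex, p3 - p1)
    ey = (p3 - p1 - i * ex) / np.linalg.norm(p3 - p1 - i * ex)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = np.dot(ey, p3 - p1)
    # Solve the three sphere equations in the local frame.
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))  # clamp noise below zero
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez
```

Note that three ranges always leave two mirror-image candidate positions (one on each side of the plane through the three bodies); a fourth measurement or prior knowledge is needed to pick between them.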
We built a scale 3D model of the scene in space for testing purposes, then took multiple pictures with a phone's camera to test the results. We tested and debugged the code repeatedly until the image recognition worked reliably. Finally, we developed the web interface from scratch using HTML, CSS, and JavaScript.
Most of the challenges we ran into involved tuning the sphere detection in OpenCV. Before an image could be fed into the circle-recognition function, we had to preprocess it, which included blurring and reducing the image to hard edges. Another challenge was that the spheres needed to be differentiated from each other, and we chose to do this by color: the program checks the color inside each detected circle to determine which sphere it is, which feeds into the calculation.
One accomplishment we're proud of is deriving the distance formula from optics. The formula had to be built from the focal length and other characteristics of the camera. It was not easy to derive, and we had to do some calculation to find values for the camera's internal dimensions.
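We don't know the exact form the team derived, but the standard pinhole-camera (similar-triangles) version of this relationship looks like the following, where the parameter names are ours:

```python
def distance_to_sphere(real_diameter_m, pixel_diameter,
                       focal_length_mm, sensor_width_mm, image_width_px):
    """Estimate range to a sphere of known size from its apparent size.

    By similar triangles: distance = real_diameter * focal_length_px / pixel_diameter,
    where the focal length in pixels is f_mm * image_width_px / sensor_width_mm.
    """
    focal_length_px = focal_length_mm * image_width_px / sensor_width_mm
    return real_diameter_m * focal_length_px / pixel_diameter
```

For example, with a 4 mm lens, a 4 mm-wide sensor, and a 1000-pixel-wide image, a 2 m ball that appears 100 pixels across is estimated at 20 m away.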
What we learned is that there is a simple way to determine the location of an object in space. This was also our first time encountering multilateration and optics.
What we have developed is a navigation protocol for CubeSats called CION, which stands for CubeSat Interplanetary Optical Navigation. The system uses image recognition and true-range multilateration software to determine its position in space. A camera on the CubeSat takes a picture of its surroundings, and the software uses edge detection to search for three spherical bodies: the Earth, the Moon, and the Sun. It then applies true-range multilateration to determine its (x, y, z) coordinates in 3D space. This method works by finding the point that lies on all three spheres, each centered on one of the bodies with radius equal to the CubeSat's measured distance from it. We built a scaled mock-up of this scene using a box, paper, and small balls.
What's next for CubeSat Interplanetary Optical Navigation is factoring in more variables and testing the program on a real CubeSat.