I was always amazed by how stable human vision is, and wondered why video from cameras is always so shaky. I realized that a huge part of the human vision system is that the eye is always fixated on a single object of interest.
What it does
Unlike other stabilization software that tries to keep the entire frame from shaking, mine mimics the human vision system: it uses object recognition to find an object of interest (a human face, body, etc.) and then keeps that object completely stable in the output. The software also uses no angle or acceleration measurements, which means it can be applied to footage from the most rudimentary of filming devices.
How I built it
I used Python with OpenCV only. First, it uses a Haar cascade to detect faces and bodies as areas of interest. Then it crops a fixed-size section of the frame centered on the object of interest, so as the camera moves around, the output video still shows only the region around that object.
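The writeup doesn't include code, so here's a minimal sketch of how that detect-and-crop loop might look. The cascade file is OpenCV's bundled frontal-face model; the crop dimensions and detection parameters are illustrative guesses, not the project's actual values.

```python
# Sketch of face-centered crop stabilization with OpenCV Haar cascades.
# crop_window() is pure Python so the clamping math is easy to test on its own.

def crop_window(frame_w, frame_h, cx, cy, out_w, out_h):
    """Top-left corner of an out_w x out_h crop centered on (cx, cy),
    clamped so the window never leaves the frame."""
    x = min(max(cx - out_w // 2, 0), frame_w - out_w)
    y = min(max(cy - out_h // 2, 0), frame_h - out_h)
    return x, y

def stabilize(video_path, out_w=640, out_h=480):
    import cv2  # imported here so crop_window() stays dependency-free
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        if len(faces):
            fx, fy, fw, fh = faces[0]            # first detection
            cx, cy = fx + fw // 2, fy + fh // 2  # face center
            h, w = frame.shape[:2]
            x, y = crop_window(w, h, cx, cy, out_w, out_h)
            cv2.imshow("stabilized", frame[y:y + out_h, x:x + out_w])
        if cv2.waitKey(1) == 27:                 # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

Because the crop is re-centered on the detection every frame, camera shake turns into motion of the crop window rather than motion of the subject.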
Challenges I ran into
Dealing with rotation was an interesting problem. Rather than using an IMU, I wanted to see if it was possible to determine how much a frame had rotated from the image alone (as humans can). I couldn't find anything online about it, so I set out to find a solution. My best attempt tracks how key points rotate around the frame and uses that to estimate the angle, but it needs more work before it's fully reliable.
Accomplishments that I'm proud of
My rotation algorithm worked well most of the time, and the lateral movement stabilization is solid.
What I learned
A lot of OpenCV, and how to ditch a potential solution to a problem when it isn't going anywhere.
What's next for Equilibrium
Figuring out the rotation problem.