Inspiration

Our group’s diverse backgrounds and shared love of tech and hardware led us to an elegant idea. We were excited by how a high-quality camera could be controlled remotely, and how image data processing workflows could be dramatically accelerated through third-party software and APIs. These capabilities greatly expand the use cases of the camera hardware while reducing the workload on the photographer.

What it does

Our proof of concept is twofold:
· We automated three of the main tasks photographers perform with DSLR camera equipment.
· Beyond physical interaction with the camera, we developed a way to leverage the camera’s excellent image sensor, creating a stream that can be used for monitoring or for remote video control.

Here’s how we automated those three main tasks:

  1. Our program interacts with a Canon PowerShot camera, taking photos while manipulating camera controls as a photographer would. This is done with Canon's CCAPI. Our interactions control the shutter release and the focusing of the optical lens; focusing in-camera yields a sharper image than attempting corrections in software later. (A code sketch follows this list.)
  2. The image saved to the internal memory card is automatically transferred to computer storage, viewed, and processed, then either accepted or rejected. Currently, processing checks whether specified types of subjects are present in the image through AWS’s Rekognition API. The keep-or-reject parameters are user-defined and adjustable for accuracy and content.
  3. Depending on whether the desired type of subject is present in the photo, the image is either retained on or deleted from the camera’s internal memory card. This allows smaller memory cards to be used and relieves the photographer from reviewing inaccurate or undesired images.
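
A minimal sketch of the three steps in Python: the camera address and the WANTED label set are stand-ins, the shutter endpoint and DELETE-to-remove behavior follow our reading of Canon's CCAPI reference (and may differ by model and firmware), and detect_labels is the standard boto3 Rekognition call.

    import boto3
    import requests

    CAMERA = "http://192.168.1.2:8080"         # hypothetical camera address on the LAN
    WANTED = {"Person", "Dog"}                 # user-defined subject labels to keep
    rekognition = boto3.client("rekognition")  # assumes AWS credentials are configured

    def shoot():
        """Step 1: autofocus and release the shutter over CCAPI."""
        r = requests.post(CAMERA + "/ccapi/ver100/shooting/control/shutterbutton",
                          json={"af": True}, timeout=5)
        r.raise_for_status()

    def subject_present(jpeg_bytes):
        """Step 2: ask Rekognition whether any wanted subject appears in the frame."""
        labels = rekognition.detect_labels(Image={"Bytes": jpeg_bytes},
                                           MaxLabels=20, MinConfidence=80)["Labels"]
        return any(label["Name"] in WANTED for label in labels)

    def review(file_url):
        """Steps 2 and 3: download the image, check it, delete rejects from the card."""
        jpeg = requests.get(file_url, timeout=30).content
        if subject_present(jpeg):
            with open(file_url.rsplit("/", 1)[-1], "wb") as out:
                out.write(jpeg)                   # keep: save a local copy
        else:
            requests.delete(file_url, timeout=5)  # reject: free space on the card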

Our GUI allows simple user-controlled operation of the camera and lets the user review the kept files on a full-size screen. Future development will add controls such as real-time exposure adjustment and automatic image tracking.

How we built it

Our process began with a schematic of the sequential processes needed to perform camera interactions. We wanted a script package that additional planned functions could easily reuse. We started with a REST client to experiment with controlling the camera through the URL-based API. Next, we wrote individual Python methods to facilitate repeated function calls and make the program easier to control in future versions. We then packaged the individual methods into a feature set. Once the backend features were functional, we built a simple web-browser GUI to give the user easy control of the application and to demonstrate upcoming feature sets.
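
To illustrate the packaging, here is a hypothetical version of the wrapper class and one route of the browser GUI. Flask is our choice for the sketch (the framework we actually used isn't named above), and the base URL is a stand-in:

    from flask import Flask, jsonify
    import requests

    class CCAPIClient:
        """Thin wrapper so every feature shares one base URL and one error path."""
        def __init__(self, base="http://192.168.1.2:8080/ccapi/ver100"):
            self.base = base

        def post(self, path, payload=None):
            r = requests.post(self.base + path, json=payload, timeout=5)
            r.raise_for_status()
            return r.json() if r.content else {}

        def shoot(self):
            return self.post("/shooting/control/shutterbutton", {"af": True})

    app = Flask(__name__)
    camera = CCAPIClient()

    @app.route("/shoot", methods=["POST"])
    def shoot():
        """A GUI button posts here; the server relays the call to the camera."""
        return jsonify(camera.shoot())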

Challenges we ran into

The most challenging part of building the application was parsing the JSON responses from the camera into a Pythonic data structure that could be examined based on the user’s inputs. We used Python’s json library for this, converting the camera’s JSON objects into a list of nested key-value dictionaries. Because the API is RESTful, only one conversion was needed; the output data remained simple image data and metadata.
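
As an illustration (the response shape here is a stand-in, not the exact CCAPI schema), a single json.loads call turns a camera response into plain Python structures:

    import json

    # Illustrative CCAPI-style response body listing files on the memory card.
    raw = '{"url": ["http://192.168.1.2:8080/ccapi/ver100/contents/sd/100CANON/IMG_0001.JPG"]}'

    data = json.loads(raw)        # JSON object -> Python dict of lists
    for file_url in data["url"]:  # nested values are now ordinary Python objects
        print(file_url)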

Accomplishments that we are proud of

We solved timing dependencies that are not covered in the API. We noticed we needed to inject specific delays so the camera could finish writing image data to the internal memory card before we read that image back. Understanding the need for this timing, and developing a timing structure that was as fast as possible while guaranteeing the write had completed, was one of our highlights.
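
The paragraph above describes tuned fixed delays; as a sketch, the same timing structure can be generalized to a bounded retry loop, where the interval and attempt cap below are tuning assumptions rather than measured values:

    import time
    import requests

    def fetch_when_ready(file_url, attempts=10, interval=0.2):
        """Poll a newly shot file until the camera finishes writing it to the card.

        A non-200 response or a connection error is treated as "write still in
        progress"; the 0.2 s interval and 10-attempt cap are assumptions to tune.
        """
        for _ in range(attempts):
            try:
                r = requests.get(file_url, timeout=5)
                if r.status_code == 200:
                    return r.content      # write complete; image data is readable
            except requests.RequestException:
                pass                      # camera still busy; fall through and retry
            time.sleep(interval)
        raise TimeoutError("camera never finished writing " + file_url)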

What we learned

The photography industry needs this program.
This hardware interface enables many features and applications, with hundreds of potential use cases for our program, including:

  • Time-lapse photography (sketched just after this list)
  • Carefully timed shots across a controlled camera array
  • Drones taking preplanned aerial photos
  • Custom image tagging & notifications
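
As one example of how little code these use cases need on top of the capture pipeline, a hypothetical time-lapse is just the CCAPI shutter call on an interval timer (camera address and endpoint as assumed earlier):

    import time
    import requests

    CAMERA = "http://192.168.1.2:8080"  # hypothetical camera address, as above

    def time_lapse(frames, interval_s):
        """Capture `frames` photos, one every `interval_s` seconds, over CCAPI."""
        for _ in range(frames):
            requests.post(CAMERA + "/ccapi/ver100/shooting/control/shutterbutton",
                          json={"af": True}, timeout=5).raise_for_status()
            time.sleep(interval_s)

    time_lapse(frames=120, interval_s=30)  # one hour of shooting at 2 frames/minute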

We had to stay focused and not let scope creep occur. We are excited to keep developing and implementing the additional, more powerful features these and other uses will need. Open and direct communication, with Agile-based development and organization, made this project fun and effective.

What's next for FUSION

We are going to expand our program’s feature set to include on-screen GUI controls that work from mobile devices as well as desktop environments. We will add much more sophisticated processing algorithms and additional integrations that are more powerful and dynamic. One example is integrating the camera’s live feed with third-party hardware targeting and positioning systems. Another feature set will make remote autonomous control of multiple cameras by a single photographer or operator possible. We are currently consulting with professional photographers to develop solutions for their specific needs and to alpha-test features that save them more time in production environments.
