Inspiration

More than 90% of U.S. adults consume more sodium each day than the 1,500 milligrams recommended by the Institute of Medicine (IOM) and multiple other medical associations. This is true despite decades of scientific and public health guidance recommending a restricted dietary sodium intake. The recommendation to limit sodium intake has been made most strongly for populations with established cardiovascular disease, including hypertension, cerebrovascular disease, congestive heart failure, aortic aneurysm, and chronic kidney disease.

What it does

The Calorie LifeCam captures an image every 30 seconds and classifies it using IBM Watson's Visual Recognition service, which categorizes food images and can also distinguish food from non-food. The images are pushed to a Google Cloud Storage bucket, and we set up Google Cloud IoT Core so that we can also push telemetry data to the cloud. The user can view the images on a continuous loop via a local MQTT server that we created.

Our LifeCam is also adept at power management. It uses a Razor IMU to determine the orientation of the camera: if the camera has been lying flat for a sufficient amount of time (2 minutes), we enter deep sleep mode, using the ESP32's Ultra-Low-Power (ULP) coprocessor support to shut down every internal system except the RTC (real-time clock), which needs only a 6 µA draw to maintain functionality. Every 30 seconds we wake briefly to sample the IMU over the UART communication protocol and check whether the orientation has changed, so that we can exit deep sleep mode and resume normal operation.
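As a rough illustration, the periodic wake-and-check cycle could look something like the ESP-IDF sketch below. This is a hedged sketch rather than our exact code: it uses a plain timer wake-up rather than a ULP program, and the pin assignments, flatness thresholds, and IMU frame parsing (imu_is_flat()) are illustrative placeholders.

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/gpio.h"
#include "driver/uart.h"
#include "esp_sleep.h"

#define IMU_UART          UART_NUM_1
#define WAKE_INTERVAL_US  (30ULL * 1000 * 1000)   /* re-check orientation every 30 s */

/* Illustrative placeholder: read one ASCII frame from the Razor IMU over UART
 * and decide whether the camera is lying roughly flat. */
static bool imu_is_flat(void)
{
    char line[128];
    int len = uart_read_bytes(IMU_UART, (uint8_t *)line, sizeof(line) - 1,
                              pdMS_TO_TICKS(100));
    if (len <= 0) {
        return false;                      /* no data yet: assume the camera is worn */
    }
    line[len] = '\0';

    float pitch = 0.0f, roll = 0.0f;
    /* Assumes a comma-separated yaw,pitch,roll frame; the real firmware's
     * output format may differ. */
    if (sscanf(line, "%*f,%f,%f", &pitch, &roll) != 2) {
        return false;
    }
    return fabsf(pitch) < 10.0f && fabsf(roll) < 10.0f;   /* roughly horizontal */
}

void app_main(void)
{
    /* UART link to the Razor IMU (pins are illustrative). */
    const uart_config_t cfg = {
        .baud_rate = 115200,
        .data_bits = UART_DATA_8_BITS,
        .parity    = UART_PARITY_DISABLE,
        .stop_bits = UART_STOP_BITS_1,
        .flow_ctrl = UART_HW_FLOWCTRL_DISABLE,
    };
    uart_param_config(IMU_UART, &cfg);
    uart_set_pin(IMU_UART, GPIO_NUM_17, GPIO_NUM_16,
                 UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE);
    uart_driver_install(IMU_UART, 1024, 0, 0, NULL, 0);

    if (imu_is_flat()) {
        /* Still face-down: sleep again and wake in 30 s to re-check. */
        esp_sleep_enable_timer_wakeup(WAKE_INTERVAL_US);
        esp_deep_sleep_start();            /* does not return */
    }

    /* Orientation changed: fall through to the normal capture/upload loop. */
}
```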

Baseline Goals

  • Write image processing library in C
  • Capture and send out an image every 30 seconds, completing each capture-and-send cycle within the 30-second window (see the sketch after this list)
  • Build power management subsystem using a battery
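
The 30-second cadence in the second goal maps naturally onto a fixed-period FreeRTOS task. The sketch below is illustrative only and assumes hypothetical capture_image() and publish_image() helpers from our camera and networking code.

```c
#include <stddef.h>
#include <stdint.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

#define CAPTURE_PERIOD_MS 30000            /* one frame every 30 seconds */

/* Hypothetical helpers provided by the camera driver and networking code. */
extern size_t capture_image(uint8_t *buf, size_t max_len);
extern void publish_image(const uint8_t *buf, size_t len);

static void capture_task(void *arg)
{
    static uint8_t frame[160 * 120 * 2];   /* one QQVGA RGB565 frame */
    TickType_t last_wake = xTaskGetTickCount();

    for (;;) {
        size_t len = capture_image(frame, sizeof(frame));
        if (len > 0) {
            publish_image(frame, len);     /* push to MQTT / the cloud bucket */
        }
        /* Block until exactly one period after the previous wake-up, so the
         * capture and upload together have to finish within 30 seconds. */
        vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(CAPTURE_PERIOD_MS));
    }
}
```

In practice the task would be started from app_main() with xTaskCreate().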

Reach Goals

  • Achieve 24+ hour battery life
  • Interface with Google Cloud IoT Core for storage of images (see the sketch after this list)
  • Use Machine Learning Library to classify images of food (IBM-Watson)
  • Design a PCB for the whole system
  • Magnetometer for orientation sensing (so we can enable a deep sleep mode if the product is placed face-down)
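
For the Cloud IoT Core goal, the device connects over MQTT with a JSON Web Token as the password and publishes telemetry to a per-device events topic. The sketch below is a hedged outline, assuming a pre-generated JWT, an available Google root CA bundle, and placeholder project/registry/device names; the struct fields follow the ESP-IDF 4.x esp-mqtt API.

```c
#include "mqtt_client.h"

/* Placeholder identifiers; real values come from the Cloud IoT Core registry. */
#define GCP_CLIENT_ID \
    "projects/my-project/locations/us-central1/registries/my-registry/devices/my-device"
#define GCP_EVENT_TOPIC "/devices/my-device/events"

extern const char *google_roots_pem;       /* Google root CA bundle (assumed available) */

static esp_mqtt_client_handle_t gcp_connect(const char *jwt)
{
    const esp_mqtt_client_config_t cfg = {
        .uri       = "mqtts://mqtt.googleapis.com:8883",
        .client_id = GCP_CLIENT_ID,
        .username  = "unused",             /* Cloud IoT Core ignores the username */
        .password  = jwt,                  /* authentication is via the JWT */
        .cert_pem  = google_roots_pem,
    };
    esp_mqtt_client_handle_t client = esp_mqtt_client_init(&cfg);
    esp_mqtt_client_start(client);
    return client;
}

static void gcp_send_telemetry(esp_mqtt_client_handle_t client,
                               const char *payload, int len)
{
    /* QoS 1 so the broker acknowledges the event. */
    esp_mqtt_client_publish(client, GCP_EVENT_TOPIC, payload, len, 1, 0);
}
```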

Challenges We ran into

A big problem came right at the beginning: we spent the entire first week trying to use the ArduCam with the ESP32. The ArduCam depends on Arduino libraries and the Arduino IDE, which we were not allowed to use, so we spent that week trying to write the necessary camera functionality ourselves. In the second week we switched to the OV7670 camera module (without FIFO) and had much better success. However, the first images we generated were very saturated, which we attributed to the breadboard and the long wires we had. The PCB behaved better, but we still had problems with the images and thought the camera module itself might be at fault. After we rewired and soldered everything onto a perfboard, the images were finally clear.

Another constant source of challenges was the ESP-IDF framework. We wanted to code this project with as much bare-bones functionality as possible, so we opted for the raw ESP-IDF framework rather than the Arduino IDE. We ran into issues with some libraries not keeping up with the latest framework version and spent a lot of time combing through GitHub issues to find suitable fixes. Incorporating Google Cloud IoT Core also provided more challenges than anticipated, specifically around library support, and we had to patch some of those libraries ourselves.

Baseline Prototype

The baseline prototype was built on our custom PCB. We had all the functionality specified in our baseline goals except for the battery system. The Razor IMU was also part of the system, although the deep sleep functionality was not yet implemented. We were still having trouble generating an image from the camera at this stage, but we had written most of the code we needed for the entire system.

Reach Demo

By the reach demo we had successfully integrated the ESP32 with Google Cloud as well as IBM Watson. We also designed a casing for the product and had one laser cut. Frustratingly, we were still unable to generate an image from our camera even though every other subsystem was working. The IMU was correctly configured to distinguish between lying flat and upright, which is what we needed to switch between deep sleep mode and regular operation. We decided to give it another shot and fix the camera by the public demo day.

Public Demo

We changed our approach, rewired everything on a perfboard, and changed the configuration of some of the camera pins. This proved to be the difference maker, and we were finally able to generate images. With this newfound momentum we went full steam ahead and redesigned the product, adding an indicator LED and embedding a battery in the system to make it a fully wireless wearable device. We were able to shrink the device relative to the reach demo and designed a new casing as well. It was a great feeling to take a picture of pizza with our camera, display the image on our local server, and have Watson recognize it as pizza via our image server.

What's next for Calorie Life Cam

Now that we've successfully built a fully functional prototype, the next step is to actually manufacture a product. This means miniaturizing the device and condensing everything around one small PCB. There are also some software pieces still to be built, such as a mobile app and a streamlined image storage pipeline so that each customer can access all of their own images.
