HAL 10000 Logo

Inspiration

Have you ever felt uncomfortable after sitting at your desk for a long time? Have you developed back pain, joint pain, or the dreaded "tech neck"? Articles from the US Dept of Health and Human Services [1] and the Victoria State Gov Dept of Health [2] suggest this could be the result of an increasingly prevalent bad habit many of us share: poor posture.

In recent years, several studies [3] [4] have drawn a correlation between poor posture and a range of short- and long-term ailments. You may, like us, already be aware of this issue but struggle to break the habit. While other changes, such as buying an ergonomic chair or an adjustable desk, can mitigate some of the same problems as good posture, they serve best as a supplement to, rather than a replacement for, improving your posture [5].

To combat this, we built HAL 10000, a desktop posture monitor designed to help you keep track of your posture and stay accountable. You might recognise HAL's cousin, HAL 9000, from 2001: A Space Odyssey.

How does HAL work?

When placed to the side of your desk, with a clear view of your body, chair and screen, HAL continuously monitors your seating position, giving spoken feedback via OpenAI's GPT-4o vision API and ElevenLabs' text-to-speech API.


More detailed numerical information on your posture is accessible through our companion app, which also provides handy charts, a comprehensive log, and a weekly leaderboard so you can compete with your friends for the best posture!

How we built it

Hardware : We used a Raspberry Pi 4B with a Raspberry Pi Camera Module v2.1, and the following components:

  • PAM8403 Audio Amplifier board
  • 0.3W 8R Speaker
  • Breadboard
  • Jumper wires
  • Tactile push-buttons (for volume control)

RPi Software : HAL runs a Python script which repeats the following cycle every 15-30 seconds:

  1. Takes a photo with the Camera Module.
  2. Sends the photo to GPT-4o with a detailed prompt containing posture guidelines and scoring rules; the model scores the posture in the photo across 4 categories and provides brief textual feedback.
  3. Converts the textual feedback to audio in the voice of HAL 9000 through the ElevenLabs API, and sends the scores to AWS to be stored and later displayed in the companion app.
  4. Plays the audio to the user.
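In outline, the scoring step (2) might look like the sketch below. The category names, prompt wording, JSON shape, and helper names here are our illustrative assumptions, not HAL's exact prompt or code:

```python
import json

# Hypothetical posture categories; the real prompt's 4 categories may differ.
CATEGORIES = ["head", "shoulders", "back", "legs"]

def build_prompt() -> str:
    """Assemble a scoring prompt asking the model for structured JSON output."""
    return (
        "You are a posture coach. Score the person in the photo from 1-10 "
        f"in each category: {', '.join(CATEGORIES)}. "
        'Reply as JSON: {"scores": {...}, "feedback": "<one sentence>"}'
    )

def parse_reply(reply: str) -> tuple[dict, str]:
    """Extract per-category scores and spoken feedback from the model reply."""
    data = json.loads(reply)
    scores = {c: int(data["scores"][c]) for c in CATEGORIES}
    return scores, data["feedback"]

# An example reply in the shape the prompt requests:
reply = ('{"scores": {"head": 6, "shoulders": 8, "back": 5, "legs": 9}, '
         '"feedback": "Straighten your back."}')
scores, feedback = parse_reply(reply)
print(scores["back"], feedback)  # → 5 Straighten your back.
```

The feedback string would then be handed to the text-to-speech step, and the scores dict to the AWS upload.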

Enclosure : We built the enclosure out of MDF, black and silver spray paint, wood screws, and card. We also used a hinge to make the interior accessible through the back panel, and a small piece of stripboard for detailing on the front panel.

Companion App : The companion app was built using React Native and Expo. The backend comprises two AWS services: Lambda (API endpoints) and DynamoDB (database for storing scores, the leaderboard, etc.).
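The weekly leaderboard boils down to a simple aggregation over stored scores. A pure-Python sketch of that logic is below; in practice it would run inside a Lambda reading DynamoDB items, and the field names (`user`, `score`, `timestamp`) are our assumptions:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def weekly_leaderboard(records, now=None):
    """Average each user's posture scores over the past 7 days, best first.

    `records` is a list of dicts like {"user": str, "score": float,
    "timestamp": ISO-8601 str} -- a stand-in for DynamoDB items.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=7)
    per_user = defaultdict(list)
    for r in records:
        # Keep only scores recorded within the leaderboard window.
        if datetime.fromisoformat(r["timestamp"]) >= cutoff:
            per_user[r["user"]].append(r["score"])
    board = [(user, sum(s) / len(s)) for user, s in per_user.items()]
    return sorted(board, key=lambda entry: entry[1], reverse=True)
```

Keeping the aggregation as a pure function like this makes it easy to unit-test without touching AWS.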


(You can download the .apk file at the bottom of this submission, under "Try It Out")

To show you more about how we built HAL, we wrote The Making of HAL 10000 on Craft.

Challenges we ran into

  • The Raspberry Pi we started with began to have boot issues, for an unknown reason, whenever the camera module was connected. It took a spare SD card, a spare RPi, and some troubleshooting time, but we eventually resolved this.
  • We initially used a linear 5 kΩ potentiometer (i.e. a rotary knob) to control volume, but the sound was mostly inaudible, with a sharp increase in volume towards the end of the turn. We thought a logarithmic potentiometer would resolve this, but we ended up using two tactile push-buttons instead.
  • The function-calling feature of the GPT-4o API was inconsistent, sometimes failing to output the data needed for a complete posture assessment. We partially resolved this with some basic error handling in Python, and we leave a more robust solution as a future improvement to HAL.
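One common way to guard against incomplete model output is a validate-and-retry wrapper around the scoring call. The sketch below illustrates the idea; the category names and helper names are ours, and `call_model` stands in for the actual GPT-4o request:

```python
import time

# Hypothetical posture categories; stand-ins for HAL's real ones.
CATEGORIES = ["head", "shoulders", "back", "legs"]

def is_complete(result) -> bool:
    """A result is usable only if every category has a score from 1-10."""
    return (isinstance(result, dict)
            and all(isinstance(result.get(c), int) and 1 <= result[c] <= 10
                    for c in CATEGORIES))

def score_with_retry(call_model, attempts=3, delay=1.0):
    """Call a (possibly flaky) scoring function until it returns valid output.

    Returns None if every attempt produced incomplete data, so the main
    cycle can simply skip this photo and try again next time.
    """
    for i in range(attempts):
        try:
            result = call_model()
            if is_complete(result):
                return result
        except Exception:
            pass  # malformed or failed call: fall through to the retry
        if i < attempts - 1:
            time.sleep(delay)
    return None
```

Skipping a single 15-30 second cycle on failure is cheap, which is why returning None is preferable here to crashing the loop.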

Accomplishments that we're proud of

Having never used React Native or the ElevenLabs API before, we're proud to have successfully built an integrated AI-powered hardware device and companion app. We're proud of staying true to the HAL theme, and of building our own enclosure to match. Finally, we're proud to have built a proof-of-concept tool that can contribute to our collective health.

What we learned

We learned for the first time how to create a voice from a sample recording on ElevenLabs, and how to use its API for text-to-speech. We also learned for the first time how to create a mobile app with React Native and Expo, as well as how to handle media (images and audio) in Python. Lastly, on the software side, we gained experience with AWS and the GPT-4o vision API.

On the hardware side, we learned how to strip and wire a 3.5mm cable through an audio amplifier board to play audio through a speaker. We learned how to use the RPi Camera Module to capture and manipulate images, and - although we didn't use one - we also learned about the different types of potentiometers (linear, logarithmic and reverse-logarithmic).

What's next for HAL 10000 - Desktop Posture Monitor

  • A more compact, possibly 3D-Printed enclosure
  • A complete signup and setup flow
  • Volume buttons incorporated into the enclosure
  • Multi-language support

References

[1] https://newsinhealth.nih.gov/2017/08/getting-it-straight

[2] https://www.betterhealth.vic.gov.au/health/conditionsandtreatments/posture#bhc-content

[3] https://www.tandfonline.com/doi/abs/10.1080/10803548.2020.1827528

[4] https://cejsh.icm.edu.pl/cejsh/element/bwmeta1.element.ojs-doi-10_13075_ijomeh_1896_00352

[5] https://www.cuh.nhs.uk/patient-information/seating-and-ergonomics/
