What it does

Uses the Pixy2 camera's Color Connected Components (CCC) algorithm to learn specific objects. Once an object has been taught, the camera recognizes it and reports its signature label.

How we built it

First, the Pixy2 was set up via PixyMon to recognize three color-distinctive objects. After connecting the Pixy2 to an Arduino Uno, we wrote code that reads the Pixy2's detection data over I2C and forwards it to the serial port. A Node script then reads the serial data, and an Express server uses socket.io to push the label of the object recognized by the Pixy2 to the website in real time.
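The link between the serial stream and the browser comes down to parsing each line the Arduino prints. Below is a minimal sketch of that parsing step in Node; the `sig:<n> label:<name>` line format is an assumption for illustration (the actual message format is whatever the Arduino sketch emits, not part of the Pixy2 API):

```javascript
// Map a raw serial line from the Arduino to a { signature, label } object.
// The "sig:<n> label:<name>" format is an assumed convention for this sketch;
// any delimited format the Arduino code prints would be handled the same way.
function parsePixyLine(line) {
  const match = /^sig:(\d+)\s+label:(\S+)/.exec(line.trim());
  if (!match) return null; // ignore noise or partial lines
  return { signature: Number(match[1]), label: match[2] };
}

module.exports = { parsePixyLine };
```

On the server side, each parsed object can then be broadcast to connected clients (for example with `io.emit('detection', parsed)`) so the page updates as soon as the Pixy2 sees a known object.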

Challenges we ran into

Getting to know the hardware. We were unsure which microcontroller to use and switched between the ESP32, the Raspberry Pi 4, and the Arduino Uno, so figuring out how to interface with the Pixy2 was a challenge.

Accomplishments that we're proud of

The majority of the technology we used was brand new to us, from the hardware to the software, and we still ended up with an MVP that applies all of this fresh knowledge.

What we learned

Coding for the Arduino Uno and the ESP32, using the Pixy2, and working with Google's Text-to-Speech.

What's next for Pixy2Text

The next step is finishing the Text-to-Speech feature that reads the name of the recognized object out loud, as well as the Translate option, which would make the labels useful in other languages.
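A lightweight way to prototype the speech feature on the website itself is the browser's standard Web Speech API, rather than a server-side service. A minimal sketch, assuming the client receives the recognized label from the socket.io connection described above:

```javascript
// Browser-side sketch: speak a detected label using the Web Speech API.
// speechSynthesis is a standard browser API; wiring it to a socket.io
// 'detection' event is an assumption about how the client is structured.
function speakLabel(label) {
  const phrase = `Detected: ${label}`;
  // Guarded so the function is still safe in non-browser environments.
  if (typeof speechSynthesis !== "undefined") {
    speechSynthesis.speak(new SpeechSynthesisUtterance(phrase));
  }
  return phrase; // returned so the spoken text can be logged or tested
}
```

In the page, this could be hooked up as `socket.on('detection', d => speakLabel(d.label))`; translation would then only need to swap the label text before it is spoken.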
