Inspiration

Machine learning is revolutionizing the world, but only a minority of technically trained people understand it.

We want to make machine learning more accessible to everyone. Omega Tree is a hardware interface of a decision tree that is fun and easy to understand. The best thing is that you don't need any coding or machine learning experience to operate our device!

What it does

Omega Tree demonstrates the workings of a decision tree with a fun visualization and an interactive hardware interface. Users place cards representing particular features into spots on the display to determine the structure of the tree, then experiment to find which arrangement works best. The project determines where cards are located with a mounted camera and computer vision. It computes the best cutoff for a particular feature (the features are continuous) and subset of the data on the fly, and changes the lights to indicate which leaves of the tree lead to one classification and which lead to the other. The data set we used is the classic Iris data set. We classify versicolor (shown green) versus virginica (shown red) flowers, leaving setosa out of the data set.
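To give a flavor of the "best cutoff" computation, here is a minimal sketch in Python of finding the threshold on one continuous feature that minimizes weighted Gini impurity. The function names and sample numbers are illustrative, not our actual code:

```python
# Hedged sketch: best cutoff for one continuous feature by minimizing
# weighted Gini impurity. Names and data are illustrative assumptions.

def gini(labels):
    """Gini impurity of a list of binary class labels (0 or 1)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)  # fraction of class 1
    return 2 * p * (1 - p)

def best_cutoff(values, labels):
    """Try each midpoint between consecutive sorted feature values and
    return the (cutoff, impurity) pair that minimizes the weighted
    impurity of the two resulting subsets."""
    pairs = sorted(zip(values, labels))
    best = (None, float("inf"))
    for i in range(1, len(pairs)):
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= cut]
        right = [l for v, l in pairs if v > cut]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best[1]:
            best = (cut, score)
    return best

# e.g. petal lengths, with versicolor = 0 and virginica = 1
cut, score = best_cutoff([4.0, 4.5, 4.7, 5.1, 5.6, 6.0], [0, 0, 0, 1, 1, 1])
```

On this toy sample the best split lands between 4.7 and 5.1, separating the two classes perfectly. The real device reruns this search for whichever feature card sits at each node, on whatever subset of the data reaches that node.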

How I built it

The lights are controlled by an Arduino board, which in turn is driven by a Raspberry Pi that is connected to the camera and performs all of the relevant computation. To detect the locations of the features, we use custom-made, QR-like cards and OpenCV for image detection.
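The Pi-to-Arduino link is essentially a tiny wire protocol. As a hedged sketch (our actual wire format is not documented here; the start byte, length field, and XOR checksum below are illustrative assumptions), the Pi could frame the red/green state of each leaf LED into bytes like this, with the Arduino implementing the mirror-image decoder in C:

```python
# Hypothetical Pi -> Arduino LED message. The frame layout
# (start byte, count, payload, XOR checksum) is an assumption
# for illustration, not the project's real protocol.

START = 0x7E  # arbitrary frame-start marker

def encode_leds(states):
    """states: list of 0 (green) / 1 (red) flags, one per leaf LED.
    Returns the framed bytes to write to the serial port."""
    payload = bytes(states)
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([START, len(payload)]) + payload + bytes([checksum])

def decode_leds(frame):
    """Inverse of encode_leds; the Arduino side would do the same in C."""
    assert frame[0] == START and frame[1] == len(frame) - 3
    payload = frame[2:-1]
    checksum = 0
    for b in payload:
        checksum ^= b
    assert checksum == frame[-1], "corrupted frame"
    return list(payload)

msg = encode_leds([1, 0, 0, 1])  # two red leaves, two green
```

A checksum of some kind is worth the extra byte here: with hundreds of wires on the back end, a corrupted frame silently lighting the wrong leaf is exactly the kind of bug that is miserable to track down.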

Challenges I ran into

The hardest part was getting the computer vision to work reliably with the tiles. We initially considered colored tiles for the features, but realized the lights would probably interfere with color detection. We then tried letters and symbols, but they didn't work well with our detection methods either. Next we tried open circles detected with a Hough transform, but that wasn't robust enough, so we finally settled on the simple block patterns we use now.
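To make the block-pattern idea concrete, here is a hedged sketch of the decoding step. The 3x3 grid size, the corner anchor cell, and the bit layout are assumptions for illustration, not our actual card format; the input is the card region of the camera frame after thresholding, reduced to one 0/1 value per cell:

```python
# Hypothetical decoder for a QR-like block card. Assumes a 3x3 grid
# whose top-left cell is always filled (an orientation anchor) and
# whose remaining 8 cells encode a feature ID as bits, MSB first.

def decode_card(grid):
    """grid: 3x3 nested list of 0/1 cells from the thresholded card
    region. Returns the encoded feature ID, or None if the
    orientation anchor is missing (i.e. this isn't a valid card)."""
    if grid[0][0] != 1:  # anchor cell must be filled
        return None
    cells = [grid[r][c] for r in range(3) for c in range(3)]
    feature_id = 0
    for bit in cells[1:]:  # skip the anchor, read 8 data bits
        feature_id = (feature_id << 1) | bit
    return feature_id

# A card encoding feature 5 (binary 00000101 across the 8 data cells):
card = [[1, 0, 0],
        [0, 0, 0],
        [1, 0, 1]]
```

The appeal of coarse block patterns over letters or circles is that each cell reduces to a single average-and-threshold over a region of pixels, which tolerates blur, glare from the LEDs, and low camera resolution far better than stroke-level shape detection.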

There was also a point when lights burned out, connections failed, and nothing worked. We had wiring issues, and with hundreds of wires in the back end, it was a nightmare to debug. Fortunately, after some deep thought and careful debugging, we were able to pinpoint the error and fix it.

Accomplishments that I'm proud of

Powering through and getting the CV working well despite all of the setbacks felt like a real achievement. Piecing together all of the moving parts, from interpreting the camera input with computer vision, to the decision-tree computations, through the Arduino and out to the electronic circuit that lights the LEDs correctly, was a lot of work, and it was pretty satisfying to see it all function.

Also, making a product that people from all backgrounds can interact with and learn from.

What I learned

(From the team member who had never used Python before:) Python is painful. The lack of type safety made it extremely difficult to debug code, especially when you are passing arrays into an int parameter (oh wait, there's no such thing as an int parameter in Python). Also, tabs vs. spaces matter in an annoying way they never have before.

Setting up Arduino with Raspberry Pi was pretty cool as well, since we linked low-level hardware all the way to high-level software.

What's next for Omega Tree

As our goal is to expose a larger audience to machine learning, we hope to build an entire suite of hardware-interfacing machine learning devices. We also hope to give users a deeper glimpse into how machine learning actually works: showing more animations of data flow, visualizing how data transformations happen, and offering more ways to interact with and change how the machine learning model works.
