What it does

We wanted to create a device that displays text (and potentially images, e.g. a clock face) on spinning LEDs. We also wanted it to interface with voice commands through AVS to decide what to display. Even though we hit many obstacles along the way and had to adapt a couple of aspects, we ultimately achieved our vision, so we consider the project a great success.


This is interesting because interfacing AI-based voice recognition software with lights and hardware is cool! It serves both as a useful tool, since it is essentially an Alexa, and as an entertaining, impressive way to display the information. We can see this being used for entertainment, or as a useful way to relay information to deaf users who can talk to Alexa but cannot hear her reply.

How we built it

To build the circuit, we used a circuit board that we drilled into and forced a drill bit through so the whole board could spin. This let us mount all the LEDs, and we then held the Arduino and the ESP unit on with duct tape (like any good engineering project). At first the board was very unbalanced, making it very difficult to hold, but with some coins (and more duct tape) we managed to balance it.

Getting the text to synchronize with the spinning speed was the first real challenge. Initially we planned to use a Hall sensor, but it was not sensitive enough, and the magnet had to sit too close to the spinning breadboard to be usable (a couple of hands were hurt trying to get it to work). Ultimately we went with a photoresistor and a flashlight, which yielded significantly better and more consistent results. The drawback was that the text became slightly harder to read, since we had to keep shining a light at it; this got even worse when we tried to film or photograph the project. The code for this part is very simple: an interrupt (using input capture on pin B0) fires every time the flashlight passes over the photoresistor and records how many timer ticks the last rotation took. We assume the following rotation will take the same number of ticks, which lets us synchronize the text at any speed.
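The synchronization logic above can be sketched in plain C++ (names and tick units are hypothetical; on the real board `onRotationPulse` would be the input-capture ISR):

```cpp
#include <cstdint>
#include <cassert>

// Sketch of the sync logic: on each flashlight pulse the input-capture ISR
// records how many timer ticks the last full rotation took. We assume the
// next rotation will take the same number of ticks.
volatile uint32_t ticksPerRotation = 0;  // period of the last full spin
volatile uint32_t lastCaptureTick  = 0;  // timer value at the last pulse

// Called once per rotation, when the photoresistor sees the flashlight.
void onRotationPulse(uint32_t nowTick) {
    ticksPerRotation = nowTick - lastCaptureTick;
    lastCaptureTick  = nowTick;
}

// Current angular position (0..359 degrees) estimated from the timer.
uint32_t currentAngleDegrees(uint32_t nowTick) {
    if (ticksPerRotation == 0) return 0;           // not synced yet
    uint32_t elapsed = nowTick - lastCaptureTick;  // ticks into this spin
    return (elapsed * 360u / ticksPerRotation) % 360u;
}
```

Because the angle is recomputed from the measured period on every loop, the display stays in sync even as the drill speeds up or slows down.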

Displaying content was simple. We kept an array (‘display’) that divides the 360 degrees into a predetermined number of partitions. The code constantly indexes into that array, using the current tick timer and the calculated spinning speed to find the correct partition and turn the LEDs on or off accordingly.
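A minimal sketch of that lookup, assuming a hypothetical partition count and one byte of LED state per partition:

```cpp
#include <cstdint>
#include <cassert>

// The full circle is divided into NUM_PARTITIONS slices, each holding one
// column of LED on/off bits. (120 is an assumed count, not the real one.)
const int NUM_PARTITIONS = 120;
uint8_t display[NUM_PARTITIONS] = {};  // one byte = 8 LEDs per column

// Map the position within the current rotation to a partition index.
int partitionFor(uint32_t elapsedTicks, uint32_t ticksPerRotation) {
    return (int)((uint64_t)elapsedTicks * NUM_PARTITIONS / ticksPerRotation)
           % NUM_PARTITIONS;
}

// In the real loop, each bit of this byte would drive one LED pin.
uint8_t ledColumnNow(uint32_t elapsedTicks, uint32_t ticksPerRotation) {
    return display[partitionFor(elapsedTicks, ticksPerRotation)];
}
```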

The next challenge was to get from a string to the correct contents of the ‘display’ array. To achieve that, we used a library that had already mapped every ASCII character to a bitmap of LED on/off states (i.e. every character was already drawn). Using that map, we iterate through each character received and add each of its columns to ‘display’. When the received string is too large to fit on the spinner, we append it to the ‘buffer’ array (which is much larger); then, every few milliseconds, we shift every entry of the ‘display’ array down one index and pull the next LED column from ‘buffer’. This creates the scrolling-text effect.
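The scrolling step can be sketched as follows (array sizes and names are illustrative, not the project's actual values):

```cpp
#include <cstdint>
#include <cassert>

// Every few milliseconds the 'display' columns shift by one index and the
// next column is pulled in from the larger 'buffer' that holds the whole
// rendered string. Sizes below are hypothetical.
const int DISPLAY_SIZE = 8;
const int BUFFER_SIZE  = 32;

uint8_t displayCols[DISPLAY_SIZE] = {};
uint8_t buffer[BUFFER_SIZE]       = {};
int bufferHead = 0;  // next buffer column to enter the display

void scrollStep() {
    // Shift every display column one slot toward index 0...
    for (int i = 0; i + 1 < DISPLAY_SIZE; ++i)
        displayCols[i] = displayCols[i + 1];
    // ...and feed the next column in from the buffer, wrapping around.
    displayCols[DISPLAY_SIZE - 1] = buffer[bufferHead];
    bufferHead = (bufferHead + 1) % BUFFER_SIZE;
}
```

Calling `scrollStep` on a timer makes the rendered text march across the spinning display one column at a time.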

Additionally, the text displayed by the spinner was determined by responses from Alexa. This was very cool, as it linked an impressive API to the output of our hardware.

For the AVS portion of the project we made some slight modifications. Although configuring Alexa on a Pi is a relatively straightforward process, it would have taken Amazon too long to publish our skill to the skill store for general use. We therefore used a PC that already had AVS with Alexa installed; all other milestones remained the same. The AVS process worked as follows:

1. You initialize the ‘ESE350-Final-Project’ skill on an Alexa-enabled device.
2. When prompted, you can ask Alexa any of a series of questions or requests: a random fact, a random question, or the time.
3. Alexa communicates with a Lambda-function back end (built with Node.js) that captures Alexa’s response to these questions.
4. The Lambda function then sends this information via a POST request to a website we hosted publicly on Amazon EC2.
5. This website (also built with Node.js) keeps track of the most recent response and serves a webpage consisting solely of that response, represented as a JSON object (see the final presentation for this in action!).
6. The LED spinning assistant came equipped with a NodeMCU module paired with an ESP8266 WiFi chip, programmed to make a GET request to the aforementioned website once every ten seconds.
7. The chip parses the retrieved JSON object, extracts Alexa’s response, and sends it to the Arduino over software-serial communication.

The NodeMCU has hardware serial built in, but the two boards did not seem easily compatible over hardware serial, so we used the ATMEGA328P-Software-Serial library. This library configures any two pins of your choosing as faux RX and TX pins. Because we only needed to receive data from the NodeMCU on the Arduino, we configured pin 3 to act as an RX pin using UART interrupts and Timer2.

After the string is fully sent, one iteration is complete! The LED spinning assistant was designed to be fully modular, so the AVS portion of the project can repeat the preceding steps indefinitely (or at least until the battery runs out).
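Step 7's JSON handling can be sketched in plain C++. The key name and payload below are invented for illustration; a real build would more likely use a proper JSON library:

```cpp
#include <string>
#include <cassert>

// Sketch of the NodeMCU side: the webpage serves a flat JSON object, and we
// pull out the string value of one key before forwarding it over software
// serial. The key name "response" is an assumption, not the project's actual
// field name, and this parser only handles flat, unescaped string values.
std::string extractJsonValue(const std::string& json, const std::string& key) {
    std::string needle = "\"" + key + "\":\"";
    size_t start = json.find(needle);
    if (start == std::string::npos) return "";  // key not present
    start += needle.size();
    size_t end = json.find('"', start);         // closing quote of the value
    if (end == std::string::npos) return "";
    return json.substr(start, end - start);
}
```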

Finally, we also managed to implement one extra feature. When we ask Alexa for the time, the Lambda function returns a specifically formatted time that our code recognizes; instead of just printing the text, it draws a clock with the hands at the correct positions. It looks pretty cool!
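The clock-face math reduces to mapping the parsed time onto the 360-degree display (a sketch, assuming integer degrees are precise enough for the partition resolution):

```cpp
#include <cassert>

// Where on the 360-degree display each clock hand should point.
// The minute hand moves 6 degrees per minute (360/60); the hour hand moves
// 30 degrees per hour (360/12) plus half a degree per elapsed minute.
int minuteHandDegrees(int minute) {
    return minute * 6;
}

int hourHandDegrees(int hour, int minute) {
    return (hour % 12) * 30 + minute / 2;
}
```

Each hand is then drawn as a radial line of lit LEDs in the partition closest to its angle.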

Challenges we ran into

We had to change our approach for AVS in one way: we realized we would not be able to use a Raspberry Pi for the project, since Amazon was too slow in publishing our skill to the store. Although somewhat disappointing, this was easily worked around by installing AVS on a PC and communicating with Alexa that way. It also suggests a natural next step for the project: getting our skill publicly published on the skill store, which would let any Alexa device send requests and display Alexa's response regardless of location! We also pivoted from a Hall sensor to a photoresistor for calculating rotational speed, as the latter proved more accurate. Another major issue, though one with a quick fix, was the motor. None of the motors we found in Detkin were fast enough to achieve the desired effect, so we ultimately used a drill. Obviously this was not ideal, since it had to be operated and held by hand.

Accomplishments that we're proud of

Now that it is done, we are certainly most proud that we achieved our overall vision for the project: displaying the text and being able to interact with it wirelessly. On the AVS side, we were really proud of hosting our own website (as opposed to tunneling with Ngrok, which is what we were doing previously) and achieving real-time communication between Alexa and the Arduino.

What we learned

We gained a greater understanding of many things related to embedded systems. On the Alexa portion of the project, we learned how EC2, Lambda, IoT, and AVS work, and how to do WiFi communication with an Arduino-based product (NodeMCU + ESP8266). On the LED and hardware side, we gained a greater understanding of real-time system design, input-capture interrupts on the ATmega328P, timer control, and the complicated aerodynamics of an Arduino Uno as it flies dangerously and unpredictably across the room.

What's next for Spinning LED Assistant

As a next step we could certainly improve the hardware. An actual integrated circuit board, with the LEDs and the Arduino soldered together, would be not only more practical but also much safer. A motor that could spin constantly would likewise be more efficient; that way we could set it up to sit on someone's table and display information continuously. We also want to publish our skill to the skill store and use a Raspberry Pi so that this becomes a more modular product!
