We often hear of people losing their lives because rescue teams cannot reach confined spaces such as caves, tunnels, or voids under debris. This inspired us to build an autonomous robot that can assist rescue teams in finding helpless people in places the rescuers themselves cannot reach. With that being said, our Alnitak is trying to learn how to accomplish that.
What it does
Alnitak is a 4WD autonomous robot that can trace light in darkness and guide a human trapped inside a confined place. When a robot goes inside a cave, it may lose its connection to the outside control station. To overcome that, Alnitak traces light and moves accordingly: it searches for light, and whenever it gets a lock on a source, it moves towards it. It also has a battery monitor, so its battery level can be indicated to give an idea of how long it will keep operating smoothly.
How we built it
Alnitak was built around an Arduino, which is the main controller of the robot. To sense light, it uses two photoresistors (LDRs), left and right. An LDR's resistance varies inversely with the light falling on it. With the help of a potential divider, this change in resistance is converted into a proportional voltage, which can then be measured using the Arduino's ADCs. We did this for both LDRs and collected readings under different ambient light levels.

After that, we calibrated the robot by measuring the threshold values while lighting the left LDR and keeping the right LDR comparatively in the dark, finding a sweet spot by moving the light and adjusting its position. We did the same for the right LDR and recorded its threshold values. We put these into the code and created the logic for turning left and right. For moving forward, we shone light on both LDRs, measured both thresholds simultaneously, and put those values into the code. In this way, we created the correct movement logic for Alnitak.

Next comes the battery monitor, where we need to measure the battery voltage. This required another voltage divider, since the battery voltage is around 8.11 V when fully charged, above the ADC's 5 V rating. The divider scales the voltage down by a factor of about 1.9, bringing a full charge to roughly 4.3 V. The Arduino reads this divided voltage on an ADC pin and lights one of three LEDs indicating Full, Medium, or Low charge. As the battery voltage drops, the ADC value drops too, lighting the corresponding LED. The drive is built from four 300 RPM motors driven by an L293D motor driver.
Challenges we ran into
The Arduino's ADCs picked up very high-frequency noise, so the readings jumped around continuously. The two LDRs also differed in sensitivity, which cost us a lot of time in calibrating the results. While connecting the Arduino to the battery, we mistakenly wired the battery's +ve terminal to the Arduino's +5 V pin, causing a small short that burned the Arduino's voltage regulator. Finally, finding the ADC threshold values for the rover's correct movement was a bit tedious.
Accomplishments we are proud of
First of all, we reduced the ADC noise and smoothed the readings using a simple recursive filter. We also bent and positioned the LDRs so as to get the same effective sensitivity from both of them. Replacing and bypassing the Arduino's onboard voltage regulator, and having it work properly afterwards, was a great success and a joyous moment. Last but not least, after calibrating the rover optimally and watching it work like a charm, we felt, as Tom Cruise says, "mission accomplished".
What's next for Alnitak
We are working on increasing the sensitivity and accuracy of the LDRs with differential amplifiers, so that they can track a very faint light from a long distance (say, an opening at the far end of a tunnel). We also plan to attach a servo arm for picking up objects, and finally to use a Raspberry Pi running an ML algorithm, together with GPS, to make the drive more autonomous.