GomoBot

Team members: Yifeng Zhang, Zhiheng Zhang

Baseline goals

(Achieved) The computer rebuilds the game state from image feedback: it detects where the pieces are with the OpenMV camera and rebuilds the board map in the terminal.

(Achieved) Given a command, the robot arm can pick up a piece and place it in the right spot. The robot can pick and place a piece at any coordinate on the board.

(Achieved) The system can play the game with a human.

Reach goals

(Achieved) Use a motion planning algorithm to plan a smooth path. This was verified to be less efficient and to have larger error in an obstacle-free space compared to commanding the arm directly with inverse kinematics.

(Achieved) Train the artificial intelligence to be unbeatable. We used a cost-driven algorithm by formulating the game as a zero-sum game: the computer computes the cost of every candidate location and picks the best one.

(Partially Achieved) A robust system with one-click play

Problem

As stated in the Motivation section, the idea behind the project is to build a complete robotic system that can play the Go game against people using a camera and a robotic arm. However, since the project relies on state-of-the-art technologies that apply to many products, we can easily scale the technology and achieve different purposes with the very same system. For example, with the camera and robotic arm, such a system could be placed in manufacturing factories to increase the productivity of an assembly line. Similarly, it could be used in stores to help sort and organize different products by detecting objects with the camera and picking and placing them into different sections. The core concept behind the robotic system is that robots can achieve better productivity because they can work nonstop, given electricity and a robust system design. In our opinion, ten years from now, robotic systems with a robot arm, a camera and potentially other sensors will be widely used in everyday life and will help human society.

To achieve a complete Gomoku robot system, we use the predominant sense-plan-act robot control methodology. First, the camera takes the input from the board and detects the pieces through blob detection. The microcontroller then transforms the pixel coordinates into board coordinates and sends them to the computer. Second, the computer takes this input and plans the next move. Finally, the computer commands the robot arm to pick up a piece and place it at the target position. The details and results for each step are presented and explained in the following sections.
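The overall flow can be summarized as a simple sense-plan-act loop. The sketch below is a minimal illustration only: the helper functions stand in for the stages implemented on the OpenMV, the PC and the arm controller, and their names are our own placeholders rather than the actual function names in our code.

```python
# Minimal sketch of the sense-plan-act loop described above.
# All helper functions are hypothetical placeholders for the real stages.
def game_loop():
    board = new_board()                          # empty Gomoku board
    while not game_over(board):
        pieces = read_pieces_from_camera()       # sense: blob detection on the OpenMV
        board = update_board(board, pieces)      # rebuild the board state
        move = best_move(board)                  # plan: evaluate the cost of each move
        send_pick_and_place(move)                # act: command the robot arm over UART
```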

System architecture

The system architecture is shown in the image gallery. The system starts by detecting pieces on the game board. When the OpenMV camera receives the raw image input from the sensor, it first filters the image to increase contrast and converts it to grayscale. Grayscale is used to improve the chance of detecting the pieces successfully, since all pieces are either black or white. We then run a blob detection algorithm on the picture to find all circles within given parameters, and use color segmentation to detect black and white pieces separately. Once the pieces' locations are detected in pixel coordinates, we remap them to board coordinates by linearly transforming the coordinates using the detected board corner coordinates. The board coordinates of the detected pieces are sent to an Mbed STM32 microcontroller and then to the computer through UART.

Once the computer receives the coordinates of the pieces on the board, it computes the robot's next move by evaluating a cost function for every possible move on the board. The next move's coordinate is then transformed into the robot arm's coordinate frame. The computer calculates the path to place the piece and sends the commands to the arm through UART; the arm follows the commands as it receives them. The whole process loops until either the player or the computer gets five pieces in a row and wins the game. (/path/to/system architecture.jpg)
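The last transform in the pipeline, from a board coordinate to a target position in the arm's workspace, can be as simple as an offset plus a fixed cell spacing. The sketch below is illustrative only: the board origin, cell spacing and drop height are hypothetical calibration values, not the numbers used in our system.

```python
# Minimal sketch of mapping a board coordinate (col, row) to an (x, y, z)
# target in the arm's frame. All constants are hypothetical calibration values.
BOARD_ORIGIN = (120.0, -80.0)   # arm-frame position (mm) of intersection (0, 0)
CELL_MM = 20.0                  # spacing between intersections in mm
DROP_Z = 15.0                   # height at which the suction cup releases a piece

def board_to_arm(col, row):
    x = BOARD_ORIGIN[0] + col * CELL_MM
    y = BOARD_ORIGIN[1] + row * CELL_MM
    return x, y, DROP_Z
```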

Hardware effort

OpenMV

We chose the OpenMV Cam M7 to detect the board and pieces. It is a small, low-cost microcontroller camera with an STM32F765VI ARM Cortex-M7 processor running at 216 MHz, 512 KB of RAM, 2 MB of flash, and a 640x480 camera. It also has a nice IDE that lets us detect objects in real time.

At the beginning, we tried to use the OpenMV to detect the grid and its intersections. However, we found two problems: first, the board reflected light strongly, which prevented us from finding the grid; second, the resolution of the camera is too low. Both led to wrong blob detections. We tried taking photos with an iPhone, and its high-quality camera solved these problems.

For the light reflection problem, we made a cardboard board by hand. Previously, if a black piece was placed in a strongly lit area, we could not detect it; the diffuse reflection of the cardboard let us find black pieces. However, the cardboard's color is too close to white for blob detection to separate white pieces from it. We believe this is because the blob detection algorithm in the OpenMV is not robust enough (see the results in the Software effort section).

Since some features of the OpenMV are not complete, we had to find unconventional ways to solve some problems, for example how to send images from the OpenMV to the PC (see the Communication section for details).

Communication

As previously mentioned, the OpenMV supports real-time detection. However, this also occupies its only serial port. Since the OpenMV company hasn't released UVC firmware, we cannot stream real-time images from the OpenMV. The only way to grab pictures is to reset the camera, after which pictures are generated. However, it is unreasonable to reset the camera, remove the SD card from the OpenMV, insert the SD card into the PC, process the image, and put the SD card back into the OpenMV. In the end, we decided to process the image on the OpenMV and transfer only the piece coordinates. Even then, communication was still a big issue. The solution we adopted is to use one pin of the OpenMV as a UART port; the coordinates are transferred from this port to the Mbed through UART, as sketched below.
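A minimal sketch of the OpenMV side of this link is shown below, in MicroPython. The UART number, baud rate and line format ("col,row" per line with an "END" marker) are our assumptions for illustration, not the exact protocol in our code.

```python
# Minimal sketch (MicroPython on the OpenMV): send detected piece coordinates
# out over a spare hardware UART instead of the USB serial port.
from pyb import UART

uart = UART(3, 115200)              # UART 3 is exposed on the OpenMV header pins

def send_pieces(pieces):
    # pieces: list of (col, row) board coordinates of detected black pieces
    for col, row in pieces:
        uart.write("%d,%d\n" % (col, row))
    uart.write("END\n")             # simple end-of-frame marker
```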

An Mbed LPC1768 is used to receive the coordinates from the OpenMV and forward them to the PC. Unlike the OpenMV, the Mbed LPC1768 has three pairs of serial ports. Pin p10 is used to grab data from the OpenMV, and the received data is then sent to the PC at a 115200 baud rate over USB serial.
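On the PC side, reading the forwarded stream can be done with pyserial. The sketch below assumes the same line format as the OpenMV sketch above; the port name is an assumption and depends on the operating system.

```python
# Minimal PC-side sketch: read one frame of piece coordinates from the Mbed's
# USB serial port. Port name and message format are assumptions.
import serial

def read_frame(port="/dev/ttyACM0", baud=115200):
    pieces = []
    with serial.Serial(port, baud, timeout=2) as ser:
        while True:
            line = ser.readline().decode().strip()
            if not line or line == "END":       # end-of-frame marker or timeout
                break
            col, row = map(int, line.split(","))
            pieces.append((col, row))
    return pieces
```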

Hexbot Robotic Arm

The arm used in this project is a prototype from a Kickstarter campaign. It is a four-degrees-of-freedom arm with different modular end effectors. The initial idea was to use a 1-DOF gripper to pick up and place pieces; that did not work out because of the round shape of the pieces. Therefore, we chose to design a suction cup with a mini pump to provide suction force. The pump starts when the arm is about to pick up a piece and stops when the arm places it. To control the robotic arm, we input the world coordinates we would like to reach. The microcontroller then calculates the inverse kinematics of the arm and outputs the resulting motor angle for each of the motors. The arm moves to the input location by linearly turning all motors from the current angles [Θs0, Θs1, Θs2, Θs3] to the target angles [Θg0, Θg1, Θg2, Θg3]. One critical issue we faced was resetting the arm. Since the arm is still a prototype, there are problems with the motor encoders, and the arm could not be reset to the same position every time we started the program. Therefore, we disabled the automatic reset and used only manual reset, which solved the problem.
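The joint-space move described above amounts to linear interpolation between the current and target angles. The sketch below illustrates this under our own assumptions: the step count is arbitrary, and send_joint_angles() is a hypothetical helper standing in for whatever writes a setpoint to the arm's motors.

```python
# Minimal sketch of a joint-space linear move from the current angles
# [Θs0..Θs3] to the IK result [Θg0..Θg3]. Step count and the helper
# send_joint_angles() are illustrative placeholders.
def move_linear(theta_start, theta_goal, steps=50):
    for i in range(1, steps + 1):
        alpha = i / steps
        theta = [s + alpha * (g - s) for s, g in zip(theta_start, theta_goal)]
        send_joint_angles(theta)   # hypothetical: sends one setpoint to the arm
```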

Software effort

Piece detection

With the camera as the only sensor in the system for detecting the position of each piece and the board, we first used the OpenCV package to verify the idea. The initial idea was to take advantage of color segmentation and keep only the black and white parts of a picture; with only those parts left, a blob detection function would be efficient enough to generate the desired piece locations. However, while testing the algorithm we found that the color of a "white" piece in a real image is strongly influenced by the ambient light. Therefore, we took a different approach. First, we load the image in grayscale. Then we perform blob detection on the picture with minConvexity set to 0.85 and minCircularity set to 0.7; this generates the positions of all black pieces. Next, we invert the picture by thresholding it so that intensities in the 200~255 (white) range are kept and everything in the 0~199 range is discarded. Performing blob detection on the inverted picture then generates the positions of all white pieces. Two examples of the results from this algorithm are shown in the image gallery.

After verifying the concept, we started to test the algorithm on the real sensor in the system, OpenMV.

The OpenMV has a pretty good IDE and provides real-time detection. However, in detection mode, the small memory limits its functionality: the maximum resolution of the OpenMV is 640x480, but in detection mode we can only use 160x120, so the picture quality is poor. Fortunately, the algorithm is good enough to find circles, although, as mentioned in the Hardware effort section, we cannot find white pieces. The 'threshold' parameter controls how many circles are found: increasing its value decreases the number of circles detected, and it can also be used to filter out some noise. We also tried increasing the contrast to make the circle edges more obvious, but this still did not help find white circles. A GRAYSCALE image was used to further increase contrast; unfortunately, white pieces were still missing. Note that the black pieces were actually found; the black background simply hides the black circles in the visualization. From the four test pictures, we found that with a threshold of 2500, all black pieces can be detected. Hence, we propose a method that uses only the black pieces reported by the camera; the details are discussed in the Gomoku algorithm section.
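A minimal sketch of the on-camera detection is shown below, in MicroPython. The grayscale format, 160x120 frame size and threshold of 2500 follow the text; the remaining find_circles() tuning parameters are left at their defaults.

```python
# Minimal sketch of circle detection running on the OpenMV itself.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # grayscale increases contrast for black pieces
sensor.set_framesize(sensor.QQVGA)       # 160x120, the size usable in detection mode
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    # threshold=2500 was enough to find all black pieces in our tests
    for c in img.find_circles(threshold=2500):
        print("circle at", c.x(), c.y(), "r =", c.r())
```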

Gomoku algorithm

Gomoku is a board game played with Go pieces. Players take turns placing a black or white piece on an empty intersection. If five pieces form an unbroken chain horizontally, vertically or diagonally, that player wins.

The Gomoku algorithm is much simpler than that of Go. This is because pieces on the board are never removed, and Gomoku is effectively a zero-sum game: if a position is a good move for one player, it is equally good for the other player to occupy in order to block the opponent. Hence, the quality of a move is decided by both the chance of forming a five-piece chain and the chance of preventing the opponent from forming one. Based on this property, we can evaluate an empty space by summing the attack value for both sides; the resulting value is shared by both sides. The advantage of this method is that we do not need to care whose turn it is, and we can use this property to simulate the best move in an AI vs AI mode. The algorithm chooses the highest-weight space to place the piece, as sketched below.
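The sketch below illustrates this scoring idea: each empty intersection is scored by how long the adjacent runs of stones are for both colors, and the highest-scoring intersection is played. The pattern weights are illustrative values, not the exact costs used in our code.

```python
# Minimal sketch of the shared attack/defense evaluation described above.
WEIGHTS = {1: 10, 2: 100, 3: 1000, 4: 100000}   # illustrative value of a run of length n
DIRS = [(1, 0), (0, 1), (1, 1), (1, -1)]        # horizontal, vertical, two diagonals

def run_length(board, x, y, dx, dy, color):
    """Count consecutive stones of `color` adjacent to (x, y) along (dx, dy)."""
    n, cx, cy = 0, x + dx, y + dy
    while 0 <= cx < len(board) and 0 <= cy < len(board) and board[cx][cy] == color:
        n, cx, cy = n + 1, cx + dx, cy + dy
    return n

def score(board, x, y):
    total = 0
    for color in (1, 2):                        # 1 = black, 2 = white
        for dx, dy in DIRS:
            run = (run_length(board, x, y, dx, dy, color) +
                   run_length(board, x, y, -dx, -dy, color))
            total += WEIGHTS.get(min(run, 4), 0)
    return total

def best_move(board):
    empties = [(x, y) for x in range(len(board))
               for y in range(len(board)) if board[x][y] == 0]
    return max(empties, key=lambda p: score(board, *p))
```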

Traditionally, Go board recognition can be divided into several steps:
Picture preparation (converting the picture into a binary matrix)
Searching for intersections and edges of the board
Searching for neighbors and beginning the grid building
Completing the grid

Because the piece coordinates are determined by the board grid, they have good resolution. In this case, we only need to rebuild the grid matrix according to the piece coordinates. At first, we tried a package from GitHub. This package works very well if we take photos with an iPhone: the piece positions are correct and the matrix adapts to the board size. Hence, we initially wanted to follow this approach for the project.

As we mentioned in the Hardware effort section, the poor resolution of the camera cannot detect the grid well, so we gave up building the intersection matrix from detected grid lines. The solution now is to find the four corners of the board and use these coordinates as reference positions. We then split the board length equally to get each intersection's coordinates. We also defined a small area around each intersection to check whether there is a piece: after splitting, we know each intersection's position, and we check a 4x4-pixel window around it. If a detected circle center falls within this range, we can be sure there is a piece there, as sketched below.
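The sketch below illustrates this corner-based mapping under simplifying assumptions: the board is roughly axis-aligned in the image, so two opposite corners are enough to define the grid, and the board size and pixel tolerance are illustrative values.

```python
# Minimal sketch: split the board between two opposite corners into equal
# intervals and snap a detected circle center to the nearest intersection,
# rejecting detections that fall outside a small pixel tolerance.
def pixel_to_board(cx, cy, top_left, bottom_right, size=15, tol=2):
    x0, y0 = top_left
    x1, y1 = bottom_right
    step_x = (x1 - x0) / (size - 1)
    step_y = (y1 - y0) / (size - 1)
    col = round((cx - x0) / step_x)
    row = round((cy - y0) / step_y)
    # tol=2 roughly matches the 4x4-pixel window mentioned above
    if abs(cx - (x0 + col * step_x)) > tol or abs(cy - (y0 + row * step_y)) > tol:
        return None
    if 0 <= col < size and 0 <= row < size:
        return col, row
    return None
```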

The problem that the OpenMV cannot detect white pieces is solved in software. Every time a black or white piece is placed, its coordinates are appended to its own list. The camera sends all black pieces it finds to the PC, and the Gomoku game code compares this list against the known pieces to find the new coordinates.
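A minimal sketch of this comparison is shown below; the function name and the decision to fall back to manual input when the difference is not exactly one piece are our own illustrative choices.

```python
# Minimal sketch: the camera only reports black pieces, so diff the reported
# list against the black pieces already recorded to recover the newest move.
def find_new_move(reported_black, known_black):
    new = [p for p in reported_black if p not in known_black]
    return new[0] if len(new) == 1 else None   # None: rescan or ask for manual input
```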

Other effort

In this project, we did not do a lot of mechanical work. We designed the slide ourselves: it holds the pieces, and whenever the piece at the bottom is removed by the arm, the remaining pieces drop down under gravity. The stand base is triangular for better stability, and the whole slide is made of cardboard. This was actually an extra design for our project: we found there was leftover cardboard after we made the Go board, so we decided to build a slide to make the product more complete. At first, some protrusions blocked the pieces from dropping, so we stuck some tape along the track to reduce friction.

System and performance evaluation

Before we combined the whole system, we validated each part. The Gomoku algorithm is really smart; we cannot defeat it. The game algorithm also takes some risky cases into consideration. For example, if the camera detects a circle that is not at an intersection, it is discarded. All placed black and white pieces are recorded, and each new move is checked against the piece lists to make sure there is no piece already at the destination. So far, we have not found a bug.

Usually, the OpenMV can detect all of the black pieces on the board, but the poor camera quality sometimes makes it miss some pieces. For now, our solution is to loop more times to increase the chance of finding a piece. Since this still cannot guarantee correctness, the backup is that we can enter the new move's coordinates manually in case the circle detection has a problem.

The slide is simple, but it works. It is strong enough to be filled with pieces, and the pieces do not get stuck after the first piece is removed by the robot arm. The robot arm moves smoothly and precisely: it can reach every intersection of the board and put a piece there. Moreover, the pump ensures the arm can hold the pieces; no matter how we move the arm, the piece does not drop. After combining all the parts and finishing the communication settings, the system works well.

Why is this project awesome?

With the developments in machine learning techniques, artificial intelligence is beating human beings in many games, in art, and even in the medical field. One of the most famous examples in the past few years is DeepMind's AlphaGo beating the human world champion Ke Jie with a self-contained reinforcement learning process, without human data, guidance or domain knowledge beyond the game rules. However, during that competition, a person sat on AlphaGo's side to place its pieces on the board, making the game more "human vs computer" than "human vs robot". Therefore, after seeing the video of a KUKA robot playing table tennis with former world champion Timo Boll, the idea of playing Go against a robot arm arose.

Video, github links

GomoBot: https://www.youtube.com/watch?v=bX_L45A_q7U&t=15s

GomokuGame: https://www.youtube.com/watch?v=ZjDhwpI7cRQ

Github: https://github.com/zzh112119/gomoku_game
