Tic-tac-toe with MyCobot — Part 1

Karteek Menda
5 min read · Dec 25, 2023

Hello Aliens

In this series of blog posts, we'll embark on a journey to play tic-tac-toe end to end by combining the capabilities of MyCobot with the Minimax algorithm.

Our first step will be a thorough exploration of the game mechanics of tic-tac-toe. Understanding the rules of the game gives us a solid foundation before we bring in MyCobot and the Minimax algorithm.

Following our grasp of the game mechanics, we will then delve into an in-depth examination of the versatile capabilities of MyCobot. By understanding the range of functionalities that MyCobot brings to the table, we can identify how to leverage its capabilities to enhance the overall gaming experience.

Finally, we will unravel the complexities of integrating the Minimax algorithm into the mix. This strategic algorithm will be broken down into its essential elements, providing readers with a comprehensive understanding of how it contributes to the efficient completion of the task at hand.

Join me as I navigate through each component, step by step, to create a holistic and insightful guide on achieving an optimal and automated tic-tac-toe experience with MyCobot and the Minimax algorithm.

Tic-Tac-Toe

It’s a classic game where two players take turns marking spaces in a 3x3 grid. The player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal row wins the game. If the grid is filled and no player has three in a row, the game is a draw.

Each player marks a square in turn, usually with the letter "X" for one player and the letter "O" for the other. The goal is to place three of your marks in a row (vertically, horizontally, or diagonally) before your opponent does.
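To make these mechanics concrete, here is a minimal Python sketch of a 3x3 board with a winner check. The board layout and function names are my own illustration and not code from the MyCobot setup.

```python
# Minimal tic-tac-toe board sketch (illustrative only).
# The board is a list of 9 cells, indexed 0-8, holding "X", "O", or " ".

WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def is_draw(board):
    """The game is a draw when the grid is full and nobody has won."""
    return winner(board) is None and " " not in board

if __name__ == "__main__":
    board = ["X", "X", "X",
             "O", "O", " ",
             " ", " ", " "]
    print(winner(board))   # -> X
    print(is_draw(board))  # -> False
```

A representation like this is also what the Minimax algorithm will operate on later in the series.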

MyCobot

The myCobot 280 series, created by Elephant Robotics, is a line of 6-degree-of-freedom (6-DOF) collaborative robotic arms designed primarily for research, education, science and technology applications, and commercial exhibitions. Tens of thousands of users worldwide have embraced the convenience and efficiency of myCobot arms, using them to learn and implement robotics across various fields. The arm has a working radius of 280 mm and can handle a payload of 250 g.

Credits: Elephant Robotics
  1. Collaborative Robot (Cobot) Design: Designed as a collaborative robot, myCobot is intended to work alongside humans, fostering a safe and efficient environment for research, education, and other applications.
  2. Research and Education Focus: Tailored specifically for research and educational purposes, myCobot serves as an ideal platform for learning and experimentation in robotics.
  3. Science and Technology Applications: With its advanced technology, myCobot finds utility in a wide range of scientific and technological applications, contributing to various industries.
  4. Commercial Exhibitions: The robot’s capabilities extend to commercial exhibitions, where it can showcase its functionalities and applications to a broader audience.
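To give a feel for how the arm is driven from Python, here is a minimal sketch using the pymycobot library. The serial port, baud rate, and joint angles are placeholder values for illustration, not the exact configuration used in this project.

```python
# Minimal sketch of commanding a myCobot 280 from Python (assumed setup:
# pymycobot installed, arm connected over USB serial at /dev/ttyUSB0).
import time
from pymycobot.mycobot import MyCobot

mc = MyCobot("/dev/ttyUSB0", 115200)  # port and baud rate are placeholders

# Read the current joint angles (degrees) of the six joints.
print(mc.get_angles())

# Move all six joints to a "home" pose at 50% speed.
mc.send_angles([0, 0, 0, 0, 0, 0], 50)
time.sleep(3)

# Move to an example pose; the angles here are arbitrary illustration values.
mc.send_angles([0, -30, 60, -30, 0, 0], 50)
```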

Forward Kinematics of myCobot

In 2023, the introduction of the AI Kit (Artificial Intelligence Kit) marked a significant advancement in automation technology. This kit integrates visual, positioning, grabbing, and automatic sorting modules into a complete robotic arm application. Designed to emulate an industrial setting, the AI Kit 2023 reduces manual labor by implementing intelligent sorting and introducing preliminary industrial automation processes.

At the heart of this simulated industrial scenario are five cutting-edge visual algorithms and motion control algorithms for the robotic arm. These algorithms work in tandem, enabling rapid object recognition and precise sorting capabilities. The AI Kit 2023 represents a leap forward in the pursuit of efficient and smart automation within industrial environments.

The 5 vision algorithms are:

● Shape recognition

● Feature point recognition

● ArUco code recognition

● Color recognition

● YOLOv5 recognition

The initial four algorithms of the AI Kit 2023 focus on image processing and machine vision, leveraging OpenCV algorithms. These algorithms encompass color space recognition, feature point recognition, ArUco code recognition, and shape recognition. Each of these components contributes to the kit’s robust visual capabilities, enabling rapid and precise identification of objects within the industrial environment.

Additionally, the AI Kit 2023 incorporates the YOLOv5 (You Only Look Once version 5) algorithm, recognized as a leading object detection method. YOLOv5 employs Convolutional Neural Networks (CNNs) to predict objects in images swiftly, ensuring rapid detection without compromising on accuracy. As the latest iteration of the YOLO series, version 5 further enhances the AI Kit’s object recognition capabilities, making it a cutting-edge solution for efficient and reliable industrial automation.

Out of the vision algorithms above, I tried two (color recognition and YOLOv5) to complete this task. Have a look at the videos for each of them.

Color recognition:

In this scenario, the AI Kit 2023 operates in an eye-to-hand mode, utilizing a camera and harnessing the power of Python and OpenCV. The system employs OpenCV for color positioning, where it identifies color blocks based on predefined criteria. Once identified, the system frames these color blocks and calculates their relative positions using relevant points within the spatial coordinates of the robotic arm.

The next step involves establishing a set of coordinated actions for the robotic arm, tailored to the specific color and spatial coordinates of the identified objects. The robotic arm then executes these actions, placing the objects in designated areas based on their identified colors. This approach ensures a seamless integration of visual recognition, spatial coordination, and precise manipulation by the robotic arm, facilitating efficient sorting and automation in the industrial setting.
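As a rough illustration of the color-positioning step, here is a minimal OpenCV sketch that thresholds a frame in HSV space, finds the largest red blob, and reports its pixel centroid. The HSV bounds and camera index are assumptions, and the mapping from pixel centroid to arm coordinates (which the AI Kit handles via calibration points) is omitted.

```python
# Minimal color-recognition sketch (illustrative): find a red block in a
# camera frame and report its pixel centroid. HSV ranges are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)          # camera index is an assumption
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the camera")

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so combine two ranges.
lower1, upper1 = np.array([0, 120, 70]), np.array([10, 255, 255])
lower2, upper2 = np.array([170, 120, 70]), np.array([180, 255, 255])
mask = cv2.inRange(hsv, lower1, upper1) | cv2.inRange(hsv, lower2, upper2)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    block = max(contours, key=cv2.contourArea)  # take the largest blob
    x, y, w, h = cv2.boundingRect(block)
    cx, cy = x + w // 2, y + h // 2
    print(f"Red block centred at pixel ({cx}, {cy})")
    # In the real setup, (cx, cy) would be converted into the arm's spatial
    # coordinates using calibration points before sending a move command.
```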

YOLO Image Recognition:

In this application, the AI Kit 2023 employs an eye-to-hand mode, utilizing a camera for image capture. The system utilizes OpenCV to load YOLOv5 model data, facilitating the recognition of image blocks within the captured images. Once identified, the system determines the position of these image blocks within the recognition area, utilizing relevant points for spatial coordinate calculations relative to the robotic arm.

Following recognition and spatial analysis, a predefined set of actions is established for the robotic arm. These actions are tailored to the specific characteristics of the recognized objects. The robotic arm then executes these actions, placing the identified objects into designated areas based on their unique features. This integrated approach seamlessly combines image recognition, spatial coordination, and precise manipulation, showcasing the AI Kit’s ability to automate object recognition and sorting tasks in an industrial context.
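For comparison, here is a minimal sketch of running YOLOv5 on a single frame via PyTorch Hub. The model variant and the way detections are printed are my own choices for illustration; the AI Kit ships its own model data and loads it through OpenCV.

```python
# Minimal YOLOv5 detection sketch (illustrative): load a pretrained model
# via PyTorch Hub and print the detections for one camera frame.
import cv2
import torch

# yolov5s is the small pretrained variant; this is a stand-in for the
# model data that comes with the AI Kit.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

cap = cv2.VideoCapture(0)          # camera index is an assumption
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the camera")

# YOLOv5 expects RGB images; OpenCV captures BGR.
results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

# Each detection row is [x1, y1, x2, y2, confidence, class_id].
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    print(f"{model.names[int(cls)]}: conf={conf:.2f}, "
          f"centre=({(x1 + x2) / 2:.0f}, {(y1 + y2) / 2:.0f})")
```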

Finally, I opted to go with the color recognition algorithm; the setup for this task will be explained in Part 2.

Thanks for reading the article! If you liked it, please give it a 👏. If you want to connect with me on LinkedIn, please click here.

I plan to share additional blog posts covering topics such as robotics, drive-by-wire vehicles, machine learning, deep learning, and more.

Stay tuned.

This is Karteek Menda.

Signing Off

