# How to make a chess engine

The last blog post was an introduction to machine learning, and we came across various applications of it. This one is about the basic concepts of machine learning: we'll go through the fundamentals of ML and put them to work by building a chess engine.

Before starting with any of these, let me mention the person who inspired the theme of this post. He is truly a winning machine when it comes to the game of chess, and he has set a number of records that seem impossible to most of us. His name is Magnus Carlsen (go see his incredible stats over here).

Let's say you want to be a world champion chess player like Magnus. You will have to start with the basics of chess and keep learning new things until you become the world champion. But we know it won't be that straightforward, because the game of chess has an enormous number of possibilities: memorizing a list of moves won't help, and playing a couple of games against your roommate won't win you a world championship either. You will have to put in effort over a prolonged period of time.

The good news is, your computer can learn to play chess and get really good at it, and in a considerably short time at that. Computers have large non-volatile storage and can carry out arithmetic operations swiftly. So why not build an engine, using the concepts of machine learning, that plays chess and gets better after every game?

We'll begin by getting some basics clear. Machine learning is a field that tackles problems which are logically or computationally impractical to solve with ordinary rule-based programming. Any candidate problem can therefore be identified with two simple checks. First, check whether you can come up with specific programmable rules to solve it. If not, check whether it is small enough to be solved by human effort. If neither, it is a machine learning problem.

After deciding that a certain problem is worth solving with machine learning, we move on to solve it.

### How to solve a machine learning problem?

#### Step I) Define the problem

The parameters used for effectively defining all aspects of a learning problem are as follows:

Experience: The learning experience used to carry out the intended task.

Performance: A measure of how well the algorithm performs the task.

Therefore, the chess engine problem can be described as:

Experience: Number of games played against itself

Performance: % of games won against opponents by playing fair

Defining a problem completely is an important step in solving it, since the later stages refer back to this definition to judge the accuracy of the results.

#### Step II) Decide its target function

The target function decides what type of knowledge the algorithm will learn. The more appropriate the target function, the more correctly the learning will occur and the more accurate the results will be.

In the case of a chess engine, a well-trained engine would choose the best move among all the legal moves. The easiest way of creating such a function that I can think of is to assign a numerical value to every board state (the state of the board reached after playing one of the legal moves) and choose the move leading to the state with the highest value.

We will denote the target function as V. Therefore, V(b) will be the target value for an arbitrary board state b. Let us define the target value V(b) for different board states.

a) V(b) = 1.00, given that b is a concluding board state that is won

b) V(b) = -1.00, given that b is a concluding board state that is lost

c) V(b) = 0, given that b is a concluding board state that is drawn

d) V(b) = V(b’), given that b is not a concluding board state and b’ is the best concluding board state that can be achieved starting from b and playing optimally until the end of the game.
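Cases a) through c) translate directly into code; case d) is what the rest of the design has to approximate. A minimal sketch of the concluding cases, assuming the game outcome has already been determined (the outcome labels here are hypothetical, not taken from any chess library):

```python
def terminal_target_value(outcome):
    """Target value V(b) for a concluding board state b.

    outcome: one of "won", "lost", "drawn" (illustrative labels)."""
    if outcome == "won":
        return 1.00   # case a)
    elif outcome == "lost":
        return -1.00  # case b)
    elif outcome == "drawn":
        return 0.0    # case c)
    raise ValueError("not a concluding board state")
```

Only a real game-over check (checkmate, stalemate, and so on) can supply the `outcome` argument; that part is left to the board representation.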

Evaluating the target value in case d) is a challenge for our chess engine, since this case requires searching ahead through optimal play until the end of the game. In fact, it is practically impossible for us to compute such a target function directly. Therefore, we'll have our learning algorithm acquire only an approximation to the target function. We will use the symbol V' to refer to the function actually learned by our program, to distinguish it from the ideal target function V.

#### Step III) Approximate the target function

Before approximating any target function, we should take these important points into consideration:

• If the degree of approximation is too high, many cases will be ignored by the algorithm, resulting in a less intelligent program.
• If the degree of approximation is too low, the algorithm will go through every case, many of them redundant, resulting in high memory and time consumption.

Therefore, our approximation should sit somewhere between these two extremes, giving a respectably intelligent program that still operates within time and memory constraints.

To keep the approximation simple, let us represent V' as a linear combination of different board features. Consider the following board features:

x₁: No. of white pieces on board

x₂: No. of black pieces on board

x₃: No. of squares attacked by white pieces

x₄: No. of squares attacked by black pieces

x₅: Has white castled?

x₆: Has black castled?
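These six features can be packed into a feature vector for a given board state. A small sketch, assuming the raw counts have already been computed by whatever board representation you use (the helper name and signature are made up for illustration):

```python
def board_features(white_pieces, black_pieces,
                   white_attacks, black_attacks,
                   white_castled, black_castled):
    """Pack the six board features x1..x6 into a list.

    The caller is assumed to have computed the counts already
    (e.g. via a chess library); this helper only arranges them."""
    return [white_pieces, black_pieces,     # x1, x2
            white_attacks, black_attacks,   # x3, x4
            int(white_castled),             # x5 as 0/1
            int(black_castled)]             # x6 as 0/1

# Illustrative values only; the attack counts are placeholders.
features = board_features(16, 16, 22, 22, False, False)
```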

Thus, our approximated target function will look like,

V’(b) = w₁x₁ + w₂x₂ + w₃x₃ + w₄x₄ + w₅x₅ + w₆x₆

Where w₁ through w₆ are weights to be adjusted by the learning algorithm. These weights determine the importance of their corresponding board features.

The weights are adjusted during every iteration, reducing the error in the system and thereby making the engine more intelligent. For adjusting weights there are several algorithms available out there. The one which we are going to use is known as least mean squares or LMS weight update rule. It is given as,

wᵢ ← wᵢ + η * (expected_output(b) − actual_output(b)) * xᵢ

Here, expected_output(b) is the output which should be produced by the algorithm for given board state b, actual_output(b) is the output which is actually produced by the algorithm and η is a learning constant.

The higher the value of η, the faster the weights will change. Therefore, for effective learning, the value of η shouldn't be too high (which would update the weights with less precision) or too low (which would cause the weights to take a long time to converge).
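To make the effect of η concrete, here is a single LMS update applied with two different learning constants. The weights, features and helper name are made up purely for illustration:

```python
def lms_step(weights, features, expected, eta):
    """One LMS update: w_i <- w_i + eta * (expected - actual) * x_i."""
    actual = sum(w * x for w, x in zip(weights, features))
    error = expected - actual
    return [w + eta * error * x for w, x in zip(weights, features)]

w = [0.5, -0.5]
x = [1.0, 2.0]
# actual output is 0.5*1.0 + (-0.5)*2.0 = -0.5, so the error is 1.5
small = lms_step(w, x, expected=1.0, eta=0.01)  # nudges the weights slightly
large = lms_step(w, x, expected=1.0, eta=0.5)   # jumps far past the target
```

With η = 0.01 the new weights are [0.515, -0.47], a small step toward the target; with η = 0.5 they become [1.25, 1.0], whose output (3.25) overshoots the expected value of 1.0.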

Let us write down code for all the things we’ve learnt so far.

```python
def get_target_value(weights, board_feat):
    # V'(b): linear combination of the board features and their weights
    target_value = 0
    for index, weight in enumerate(weights):
        target_value = target_value + (weight * board_feat[index])
    return target_value
```
```python
def update_weights(weights, eta, exp_op, act_op, board_feat):
    # LMS rule: w_i <- w_i + eta * (expected - actual) * x_i
    # (assign back into the list, otherwise the update is lost)
    for index, weight in enumerate(weights):
        weights[index] = weight + eta * (exp_op - act_op) * board_feat[index]
    return weights
```
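Putting these pieces together, a toy training loop might look like the sketch below. The `train` helper and its two feature/value examples are invented for illustration; a real engine would instead generate its training values from games played against itself:

```python
def train(examples, eta=0.1, iterations=100):
    """Fit weights with the LMS rule.

    examples: list of (features, expected_value) pairs, all the
    same length. Returns the learned weight list."""
    weights = [0.0] * len(examples[0][0])
    for _ in range(iterations):
        for features, expected in examples:
            actual = sum(w * x for w, x in zip(weights, features))
            weights = [w + eta * (expected - actual) * x
                       for w, x in zip(weights, features)]
    return weights

# Two made-up two-feature examples: the weights that fit them
# exactly are w1 = 1 and w2 = -1, and LMS converges toward them.
examples = [([2.0, 1.0], 1.0), ([1.0, 2.0], -1.0)]
weights = train(examples)
```

Repeating the update over many iterations drives the error down, which is exactly the loop the engine runs after every game.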

This is it. Attaching this engine to a GUI that feeds it the required board features at every move will get the engine into action. You can add more board features, change the number of training iterations and adjust the learning constant to make a chess engine that can consider more possibilities during a game.

(Complete code of this chess engine along with an integrated GUI can be found here)

The entire design of any learning problem can be summed up in a simple form, called the final design of a learning problem. It gives information about the problem being solved, how it is being solved, the training examples being used and the examples used to test the algorithm.

This GIF shows the final design of the chess engine problem. The performance system, critic, generalizer and experiment generator form a closed loop; iterating through it, the learning problem gets solved with increasing precision.

That’s it for this blog post. If you liked it, you can recommend it to others and follow me to get notified whenever I post anything.

Next post is going to be about neural networks in which we’ll see what exactly these amazing things are and how they work. I’ll also teach you to draw pictures worth millions of dollars despite being terrible at drawing. So don’t forget to check out my next post once it is out.
