Hill Climbing: we can do many random restarts, hill climb from each start until we reach a local peak, and then take the maximum over all runs.
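As a minimal sketch of this idea, assuming a one-dimensional objective `f` and fixed-size steps (the function names and parameters here are illustrative, not from any particular library):

```python
import random

def hill_climb(f, x, step=0.1, max_iters=1000):
    """Greedy ascent: move to a neighbor only while it improves f."""
    for _ in range(max_iters):
        # Look one step left and one step right.
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):
            break  # neither neighbor improves: we are at a local peak
        x = best
    return x

def random_restart_hill_climb(f, lo, hi, restarts=20):
    """Run hill climbing from several random starts; keep the best peak."""
    peaks = [hill_climb(f, random.uniform(lo, hi)) for _ in range(restarts)]
    return max(peaks, key=f)
```

Each restart can only find the peak nearest its starting point, so the more restarts, the better the chance that one of them lands in the basin of the global maximum.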

Tabu Search: it’s “tabu” to go somewhere you have already been. (You could be more efficient by keeping track of the places on the graph you have visited before, and restarting the search whenever you see you have arrived at the same place again.)

Step size: we can start with a large step size and decrease it over time, which makes it more likely that we reach the global maximum.

If the step size is too big, you may skip over the hill you intended to climb.
If we start from the left, above the shoulder, we may jump to the other side of the hill. Now the gradient points back the other way, so we step back in the direction we came from and end up close to where we started. The algorithm can oscillate and never converge on the answer (an infinite loop).
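A sketch of the decaying-step fix, assuming we have the gradient of the objective (the decay schedule and constants are illustrative): early, large steps cross the plateau quickly, and as the step shrinks the oscillation around the peak dies out instead of repeating forever.

```python
def gradient_ascent(grad, x, step0=1.0, decay=0.9, iters=100):
    """Gradient ascent with a step size that shrinks each iteration,
    so early steps explore widely and later steps settle on the peak."""
    step = step0
    for _ in range(iters):
        x = x + step * grad(x)
        step *= decay  # decrease the step size over time
    return x
```

With a fixed large step this update can bounce between the two sides of the peak indefinitely; multiplying the step by a factor below 1 each iteration guarantees the bounces get smaller and smaller.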

Simulated Annealing

In the beginning, T is high, so ΔE/T ≈ 0 and the acceptance probability e^(ΔE/T) ≈ e⁰ = 1: we take almost all random positions offered to us, no matter how bad the new position is. As T cools, bad moves are accepted less and less often.
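A minimal sketch of this acceptance rule for a 1-D maximization problem (the temperature `T0`, cooling factor, and proposal step are illustrative choices, not canonical values):

```python
import math
import random

def simulated_anneal(f, x, T0=10.0, cooling=0.95, iters=500, step=1.0):
    """Simulated annealing (maximization): always accept uphill moves;
    accept a downhill move with probability exp(delta / T), which is
    close to 1 when T is large and close to 0 when T is small."""
    T = T0
    best = x
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        delta = f(candidate) - f(x)  # positive means the move is uphill
        if delta > 0 or random.random() < math.exp(delta / T):
            x = candidate
        if f(x) > f(best):
            best = x
        T *= cooling  # cool the temperature each iteration
    return best
```

Early on, high T lets the search wander out of shallow local peaks; by the end, low T makes it behave like plain hill climbing near the best region found.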