Rubik’s Cube algorithms for machines — Part 1 of 2 in a quest to understand the Rubik’s Cube
Making sense of IDA* Search and Thistlethwaite’s algorithm to solve the cube.
Being into speedcubing myself (solving a variety of twisty puzzles as fast as possible), I’ve always been fascinated with the 3x3x3 Rubik’s Cube and how one can reduce it from any of the 43 quintillion scrambled permutations down to its single solved state. Humans have developed various intuitive methods for speedsolving purposes, with some examples being CFOP and Roux. However, these human-solvable methods often yield far-from-optimal solutions, so a variety of machine-solvable algorithms have been developed with the goal of producing optimal or near-optimal solutions. In part 1 of this mini-series, I will share two machine-solvable algorithms and how they work, and discuss possible improvements to and real-world applications for these seemingly niche algorithms in controlling robotic arms. In part 2, I will go through the construction process of a Rubik’s-Cube-solving robot out of Raspberry Pi computers and LEGO Mindstorms NXT.
What does an optimal solution for a 3x3x3 Rubik’s Cube mean, exactly?
If we consider a move to be any turn of any face of a Rubik’s Cube, the minimum number of moves needed to solve any permutation of the Rubik’s Cube is always less than or equal to 20. This was proven in 2010 with the help of a sizeable amount of computing power, and 20 is now known as “God’s Number”. For the purposes of this article, how optimised a solution is will be measured by the difference between its length and God’s Number.
Rubik’s Cube notation
To discuss the algorithms in detail, we must first define a standard notation for the 3x3x3 Rubik’s Cube, from which we can define groups. In speedsolving, the most commonly used notation uses the letters R, L, U, D, F, B to describe the right, left, upper, lower, front, and back faces of the cube. A letter by itself describes a clockwise quarter turn of that face, an apostrophe following the letter (we call it prime) describes an anti-clockwise quarter turn, and a 2 following the letter (for example, U2) describes a half turn of 180°. Holding the cube with the green face towards the solver and the yellow face up, the turns are described as such:
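To make the notation concrete, here is a minimal sketch in Python that translates a move string into plain English. The function name `describe` and the exact phrasing are my own illustration, not part of any standard library:

```python
# Minimal encoding of the standard face-turn notation.
# A bare letter is a clockwise quarter turn, a trailing apostrophe
# (prime) is anti-clockwise, and a trailing 2 is a half turn.
FACES = {
    "R": "right", "L": "left", "U": "upper",
    "D": "lower", "F": "front", "B": "back",
}

def describe(move: str) -> str:
    """Translate a move like "R", "U'" or "F2" into plain English."""
    face = FACES[move[0]]
    if move.endswith("'"):
        return f"anti-clockwise quarter turn of the {face} face"
    if move.endswith("2"):
        return f"half turn of the {face} face"
    return f"clockwise quarter turn of the {face} face"

print(describe("R"))   # clockwise quarter turn of the right face
print(describe("U'"))  # anti-clockwise quarter turn of the upper face
print(describe("F2"))  # half turn of the front face
```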
Common algorithms — Iterative Deepening A* Search
The Rubik’s Cube can be modeled as a graph problem where the nodes are a set of all possible configurations and the edges are between configurations one turn away from each other. We can then use a graph traversal and path searching algorithm to find the shortest path from the node representing the cube’s current state (source node) to the node representing its solved state (goal node). But what graph traversal algorithm should we use?
For the Rubik’s Cube, you might first consider breadth-first search (BFS), since it finds the shortest path from the source node to the goal node: the goal is reached using the minimum number of edges from the source. However, BFS becomes infeasible when you consider the number of nodes in the graph (close to 43 quintillion) and the amount of memory BFS would need to store the frontier of unexplored nodes.
Iterative Deepening A* (IDA*) has an advantage over BFS: it is a depth-first search algorithm and therefore uses far less memory, while still remaining optimal. IDA* also uses a more informative cost function, incorporating a problem-specific heuristic estimate of the cost of travelling from the current state to the goal. For this problem, a lower-bound heuristic function is built from large in-memory lookup tables (pattern databases), which store the exact number of moves required to solve various subgoals of the problem, in this case subsets of cube configurations.
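The core of IDA* is small enough to sketch in full. The version below is generic (not a cube solver): it repeatedly runs a depth-first search with a cost bound, raising the bound to the smallest f-value that exceeded it, so the first goal found is optimal whenever the heuristic never overestimates. The toy domain at the end (walking along the integers) is purely illustrative:

```python
# Generic IDA* sketch. The first solution found is optimal as long
# as h() is admissible (a lower bound on the true remaining cost).
def ida_star(start, is_goal, successors, h):
    bound = h(start)
    path = [start]

    def search(g):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f          # prune: report the f-value that exceeded the bound
        if is_goal(node):
            return True
        minimum = float("inf")
        for step_cost, child in successors(node):
            if child in path:  # avoid trivial cycles
                continue
            path.append(child)
            result = search(g + step_cost)
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0)
        if result is True:
            return list(path)  # optimal path found
        if result == float("inf"):
            return None        # no solution exists
        bound = result         # deepen to the next cost bound

# Toy domain: step along the integers from 0 to 5 with +-1 moves,
# using the exact remaining distance as an admissible heuristic.
solution = ida_star(
    start=0,
    is_goal=lambda n: n == 5,
    successors=lambda n: [(1, n - 1), (1, n + 1)],
    h=lambda n: abs(5 - n),
)
print(solution)  # [0, 1, 2, 3, 4, 5]
```

For the cube, `successors` would apply the 18 face turns and `h` would come from the pattern-database lookup tables described above.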
Common algorithms — Thistlethwaite’s algorithm
Thistlethwaite’s algorithm was devised by Morwen Thistlethwaite, a knot theorist and professor of mathematics. The cube in its scrambled state can be represented as an element of the group containing all cube states solvable by L, R, F, B, U, D moves. We call this group 0, or G0. We then keep reducing the cube state into groups solvable by ever smaller subsets of the available moves, until the subset of allowed moves is empty (the trivial group, G4), meaning that the cube is solved. From G0, we turn the cube into a position solvable using only the moves <L, R, F, B, U2, D2>; this is G1. The group is then reduced twice more into proper subgroups until the cube can be solved with half turns only, in G3 = <L2, R2, F2, B2, U2, D2>. Solving from this position is trivial.
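The nesting of the four move sets is easy to write down explicitly. Here they are as Python sets (the group names follow the article’s G0 through G3; G4, the solved state, allows no moves at all):

```python
# The move sets generating each of Thistlethwaite's groups.
# Each stage's set is a proper subset of the previous one.
G0 = {"L", "L'", "L2", "R", "R'", "R2", "F", "F'", "F2",
      "B", "B'", "B2", "U", "U'", "U2", "D", "D'", "D2"}
G1 = {"L", "L'", "L2", "R", "R'", "R2", "F", "F'", "F2",
      "B", "B'", "B2", "U2", "D2"}
G2 = {"L", "L'", "L2", "R", "R'", "R2", "F2", "B2", "U2", "D2"}
G3 = {"L2", "R2", "F2", "B2", "U2", "D2"}

# The chain of proper subgroups: 18 -> 14 -> 10 -> 6 allowed moves.
assert G3 < G2 < G1 < G0
print(len(G0), len(G1), len(G2), len(G3))  # 18 14 10 6
```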
To reduce the cube state from one group to the next, a lookup table is used at each stage, giving a move sequence for each coset in the quotient space between the group and its subgroup. The table below shows the number of possible positions in each group and the order of each coset space.
However, this algorithm is non-optimal, as there is no guarantee that an element takes the shortest route from a group to its proper subgroup. As such, Thistlethwaite’s algorithm yields a mean solution length of about 52 moves.
Consider the number of groups in Thistlethwaite’s algorithm. We can reduce this number by going directly from G0 to G2, then from G2 to G4, skipping the intermediate groups G1 and G3. This can be achieved using the aforementioned IDA* search algorithm: we model both the reduction from G0 to G2 and the reduction from G2 to G4 as graph problems, each with its own set of heuristics. This both reduces the number of groups considered in Thistlethwaite’s algorithm and keeps the heuristic far simpler than the cost function a complete IDA* search would need, achieving a more efficient version of Thistlethwaite’s algorithm while still being far easier to implement than a complete IDA* search.
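The idea of solving in stages, rather than in one global search, can be illustrated with a deliberately simple analogy of my own (not a cube solver): stage 1 “reduces” an integer to the subgroup of multiples of 10, and stage 2 solves within that subgroup using only steps of 10. Each stage finds its own shortest path, mirroring how each group reduction is searched independently:

```python
# Toy two-stage solve: each stage is locally optimal, but the
# concatenation of the two stages need not be globally optimal,
# just as with the staged group reductions described above.

def reduce_to_multiple_of_10(n):
    """Stage 1: shortest +-1 path from n to the nearest multiple of 10."""
    moves = []
    while n % 10 != 0:
        step = 1 if n % 10 > 5 else -1
        n += step
        moves.append(step)
    return n, moves

def solve_within_multiples(n):
    """Stage 2: shortest +-10 path from a multiple of 10 down to 0."""
    moves = []
    while n != 0:
        step = -10 if n > 0 else 10
        n += step
        moves.append(step)
    return n, moves

n, stage1 = reduce_to_multiple_of_10(37)   # 37 -> 40 in 3 moves
n, stage2 = solve_within_multiples(n)      # 40 -> 0 in 4 moves
total = len(stage1) + len(stage2)
print(total)  # 7
```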
There is a caveat to this improvement, though: such an algorithm only yields local optima. In other words, the solution is only optimal within each group reduction, and may not be completely optimal from G0 to G4. Even so, there are only two stages whose shortest paths need to be found, so the overall solution is probabilistically near-optimal.
Real-world applications — Robotic arms
My suggested improvement uses both graph and group theory to model a problem that can be described in both fields, and such problems are not limited to twisty puzzles. A real-world example I illustrate here is controlling a robotic arm. To visualise higher-dimensional problems, we can first consider an arm with 2 degrees of freedom (DOF).
We can construct a 2-dimensional occupancy grid representing the states achievable by the tip, with the first dimension being θ1 and the second θ2, both discretised into steps of 1 degree. Looking at the arm, we can see that the joint limit prevents θ2 from reaching 180°, so states with θ2 ≥ 180° cannot be achieved for any value of θ1; these are represented as shaded areas, or obstacles, on the grid. When the end effector moves from one state to another, we can describe it as a problem where the tip needs to find the optimal path from its current state to the goal. Using the occupancy grid, we can specify the start and goal angles of each joint and find an optimal path between the two points, which then describes the optimal path the end effector should take.
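A minimal sketch of path planning on this occupancy grid is shown below. I use plain BFS rather than IDA* to keep the example short (on a grid this small, memory is not a concern); a real planner for a higher-DOF arm would swap in IDA* or A* with a distance heuristic. The grid size and the θ2 ≥ 180° joint limit follow the description above:

```python
from collections import deque

def plan(start, goal, blocked, size=360):
    """BFS over (theta1, theta2) grid cells, 1-degree steps with
    wrap-around; returns the shortest path length in steps, or
    None if the goal is unreachable."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (t1, t2), dist = queue.popleft()
        if (t1, t2) == goal:
            return dist
        for dt1, dt2 in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = ((t1 + dt1) % size, (t2 + dt2) % size)
            if nxt not in seen and not blocked(nxt):
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Obstacle from the joint limit: theta2 >= 180 degrees is unreachable.
blocked = lambda cell: cell[1] >= 180
print(plan((0, 0), (90, 90), blocked))  # 180
```

Each grid cell is a joint configuration, and each edge is a 1-degree motion of a single joint, so the returned path length is the minimum total joint motion in degrees.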
This can be modeled as a problem in group theory, where G0 is defined as all states reachable by <θ1, θ2>, reduced twice into the proper subgroup G1 = <θ2> and then the trivial group G2. We can also use graph theory to model this problem, as the grid can be seen as a graph with adjacent cells connected by edges. Using the insights from the improved algorithm, we can solve this problem with a depth-first search algorithm like IDA* that iteratively reduces the current state’s group into the next proper subgroup instead of performing a complete search, which reduces the heuristic’s complexity while still finding a close-to-optimal solution.
This can be extrapolated to higher-dimensional problems, as most robotic arms have 6 DOF. In higher dimensions, we merely have to increase the number of group reductions in the solution, since the depth-first search at each step reduces the number of dimensions away from the solution by only 1. The resulting heuristics are far less complex than one constructed for a search over all 6 dimensions at once, making the approach easier to implement in large-scale industrial applications.