# Dynamic Programming - Examples: Computer Algorithms - Checkerboard

## Checkerboard

Consider a checkerboard with n × n squares and a cost-function c(i, j) which returns a cost associated with square i, j (i being the row, j being the column). For instance (on a 5 × 5 checkerboard),

```
5 | 6  7  4  7  8
4 | 7  6  1  1  4
3 | 3  5  7  8  2
2 | –  6  7  0  –
1 | –  – *5* –  –
    1  2  3  4  5
```

Thus c(1, 3) = 5
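As an illustration, the cost table above might be encoded as a small Python helper. This is only a sketch: the names `COSTS` and `c` are our own, and `None` stands for the squares marked "–":

```python
# Cost table for the 5 x 5 example, listed from rank 5 down to rank 1.
# None marks the squares shown as "-" in the figure.
COSTS = [
    [6, 7, 4, 7, 8],              # rank 5
    [7, 6, 1, 1, 4],              # rank 4
    [3, 5, 7, 8, 2],              # rank 3
    [None, 6, 7, 0, None],        # rank 2
    [None, None, 5, None, None],  # rank 1
]

def c(i, j):
    """Cost of square (i, j): i is the rank (1 = bottom), j the 1-based column."""
    return COSTS[len(COSTS) - i][j - 1]

print(c(1, 3))  # prints 5
```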

Let us say you had a checker that could start at any square on the first rank (i.e., row) and you wanted to know the shortest path (the path for which the sum of the costs of the visited squares is at a minimum) to the last rank, assuming the checker could move only diagonally left forward, diagonally right forward, or straight forward. That is, a checker on (1,3) can move to (2,2), (2,3) or (2,4).

```
5 |
4 |
3 |
2 |    x  x  x
1 |       o
    1  2  3  4  5
```

This problem exhibits optimal substructure. That is, the solution to the entire problem relies on solutions to subproblems. Let us define a function q(i, j) as

q(i, j) = the minimum cost to reach square (i, j)

If we can find the values of this function for all the squares at rank n, we pick the minimum and follow that path backwards to get the shortest path.

Note that q(i, j) is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it) plus c(i, j). For instance:

```
5 |
4 |       A
3 |    B  C  D
```

Now, let us define q(i, j) in somewhat more general terms:

```
          | infinity                                            if j < 1 or j > n
q(i, j) = | c(i, j)                                             if i = 1
          | min(q(i-1, j-1), q(i-1, j), q(i-1, j+1)) + c(i, j)  otherwise
```

The first line of this equation handles squares off the edges of the board by giving them infinite cost, which keeps the recursive property simple (we need only the one recurrence, with no special cases at the edges). The second line, the base case, says what happens in the first rank. The third line, the recursion, is the important part; it corresponds to the A, B, C, D example. From this definition we can write straightforward recursive code for q(i, j). In the following pseudocode, n is the size of the board, `c(i, j)` is the cost function, and `min` returns the minimum of a number of values:

```
function minCost(i, j)
    if j < 1 or j > n
        return infinity
    else if i = 1
        return c(i, j)
    else
        return min( minCost(i-1, j-1), minCost(i-1, j), minCost(i-1, j+1) ) + c(i, j)
```
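The pseudocode above translates almost line for line into Python. The following is a minimal sketch; the function name and the toy 3 × 3 cost table are our own inventions for illustration, not part of the article:

```python
import math

def min_cost(c, n, i, j):
    """Naive recursion: cost of the cheapest path from the first rank to (i, j)."""
    if j < 1 or j > n:
        return math.inf          # off the board: infinite cost
    if i == 1:
        return c(i, j)           # base case: first rank
    return min(min_cost(c, n, i - 1, j - 1),
               min_cost(c, n, i - 1, j),
               min_cost(c, n, i - 1, j + 1)) + c(i, j)

# A made-up 3 x 3 board, keyed by (rank, column):
grid = {(1, 1): 2, (1, 2): 1, (1, 3): 9,
        (2, 1): 4, (2, 2): 8, (2, 3): 1,
        (3, 1): 3, (3, 2): 2, (3, 3): 5}
c = lambda i, j: grid[(i, j)]
best = min(min_cost(c, 3, 3, j) for j in range(1, 4))  # cheapest over the last rank
print(best)  # prints 4 (path (1,2) -> (2,3) -> (3,2): 1 + 1 + 2)
```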

Note that this function computes only the path cost, not the actual path; we will get to the path shortly. Like the Fibonacci numbers example, this is horribly slow, since it spends mountains of time recomputing the same shortest paths over and over. However, we can compute it much faster in a bottom-up fashion if we store path costs in a two-dimensional array `q[i, j]` rather than using a function. This avoids recomputation: before computing the cost of a path, we check the array to see whether that cost is already there.
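One way to get the "check before recomputing" behaviour described above is memoization; here is a sketch using Python's `functools.lru_cache` as the cache in place of an explicit array (the helper name and the toy 3 × 3 costs are our own):

```python
from functools import lru_cache

def make_min_cost(c, n):
    """Return a memoized q(i, j): each square's cost is computed at most once."""
    @lru_cache(maxsize=None)
    def q(i, j):
        if j < 1 or j > n:
            return float("inf")
        if i == 1:
            return c(i, j)
        return min(q(i - 1, j - 1), q(i - 1, j), q(i - 1, j + 1)) + c(i, j)
    return q

# A made-up 3 x 3 board, keyed by (rank, column):
grid = {(1, 1): 2, (1, 2): 1, (1, 3): 9,
        (2, 1): 4, (2, 2): 8, (2, 3): 1,
        (3, 1): 3, (3, 2): 2, (3, 3): 5}
q = make_min_cost(lambda i, j: grid[(i, j)], 3)
print(min(q(3, j) for j in range(1, 4)))  # prints 4
```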

We also need to know the actual shortest path. To do this, we use another two-dimensional array `p`, a predecessor array. This array implicitly stores the path to any square s by storing the previous square on the shortest path to s, i.e. the predecessor. To reconstruct the path, we look up the predecessor of s, then the predecessor of that square, then the predecessor of that square, and so on, until we reach the starting square. Consider the following code:

```
function computeShortestPathArrays()
    for x from 1 to n
        q[1, x] := c(1, x)
    for y from 1 to n
        q[y, 0]     := infinity
        q[y, n + 1] := infinity
    for y from 2 to n
        for x from 1 to n
            m := min(q[y-1, x-1], q[y-1, x], q[y-1, x+1])
            q[y, x] := m + c(y, x)
            if m = q[y-1, x-1]
                p[y, x] := -1
            else if m = q[y-1, x]
                p[y, x] :=  0
            else
                p[y, x] :=  1
```

Now the rest is a simple matter of finding the minimum and printing it.

```
function computeShortestPath()
    computeShortestPathArrays()
    minIndex := 1
    min := q[n, 1]
    for i from 2 to n
        if q[n, i] < min
            minIndex := i
            min := q[n, i]
    printPath(n, minIndex)

function printPath(y, x)
    print(x)
    print("<-")
    if y = 2
        print(x + p[y, x])
    else
        printPath(y-1, x + p[y, x])
```
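Putting the pieces together, the array-building and path-reconstruction steps above can be sketched in Python. This version returns the path as a list of (rank, column) pairs instead of printing it; the function name and the toy 3 × 3 cost table are our own, not the article's:

```python
def shortest_path(c, n):
    """Bottom-up DP over the board; returns (cost, path) for the cheapest path."""
    inf = float("inf")
    # q[y][x]: min cost to reach (y, x); p[y][x]: column offset of the predecessor.
    # Columns 0 and n+1 are padding fixed at infinity, as in the pseudocode.
    q = [[inf] * (n + 2) for _ in range(n + 1)]
    p = [[0] * (n + 2) for _ in range(n + 1)]
    for x in range(1, n + 1):
        q[1][x] = c(1, x)
    for y in range(2, n + 1):
        for x in range(1, n + 1):
            m = min(q[y - 1][x - 1], q[y - 1][x], q[y - 1][x + 1])
            q[y][x] = m + c(y, x)
            if m == q[y - 1][x - 1]:
                p[y][x] = -1
            elif m == q[y - 1][x]:
                p[y][x] = 0
            else:
                p[y][x] = 1
    # Find the cheapest square on the last rank, then walk the predecessors back.
    x = min(range(1, n + 1), key=lambda col: q[n][col])
    cost = q[n][x]
    path = []
    for y in range(n, 0, -1):
        path.append((y, x))
        x += p[y][x]
    path.reverse()
    return cost, path

# A made-up 3 x 3 board, keyed by (rank, column):
grid = {(1, 1): 2, (1, 2): 1, (1, 3): 9,
        (2, 1): 4, (2, 2): 8, (2, 3): 1,
        (3, 1): 3, (3, 2): 2, (3, 3): 5}
cost, path = shortest_path(lambda i, j: grid[(i, j)], 3)
print(cost, path)  # prints 4 [(1, 2), (2, 3), (3, 2)]
```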