
The GridWorld problem

A reinforcement learning task that satisfies the Markov property is called a Markov decision process, or MDP. If the state and action spaces are finite, it is called a finite Markov decision process (finite MDP).

A particular finite MDP is defined by its state and action sets and by the one-step dynamics of the environment. Given any state $s$ and action $a$, the probability of each possible next state and reward pair, $s^{\prime}, r$, is denoted
$$
p(s^{\prime}, r | s, a) \doteq \mathrm{Pr}\{S_{t+1}=s^{\prime}, R_{t+1}=r \ | \ S_{t}=s, A_{t}=a \}
$$
Given these dynamics, one can compute anything else one might want to know about the environment, such as the expected rewards for state-action pairs,
$$
r(s,a) \doteq \mathbb{E}[R_{t+1} \ | \ S_{t}=s, A_{t}=a] = \sum_{r \in \mathcal{R}}r\sum_{s^{\prime} \in \mathcal{S}}p(s^{\prime},r|s, a)
$$
the state-transition probabilities,
$$
p(s^{\prime}|s,a) \doteq \mathrm{Pr}\{S_{t+1}=s^{\prime} \ | \ S_{t}=s, A_{t}=a\} = \sum_{r \in \mathcal{R}} p(s^{\prime},r|s, a)
$$
and the expected rewards for state-action-next-state triples,
$$
r(s, a, s^{\prime}) \doteq \mathbb{E}[R_{t+1} \ | \ S_{t}=s, A_{t}=a, S_{t+1}=s^{\prime}] = \frac{\sum_{r \in \mathcal{R}}rp(s^{\prime},r|s,a)}{p(s^{\prime}|s,a)}
$$
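To make these quantities concrete, here is a minimal sketch in Python for a hypothetical two-state MDP (the states 's0' and 's1', the action 'go', and the dictionary layout are made up for illustration and are unrelated to the gridworld below), showing how the derived quantities follow from the tabulated joint dynamics $p(s^{\prime}, r|s, a)$:

```python
# Hypothetical two-state MDP: p[(s, a)][(s_next, r)] = Pr{S_{t+1} = s_next, R_{t+1} = r | s, a}
p = {
    ('s0', 'go'): {('s0', 0.0): 0.2, ('s1', 1.0): 0.8},
}

def expected_reward(p, s, a):
    # r(s, a) = sum over (s', r) of r * p(s', r | s, a)
    return sum(r * prob for (s_next, r), prob in p[(s, a)].items())

def transition_prob(p, s, a, s_next):
    # p(s' | s, a) = sum over r of p(s', r | s, a)
    return sum(prob for (sn, r), prob in p[(s, a)].items() if sn == s_next)

def expected_reward_triple(p, s, a, s_next):
    # r(s, a, s') = sum over r of r * p(s', r | s, a), divided by p(s' | s, a)
    num = sum(r * prob for (sn, r), prob in p[(s, a)].items() if sn == s_next)
    return num / transition_prob(p, s, a, s_next)

print(expected_reward(p, 's0', 'go'))               # 0.8
print(transition_prob(p, 's0', 'go', 's1'))         # 0.8
print(expected_reward_triple(p, 's0', 'go', 's1'))  # 1.0
```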
Almost all reinforcement learning algorithms involve estimating value functions: functions of states (or of state-action pairs) that estimate how good it is for the agent to be in a given state (or how good it is to perform a given action in a given state).

Recall that a policy, $\pi$, is a mapping from each state, $s \in \mathcal{S}$, and action, $a \in \mathcal{A}(s)$, to the probability $\pi(a|s)$ of taking action $a$ when in state $s$. Informally, the value of a state $s$ under a policy $\pi$, denoted $v_{\pi}(s)$, is the expected return when starting in $s$ and following $\pi$ thereafter. For MDPs, we can define $v_{\pi}(s)$ formally as
$$
v_{\pi}(s) \doteq \mathbb{E_{\pi}}[G_{t} \ | \ S_{t}=s] = \mathbb{E_{\pi}}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \ | \ S_{t}=s \right]
$$
Note that the value of the terminal state, if any, is always zero. We call the function $v_{\pi}$ the state-value function for policy $\pi$.

Similarly, we define the value of taking action a in state s under a policy $\pi$, denoted $q_{\pi}(s,a)$, as the expected return starting from $s$, taking the action $a$, and thereafter following policy $\pi$:
$$
q_{\pi}(s,a) \doteq \mathbb{E_{\pi}}[G_{t} \ | \ S_{t}=s, A_{t}=a] = \mathbb{E_{\pi}}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \ | \ S_{t}=s, A_{t}=a\right]
$$
We call $q_{\pi}$ the action-value function for policy $\pi$.
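The two value functions are closely related: $v_{\pi}$ is the policy-weighted average of $q_{\pi}$, and $q_{\pi}$ can be expanded one step using the dynamics defined above,
$$
v_{\pi}(s) = \sum_{a} \pi(a|s)\, q_{\pi}(s, a), \qquad q_{\pi}(s,a) = \sum_{s^{\prime}, r} p(s^{\prime},r|s,a)\left[ r + \gamma v_{\pi}(s^{\prime}) \right].
$$
Combining these two identities yields the recursive relationship derived next.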

A fundamental property of the value functions used in reinforcement learning and dynamic programming is that they satisfy particular recursive relationships. For any policy $\pi$ and any state $s$, the following consistency condition holds between the value of $s$ and the value of its possible successor states:
$$
\begin{align}
v_{\pi}(s) &\doteq \mathbb{E_{\pi}}[G_{t} \ | \ S_{t}=s] \\
&= \mathbb{E_{\pi}}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \ | \ S_{t}=s \right] \\
&= \mathbb{E_{\pi}}\left[R_{t+1} + \gamma \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+2} \ | \ S_{t}=s \right] \\
&= \sum_{a} \pi(a|s) \sum_{s^{\prime}}\sum_{r}p(s^{\prime},r|s,a) \left[ r + \gamma \mathbb{E_{\pi}} \left[ \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+2} | S_{t+1}=s^{\prime} \right] \right] \\
&= \sum_{a} \pi(a|s) \sum_{s^{\prime}, r}p(s^{\prime},r|s,a) \left[ r + \gamma v_{\pi}(s^{\prime}) \right], \;\;\; \forall s \in \mathcal{S},
\end{align}
$$
The last line above is the Bellman equation for $v_{\pi}$, which we will refer to as equation (11).
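Because the state set is finite, equation (11) is a system of $|\mathcal{S}|$ linear equations in the $|\mathcal{S}|$ unknowns $v_{\pi}(s)$. Writing $r_{\pi}(s) = \sum_{a}\pi(a|s)\,r(s,a)$ and $P_{\pi}(s^{\prime}|s) = \sum_{a}\pi(a|s)\,p(s^{\prime}|s,a)$, it can be written in matrix form as
$$
v_{\pi} = r_{\pi} + \gamma P_{\pi} v_{\pi} \quad \Longrightarrow \quad v_{\pi} = (I - \gamma P_{\pi})^{-1} r_{\pi} \quad (\gamma < 1),
$$
so $v_{\pi}$ can be obtained either by solving this system directly or, as in the code below, by iterating the right-hand side as a fixed-point update until convergence.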

Figure 1 (left) shows a rectangular grid world representation of a simple finite MDP. The cells of the grid correspond to the states of the environment. In each cell, four actions are possible: north, south, east, and west, which deterministically move the agent one cell in the corresponding direction on the grid. Actions that would take the agent off the grid leave its location unchanged but result in a reward of -1. All other actions result in a reward of 0, except those that move the agent out of the special states A and B. From state A, all four actions yield a reward of +10 and take the agent to $\mathrm{A^{\prime}}$. From state B, all actions yield a reward of +5 and take the agent to $\mathrm{B^{\prime}}$.

Figure 1: the gridworld and its dynamics (left) and the state-value function $v_{\pi}$ for the equiprobable random policy (right)

Suppose the agent selects all four actions with equal probability in all states. Figure 1 (right) shows the value function, $v_{\pi}$, for this policy, for the discounted reward case with $\gamma = 0.9$. This value function was computed by solving the system of linear equations (11).

OK, now let us solve this problem. First, we need to define the grid world in code.

```python
import numpy as np

WORLD_SIZE = 5
# positions are [row, column], with [0, 0] the top-left cell of the grid
A_POS = [0, 1]
A_PRIME_POS = [4, 1]
B_POS = [0, 3]
B_PRIME_POS = [2, 3]
discount = 0.9
world = np.zeros((WORLD_SIZE, WORLD_SIZE))
```

This world has 5 by 5 cells, and there are four special cells: A, A', B, B'. The variable discount represents the $\gamma$ in equation (11). We know that the agent selects all four actions with equal probability in every state (cell), so we have:

```python
# left, up, right, down
actions = ['L', 'U', 'R', 'D']
actionProb = []
for i in range(0, WORLD_SIZE):
    actionProb.append([])
    for j in range(0, WORLD_SIZE):
        actionProb[i].append({'L': 0.25, 'U': 0.25, 'R': 0.25, 'D': 0.25})
```

The actionProb is a list with five items; each item represents a row of the grid and is itself a list of five dictionaries, one per cell (column) in that row. In every cell (state), each of the four directions can be selected with equal probability 0.25. Next, we describe the dynamics as a weighted graph: the nodes are the cells of the grid, an edge between two nodes means the agent can move between those two cells, and the weight on the edge is the reward for making that move.

```python
nextState = []
actionReward = []
for i in range(0, WORLD_SIZE):
    nextState.append([])
    actionReward.append([])
    for j in range(0, WORLD_SIZE):
        next = dict()
        reward = dict()
        if i == 0:
            next['U'] = [i, j]
            reward['U'] = -1.0
        else:
            next['U'] = [i - 1, j]
            reward['U'] = 0.0
        if i == WORLD_SIZE - 1:
            next['D'] = [i, j]
            reward['D'] = -1.0
        else:
            next['D'] = [i + 1, j]
            reward['D'] = 0.0
        if j == 0:
            next['L'] = [i, j]
            reward['L'] = -1.0
        else:
            next['L'] = [i, j - 1]
            reward['L'] = 0.0
        if j == WORLD_SIZE - 1:
            next['R'] = [i, j]
            reward['R'] = -1.0
        else:
            next['R'] = [i, j + 1]
            reward['R'] = 0.0
        if [i, j] == A_POS:
            next['L'] = next['R'] = next['D'] = next['U'] = A_PRIME_POS
            reward['L'] = reward['R'] = reward['D'] = reward['U'] = 10.0
        if [i, j] == B_POS:
            next['L'] = next['R'] = next['D'] = next['U'] = B_PRIME_POS
            reward['L'] = reward['R'] = reward['D'] = reward['U'] = 5.0
        nextState[i].append(next)
        actionReward[i].append(reward)
```

The nextState and actionReward have the same nested structure as actionProb described above: a list of rows, each a list of per-cell dictionaries keyed by action, giving the next position and the immediate reward, respectively.
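For example, assuming the code above has run, a quick sanity check of the special states and the grid boundary could look like this (these assertions are only illustrative and are not part of the original program):

```python
# state A = [0, 1]: every action moves the agent to A' with reward +10
assert nextState[0][1]['U'] == A_PRIME_POS and actionReward[0][1]['U'] == 10.0
# state B = [0, 3]: every action moves the agent to B' with reward +5
assert nextState[0][3]['D'] == B_PRIME_POS and actionReward[0][3]['D'] == 5.0
# moving off the grid (down from the bottom-left cell) leaves the position unchanged and costs -1
assert nextState[4][0]['D'] == [4, 0] and actionReward[4][0]['D'] == -1.0
```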

Now we can solve this problem using equation (11):
$$
\begin{align}
v_{\pi}(s) &\doteq \sum_{a} \pi(a|s) \sum_{s^{\prime}, r}p(s^{\prime},r|s,a) \left[ r + \gamma v_{\pi}(s^{\prime}) \right], \;\;\; \forall s \in \mathcal{S},
\end{align}
$$
Let us jump into the implementation details.

```python
while True:
    # keep iteration until convergence
    newWorld = np.zeros((WORLD_SIZE, WORLD_SIZE))
    for i in range(0, WORLD_SIZE):
        for j in range(0, WORLD_SIZE):
            for action in actions:
                newPosition = nextState[i][j][action]
                # bellman equation
                newWorld[i, j] += actionProb[i][j][action] * (actionReward[i][j][action] + discount * world[newPosition[0], newPosition[1]])
    if np.sum(np.abs(world - newWorld)) < 1e-4:
        print('Random Policy')
        print(newWorld)
        break
    world = newWorld
```

The core code is:

```python
newWorld[i, j] += actionProb[i][j][action] * (actionReward[i][j][action] + discount * world[newPosition[0], newPosition[1]])
```

The += accumulates the outer sum over actions in equation (11). Because the dynamics of this grid world are deterministic, once the current state (cell) and the chosen action are fixed, the next state and the reward are determined as well, so the inner sum over $s^{\prime}$ and $r$ reduces to a single term with $p(s^{\prime}, r | s, a) = 1$.
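Concretely, because the dynamics are deterministic, the update computed in the loop above is
$$
v_{k+1}(s) = \sum_{a} \pi(a|s)\left[ r(s,a) + \gamma\, v_{k}\big(s^{\prime}(s,a)\big) \right],
$$
where $s^{\prime}(s,a)$ is the unique next state stored in nextState and $r(s,a)$ is the reward stored in actionReward. Sweeping this backup over all cells until the values stop changing is iterative policy evaluation.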

The result is as follows:

```
Random Policy
[[ 3.30902999  8.78932551  4.42765281  5.3224012   1.49221235]
 [ 1.52162172  2.9923515   2.25017358  1.90760531  0.5474363 ]
 [ 0.05085614  0.73820423  0.67314689  0.35821982 -0.40310755]
 [-0.97355865 -0.43546179 -0.35484864 -0.58557148 -1.18304148]
 [-1.8576669  -1.34519762 -1.22923364 -1.42288454 -1.97514545]]
```

We can see that the values of all states are the same as in Figure 1 (right).
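As a cross-check, the same value function can also be obtained exactly by solving the linear system (11) directly, as mentioned earlier. Here is a minimal sketch that reuses the structures defined above; the helper names n, P, r_pi, and v are mine and not part of the original code:

```python
import numpy as np

# Build P_pi (the state-transition matrix under the random policy) and r_pi
# (the expected one-step reward) by flattening each cell (i, j) to index i * WORLD_SIZE + j.
n = WORLD_SIZE * WORLD_SIZE
P = np.zeros((n, n))
r_pi = np.zeros(n)
for i in range(WORLD_SIZE):
    for j in range(WORLD_SIZE):
        s = i * WORLD_SIZE + j
        for action in actions:
            prob = actionProb[i][j][action]
            ni, nj = nextState[i][j][action]
            P[s, ni * WORLD_SIZE + nj] += prob
            r_pi[s] += prob * actionReward[i][j][action]

# Solve v = r_pi + gamma * P v, i.e. (I - gamma * P) v = r_pi
v = np.linalg.solve(np.eye(n) - discount * P, r_pi)
print(v.reshape(WORLD_SIZE, WORLD_SIZE))  # should match the iterative result above
```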

Solving a reinforcement learning task means, roughly, finding a policy that achieves a lot of reward over the long run. For finite MDPs, we can precisely define an optimal policy in the following way. Value functions define a partial ordering over policies. A policy $\pi$ is defined to be better than or equal to a policy $\pi^{\prime}$ if its expected return is greater than or equal to that of $\pi^{\prime}$ for all states. In other words, $\pi \ge \pi^{\prime}$ if and only if $v_{\pi}(s) \ge v_{\pi^{\prime}}(s)$ for all $s \in \mathcal{S}$. There is always at least one policy that is better than or equal to all other policies; this is an optimal policy. Although there may be more than one, we denote all the optimal policies by $\pi_{\star}$. They share the same state-value function, called the optimal state-value function, denoted $v_{\star}$, and defined as
$$
v_{\star}(s) \doteq \max_{\pi} v_{\pi}(s),
$$
for all $s \in \mathcal{S}$.

Optimal policies also share the same optimal action-value function, denoted $q_{\star}$, and defined as
$$
q_{\star}(s, a) \doteq \max_{\pi} q_{\pi}(s, a)
$$
for all $s \in \mathcal{S}$ and $a \in \mathcal{A}(s)$. For the state-action pair $(s, a)$, this function gives the expected return for taking action $a$ in state $s$ and thereafter following an optimal policy. Thus, we can write $q_{\star}$ in terms of $v_{\star}$ as follows:
$$
q_{\star}(s, a) = \mathbb{E}[R_{t+1} + \gamma v_{\star}(S_{t+1}) \ | \ S_{t}=s, A_{t}=a]
$$
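Since $v_{\star}(s) = \max_{a} q_{\star}(s, a)$, substituting the dynamics gives the Bellman optimality equation,
$$
v_{\star}(s) = \max_{a} \sum_{s^{\prime}, r} p(s^{\prime},r|s,a)\left[ r + \gamma v_{\star}(s^{\prime}) \right],
$$
and the code below computes $v_{\star}$ by repeatedly applying the right-hand side as an update (value iteration); with deterministic dynamics the inner sum again collapses to a single term.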
Suppose we solve the Bellman equation for $v_{\star}$ for the simple grid task introduced earlier and shown again in Figure 2 (left). Recall that state A is followed by a reward of +10 and a transition to state $\mathrm{A^{\prime}}$, while state B is followed by a reward of +5 and a transition to state $\mathrm{B^{\prime}}$. Figure 2 (middle) shows the optimal value function, and Figure 2 (right) shows the corresponding optimal policies. Where there are multiple arrows in a cell, any of the corresponding actions is optimal.

Figure 2: the gridworld (left), the optimal state-value function $v_{\star}$ (middle), and the corresponding optimal policies $\pi_{\star}$ (right)

Now, let us solve this problem:

```python
world = np.zeros((WORLD_SIZE, WORLD_SIZE))
while True:
    # keep iteration until convergence
    newWorld = np.zeros((WORLD_SIZE, WORLD_SIZE))
    for i in range(0, WORLD_SIZE):
        for j in range(0, WORLD_SIZE):
            values = []
            for action in actions:
                newPosition = nextState[i][j][action]
                # value iteration
                values.append(actionReward[i][j][action] + discount * world[newPosition[0], newPosition[1]])
            newWorld[i][j] = np.max(values)
    if np.sum(np.abs(world - newWorld)) < 1e-4:
        print('Optimal Policy')
        print(newWorld)
        break
    world = newWorld
```

This time, the core code is as follows:

```python
newWorld[i][j] = np.max(values)
```

The only difference from the earlier code is that this version takes the maximum over the action values, whereas the earlier version took the probability-weighted average over actions.

The result is below:

```
Optimal Policy
[[ 21.97744338  24.41938153  21.97744338  19.41938153  17.47744338]
 [ 19.77969904  21.97744338  19.77969904  17.80172914  16.02153504]
 [ 17.80172914  19.77969904  17.80172914  16.02153504  14.41938153]
 [ 16.02153504  17.80172914  16.02153504  14.41938153  12.97744338]
 [ 14.41938153  16.02153504  14.41938153  12.97744338  11.67969904]]
```

There is no doubt that the result is the same as Figure 2 (middle).
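Figure 2 (right) also shows the optimal policies as arrows. As a final step, here is a minimal sketch that recovers the greedy (optimal) actions from the converged values; the name optimal_actions and the tie tolerance are my own choices, not part of the original code:

```python
# After the loop above, newWorld holds the (approximately) optimal state values.
# For each cell, keep every action whose one-step backup ties with the best one;
# the 1e-3 tolerance is deliberately loose because the values are only converged to ~1e-4.
optimal_actions = []
for i in range(WORLD_SIZE):
    row = []
    for j in range(WORLD_SIZE):
        backups = {}
        for action in actions:
            ni, nj = nextState[i][j][action]
            backups[action] = actionReward[i][j][action] + discount * newWorld[ni, nj]
        best = max(backups.values())
        row.append([a for a in actions if best - backups[a] < 1e-3])
    optimal_actions.append(row)
print(optimal_actions)  # multiple entries in a cell mean any of those actions is optimal
```

Cells with several listed actions correspond to the cells with multiple arrows in Figure 2 (right).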