Markov Decision Process environment.
| Argument | Description |
|---|---|
| `transitions` | [`array`] State transition array of dimension n.states x n.states x n.actions. |
| `rewards` | [`matrix`] Reward matrix of dimension n.states x n.actions. |
| `initial.state` | [`integer`] Optional starting state returned by `$reset()` (states are numbered starting at 0). |
| `...` | [`any`] Further arguments. |
```r
makeEnvironment("MDP", transitions, rewards, initial.state, ...)
```
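For concreteness, the sketch below spells out the expected shapes: `transitions` holds one n.states x n.states transition matrix per action, where each row is a probability distribution over next states, and `rewards` has one row per state and one column per action. Passing `initial.state = 0L` is an assumption based on `$reset()` returning state `0` in the example further down; the argument presumably fixes the starting state.

```r
# Minimal sketch, assuming makeEnvironment() is available and that states
# and actions are numbered from 0 (as suggested by the example further down).

# Transition array: n.states x n.states x n.actions (here 2 x 2 x 2).
# Each row of every slice is a probability distribution over next states.
P = array(0, c(2, 2, 2))
P[, , 1] = matrix(c(0.5, 0.5, 0, 1), 2, 2, byrow = TRUE)
P[, , 2] = matrix(c(0, 1, 0, 1), 2, 2, byrow = TRUE)
apply(P, 3, rowSums)  # every row sums to 1

# Reward matrix: n.states x n.actions.
R = matrix(c(5, 10, -1, 2), 2, 2, byrow = TRUE)

# initial.state = 0L (assumed integer state index) fixes the starting state.
env = makeEnvironment("mdp", transitions = P, rewards = R, initial.state = 0L)
```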
$step(action)
Take an action in the environment. Returns a list with `state`, `reward` and `done`.
$reset()
Resets the `done` flag of the environment and returns an initial state. Useful when starting a new episode.
$visualize()
Visualizes the environment (if there is a visualization function).
```r
# Create a Markov Decision Process.
P = array(0, c(2, 2, 2))
P[, , 1] = matrix(c(0.5, 0.5, 0, 1), 2, 2, byrow = TRUE)
P[, , 2] = matrix(c(0, 1, 0, 1), 2, 2, byrow = TRUE)
R = matrix(c(5, 10, -1, 2), 2, 2, byrow = TRUE)
env = makeEnvironment("mdp", transitions = P, rewards = R)

env$reset()
#> [1] 0

env$step(1L)
#> $state
#> [1] 1
#>
#> $reward
#> [1] 10
#>
#> $done
#> [1] TRUE
```
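Building on the example above, a full episode can be run by calling `$reset()` and then `$step()` repeatedly until the returned `done` flag is `TRUE`. This is a minimal sketch: the random action selection with `sample(0L:1L, 1)` assumes actions are indexed from 0, which is consistent with the reward of 10 (`R[1, 2]`, i.e. the second action column) obtained for action `1L` in state 0 above.

```r
# Minimal sketch: run one episode by stepping until done is TRUE.
# Assumes actions are indexed from 0, in line with the 0-based states
# seen in the example above.
P = array(0, c(2, 2, 2))
P[, , 1] = matrix(c(0.5, 0.5, 0, 1), 2, 2, byrow = TRUE)
P[, , 2] = matrix(c(0, 1, 0, 1), 2, 2, byrow = TRUE)
R = matrix(c(5, 10, -1, 2), 2, 2, byrow = TRUE)
env = makeEnvironment("mdp", transitions = P, rewards = R)

state = env$reset()          # start a new episode
episode.return = 0
done = FALSE
while (!done) {
  action = sample(0L:1L, 1)  # pick one of the two actions at random
  res = env$step(action)     # returns list(state, reward, done)
  episode.return = episode.return + res$reward
  state = res$state
  done = res$done
}
episode.return
```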