
pomdp (version 0.99.0)

transition_matrix: Extract the Transition, Observation or Reward Matrices from a POMDP

Description

Converts the transition probability and observation probability specifications of a POMDP into a list of matrices, one for each action. Rewards are converted into a list (one element per action) of lists (one element per start state) of matrices.

Usage

transition_matrix(x, episode = 1)

Arguments

x

A POMDP object.

episode

Episode used for time-dependent POMDPs (see POMDP).

Value

A list of matrices, one for each action; for rewards, a list (one element per action) of lists (one element per start state) of matrices.
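
A minimal sketch of indexing the returned structures, assuming the Tiger model that ships with the package and that the list elements carry its action and state labels ("open-left", "tiger-left"):

data("Tiger")

# transition probabilities: one states x states matrix per action,
# indexed by action name
transition_matrix(Tiger)$"open-left"

# rewards: a list (actions) of lists (start states) of matrices;
# the labels "open-left" and "tiger-left" are assumed here
reward_matrix(Tiger)$"open-left"$"tiger-left"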

See Also

POMDP

Examples

data("Tiger")

# transition matrices for each action in the form states x states
transition_matrix(Tiger)

# observation matrices for each action in the form states x observations
observation_matrix(Tiger)

# reward matrices for each action and (start) state in
# the form (end) state x observation
reward_matrix(Tiger)

# Visualize transition matrix for action 'open-left'
library("igraph")
g <- graph_from_adjacency_matrix(transition_matrix(Tiger)$"open-left", weighted = TRUE)
edge_attr(g, "label") <- edge_attr(g, "weight")

igraph.options("edge.curved" = TRUE)
plot(g, layout = layout_on_grid, main = "Transitions for action 'open-left'")

