An R6 class representing a semi-Markov model for cohort simulation.
A Markov model is a directed multidigraph permitting loops (a loop multidigraph), optionally labelled, or a quiver. It is a multidigraph because there are potentially two edges between each pair of nodes A and B, representing the transition probabilities from A to B and from B to A. It is a directed graph because the transition probabilities refer to transitions in one direction. Each edge can be optionally labelled. It permits self-loops (edges whose source and target are the same node) to represent patients who remain in the same state between cycles.
Beck and Pauker (1983) and later Sonnenberg and Beck (1993) proposed the use of Markov processes to model the health economics of medical interventions. Further, they introduced the concept of temporary states, in which patients who transition to them remain for exactly one cycle. This breaks the principle that Markov processes are memoryless, and thus the underlying mathematical formalism, first developed by Kolmogorov, is not applicable. For example, ensuring that all patients leave a temporary state requires its transition rate to be infinite. Hence, such models are usually labelled as semi-Markov processes.
Miller and Homan (1994) and Fleurence & Hollenbeak (2007) provide advice on estimating probabilities from rates. Jones (2017) and Welton (2005) describe methods for estimating probabilities in multi-state, multi-transition models, although those methods may not apply to semi-Markov models with temporary states. In particular note that the "simple" equation, \(p = 1-e^{-rt}\) (Briggs 2006) applies only in a two-state, one transition model.
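For the two-state, one-transition case, this conversion can be written directly in base R; the rate value below is illustrative:

# Annual event rate (events per person-year), e.g. taken from published
# incidence data (illustrative value).
r <- 0.18
# Cycle length in years.
t <- 1.0
# Per-cycle transition probability; valid only in a two-state,
# one-transition model (Briggs 2006).
p <- 1.0 - exp(-r * t)
# The inverse relationship, recovering the rate from the probability.
r.check <- -log(1.0 - p) / t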
In semi-Markov models, the conditional probabilities of the transitions from each state are usually modelled by a Dirichlet distribution. In rdecision, create a Dirichlet distribution for each state and optionally create model variables for each conditional probability (\(\rho_{ij}\)) linked to an applicable Dirichlet distribution.
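As a minimal illustration of the sampling idea (using base R rather than rdecision's distribution classes, and with illustrative counts), one draw of the conditional probabilities out of a single state can be obtained by normalizing independent gamma variates:

# Illustrative transition counts observed from one state, used as the
# parameters of a Dirichlet distribution for the conditional
# probabilities of its outgoing transitions.
alpha <- c(A = 450.0, B = 40.0, C = 10.0)
# One draw from Dirichlet(alpha): independent gamma variates normalized
# to sum to one.
g <- rgamma(n = length(alpha), shape = alpha, rate = 1.0)
rho <- g / sum(g)
names(rho) <- names(alpha)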
Author: Andrew J. Sims <andrew.sims@newcastle.ac.uk>
rdecision::Graph
-> rdecision::Digraph
-> SemiMarkovModel
Inherited methods
rdecision::Graph$degree()
rdecision::Graph$edge_along()
rdecision::Graph$edge_at()
rdecision::Graph$edge_index()
rdecision::Graph$graph_adjacency_matrix()
rdecision::Graph$has_edge()
rdecision::Graph$has_vertex()
rdecision::Graph$is_simple()
rdecision::Graph$neighbours()
rdecision::Graph$order()
rdecision::Graph$size()
rdecision::Graph$vertex_along()
rdecision::Graph$vertex_at()
rdecision::Graph$vertex_index()
rdecision::Digraph$as_DOT()
rdecision::Digraph$digraph_adjacency_matrix()
rdecision::Digraph$digraph_incidence_matrix()
rdecision::Digraph$direct_predecessors()
rdecision::Digraph$direct_successors()
rdecision::Digraph$is_acyclic()
rdecision::Digraph$is_arborescence()
rdecision::Digraph$is_connected()
rdecision::Digraph$is_polytree()
rdecision::Digraph$is_tree()
rdecision::Digraph$is_weakly_connected()
rdecision::Digraph$paths()
rdecision::Digraph$topological_sort()
rdecision::Digraph$walk()
new()
Creates a semi-Markov model for cohort simulation.
SemiMarkovModel$new(
V,
E,
tcycle = as.difftime(365.25, units = "days"),
discount.cost = 0,
discount.utility = 0
)
V
A list of nodes (MarkovState objects).
E
A list of edges (Transition objects).
tcycle
Cycle length, expressed as an R difftime
object.
discount.cost
Annual discount rate for future costs. Note this is a rate, not a probability (i.e. use 0.035 for 3.5%).
discount.utility
Annual discount rate for future incremental utility. Note this is a rate, not a probability (i.e. use 0.035 for 3.5%).
A semi-Markov model must meet the following conditions:
It must have at least one node and at least one edge.
All nodes must be of class MarkovState;
All edges must be of class Transition;
The nodes and edges must form a digraph whose underlying graph is connected;
Each state must have at least one outgoing transition (for absorbing states this is a self-loop);
For each state the sum of outgoing conditional transition probabilities must be one. For convenience, one outgoing transition probability from each state may be set to NA when the probabilities are defined. Typically, probabilities for self loops would be set to NA. Transition probabilities in \(Pt\) associated with transitions that are not defined as edges in the graph are zero. Probabilities can be changed between cycles.
No two edges may share the same source and target nodes (i.e. the digraph may not have multiple edges). This is to ensure that there are no more transitions than cells in the transition matrix.
The node labels must be unique to the graph.
A SemiMarkovModel
object. The population of the first
state is set to 1000 and from each state there is an equal
conditional probability of each allowed transition.
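A minimal sketch of constructing a three-state model is shown below. The costs and utilities are illustrative, and the argument names of MarkovState$new() and Transition$new() are assumed here and should be checked against their own documentation; the SemiMarkovModel$new() arguments are as described above.

library("rdecision")

# Two health states and one absorbing state (illustrative values).
s.well <- MarkovState$new("Well", utility = 1.0)
s.ill <- MarkovState$new("Ill", cost = 2500.0, utility = 0.7)
s.dead <- MarkovState$new("Dead", utility = 0.0)

# Allowed transitions, including self-loops so that every state,
# including the absorbing state, has at least one outgoing edge.
E <- list(
  Transition$new(s.well, s.well),
  Transition$new(s.well, s.ill),
  Transition$new(s.well, s.dead),
  Transition$new(s.ill, s.ill),
  Transition$new(s.ill, s.dead),
  Transition$new(s.dead, s.dead)
)

# One-year cycles with 3.5% annual discounting of costs and utilities.
m <- SemiMarkovModel$new(
  V = list(s.well, s.ill, s.dead),
  E = E,
  tcycle = as.difftime(365.25, units = "days"),
  discount.cost = 0.035,
  discount.utility = 0.035
)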
set_probabilities()
Sets transition probabilities.
SemiMarkovModel$set_probabilities(Pt)
Pt
Per-cycle transition probability matrix. The row and
column labels must be the state names and each row must sum to one.
Non-zero probabilities for undefined transitions are not allowed. At
most one NA
may appear in each row. If an NA is present in a row,
it is replaced by 1 minus the sum of the defined probabilities.
Updated SemiMarkovModel object.
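Continuing the sketch above (with illustrative probabilities), a per-cycle transition matrix with one NA per row, placed on the self-loops, could be applied as follows:

snames <- c("Well", "Ill", "Dead")
# Undefined transitions (e.g. Ill to Well) must be zero; each row,
# once the NA is resolved, must sum to one.
Pt <- matrix(
  data = c(
    NA,   0.20, 0.02,
    0.00, NA,   0.30,
    0.00, 0.00, NA
  ),
  nrow = 3L, byrow = TRUE,
  dimnames = list(snames, snames)
)
m$set_probabilities(Pt)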
transition_probabilities()
Per-cycle transition probability matrix for the model.
SemiMarkovModel$transition_probabilities()
A square matrix of size equal to the number of states. If all states are labelled, the dimnames take the names of the states.
transition_cost()
Returns the per-cycle transition costs for the model.
SemiMarkovModel$transition_cost()
A square matrix of size equal to the number of states. If all states are labelled, the dimnames take the names of the states.
get_statenames()
Returns a character list of state names.
SemiMarkovModel$get_statenames()
List of the names of each state.
reset()
Resets the model counters.
SemiMarkovModel$reset(
populations = NULL,
icycle = as.integer(0),
elapsed = as.difftime(0, units = "days")
)
populations
A named vector of state populations at the start of the run. The names should be the state names. Due to the R implementation of matrix algebra, populations must be of numeric type and are not restricted to being integers. If NULL, the population of the first state is set to 1000 and the others to zero.
icycle
Cycle number at which to start/restart.
elapsed
Elapsed time since the index (reference) time used for discounting, as an R difftime object.
Resets the state populations, next cycle number and elapsed time of the model. By default the model is returned to its ground state (1000 people in the first state and zero in the others; next cycle is labelled zero; elapsed time is zero). Any or all of these can be set via this function. icycle is simply an integer counter label for each cycle; elapsed sets the elapsed time from the index time from which discounting is assumed to apply, as an R difftime object.
Updated SemiMarkovModel
object.
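For example, the model m sketched above could be restarted with a larger cohort and a six-month offset from the discounting index time (values are illustrative):

m$reset(
  populations = c(Well = 10000.0, Ill = 0.0, Dead = 0.0),
  icycle = 0L,
  elapsed = as.difftime(182.6, units = "days")
)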
get_populations()
Gets the occupancy of each state.
SemiMarkovModel$get_populations()
A numeric vector of populations, named with state names.
get_elapsed()
Gets the current elapsed time.
SemiMarkovModel$get_elapsed()
The elapsed time is defined as the difference between the
current time in the model and an index time used as the reference
time for applying discounting. By default the elapsed time starts at
zero. It can be set directly by calling reset. It is incremented after each call to cycle by the cycle duration to the time at the end of the cycle (even if half cycle correction is used). Thus, via the reset and cycle methods, the time of each cycle relative to the discounting index and its duration can be controlled arbitrarily.
Elapsed time as an R difftime
object.
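Because difftime objects do not support a unit of years directly, the elapsed time can be converted via days, for example:

# Convert the elapsed time of the model m sketched above to years.
elapsed.years <- as.numeric(m$get_elapsed(), units = "days") / 365.25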
tabulate_states()
Tabulation of states.
SemiMarkovModel$tabulate_states()
Creates a data frame summary of each state in the model.
A data frame with the following columns:
State name.
Annual cost of occupying the state.
Incremental utility associated with being in the state.
cycle()
Applies one cycle of the model.
SemiMarkovModel$cycle(hcc.pop = TRUE, hcc.cost = TRUE)
hcc.pop
Boolean; whether to apply half cycle correction to the
population and QALY. If TRUE, the correction is only applied to the
outputs of functions cycle and cycles; the state population passed to the next cycle is the end cycle population, obtainable with get_populations.
hcc.cost
Boolean; whether to apply half cycle correction to the
costs. If true, the occupancy costs are computed using the population
at half cycle; if false they are applied at the end of the cycle.
Applicable only if hcc.pop
is TRUE.
Calculated values, one row per state, as a data frame with the following columns:
State
Name of the state.
Cycle
The cycle number.
Time
Clock time, years.
Population
Population of the state at the end of the cycle, or at mid-cycle if half-cycle correction is applied.
OccCost
Cost of the population occupying the state for the cycle. Discounting is applied, if the option is set. The costs are normalized by the model population. The cycle costs are derived from the annual occupancy costs of the MarkovState objects. Applied to the end-of-cycle population, i.e. unaffected by half cycle correction, as per Briggs et al.
EntryCost
Cost of the transitions into the state
during the cycle. Discounting is applied, if the option is set.
The result is normalized by the model population. The cycle costs
are derived from Transition
costs.
Cost
Total cost, normalized by model population.
QALY
Quality adjusted life years gained by occupancy of the states during the cycle. Half cycle correction and discounting are applied, if these options are set. Normalized by the model population.
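A minimal sketch of applying one cycle to the model m above, with half cycle correction of the population and QALYs but with occupancy costs evaluated at the end of the cycle:

cr <- m$cycle(hcc.pop = TRUE, hcc.cost = FALSE)
# One row per state, with columns State, Cycle, Time, Population,
# OccCost, EntryCost, Cost and QALY.
print(cr)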
cycles()
Applies multiple cycles of the model.
SemiMarkovModel$cycles(ncycles = 2, hcc.pop = TRUE, hcc.cost = TRUE)
ncycles
Number of cycles to run; default is 2.
hcc.pop
Boolean; whether to apply half cycle correction to the
population and QALY. If TRUE, the correction is only applied to the
outputs of functions cycle and cycles; the state population passed to the next cycle is the end cycle population, obtainable with get_populations.
hcc.cost
Boolean; whether to apply half cycle correction to the
costs. If true, the occupancy costs are computed using the population
at half cycle; if false they are applied at the end of the cycle.
Applicable only if hcc.pop
is TRUE.
The starting populations are redistributed through the
transition probabilities and the state occupancy costs are
calculated, using function cycle. The end populations are
then fed back into the model for a further cycle and the
process is repeated. For each cycle, the state populations and
the aggregated occupancy costs are saved in one row of the
returned data frame, with the cycle number. If the cycle count
for the model is zero when called, the first cycle reported
will be cycle zero, i.e. the distribution of patients to starting
states.
Data frame with cycle results, with the following columns:
Cycle
The cycle number.
Years
Elapsed time at the end of the cycle, in years.
Cost
Cost associated with occupancy and transitions between states during the cycle.
QALY
Quality adjusted life years associated with occupancy of the states in the cycle.
<name>
Population of state <name>
at the end of
the cycle.
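For example, the model m sketched above could be run for 20 yearly cycles and per-patient totals aggregated from the returned data frame:

m$reset()
tr <- m$cycles(ncycles = 20L, hcc.pop = TRUE, hcc.cost = TRUE)
# Total (discounted) cost and QALYs per patient over the model horizon.
total.cost <- sum(tr$Cost)
total.qaly <- sum(tr$QALY)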
modvars()
Find all the model variables in the Markov model.
SemiMarkovModel$modvars()
Returns variables of type ModVar
that have been
specified as values associated with transition rates and costs, and with the state occupancy costs and utilities.
A list of ModVar objects.
modvar_table()
Tabulate the model variables in the Markov model.
SemiMarkovModel$modvar_table(expressions = TRUE)
expressions
A logical that defines whether expression model variables should be included in the tabulation.
Data frame with one row per model variable, as follows:
Description
As given at initialization.
Units
Units of the variable.
Distribution
Either the uncertainty distribution, if
it is a regular model variable, or the expression used to create it,
if it is an ExprModVar.
Mean
Mean; calculated from means of operands if an expression.
E
Expectation; estimated from random sample if expression, mean otherwise.
SD
Standard deviation; estimated from random sample if expression, exact value otherwise.
Q2.5
p=0.025 quantile; estimated from random sample if expression, exact value otherwise.
Q97.5
p=0.975 quantile; estimated from random sample if expression, exact value otherwise.
Est
TRUE if the quantiles and SD have been estimated by random sampling.
clone()
The objects of this class are cloneable with this method.
SemiMarkovModel$clone(deep = FALSE)
deep
Whether to make a deep clone.
A class to represent a continuous time semi-Markov chain, modelled using cohort simulation. As interpreted in rdecision, semi-Markov models may include temporary states and transitions are defined by per-cycle probabilities. Although used widely in health economic modelling, the differences between semi-Markov models and Markov processes introduce some caveats for modellers:
If there are temporary states, the result will depend on cycle length.
Transitions are specified by their conditional probability, which is a per-cycle probability of starting a cycle in one state and ending it in another; if the cycle length changes, the probabilities should change too, as illustrated in the sketch after this list.
Probabilities and rates cannot be linked by the Kolmogorov forward equation, where the per-cycle probabilities are given by the matrix exponential of the transition rate matrix, because this equation does not apply if there are temporary states. In creating semi-Markov models, it is the modeller's task to estimate probabilities from published data on event rates.
The cycle time cannot be changed during the simulation.
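The dependence of per-cycle probabilities on cycle length can be seen even in the simple two-state case; the rate below is illustrative:

r <- 0.20                          # annual event rate
p.year <- 1.0 - exp(-r)            # one-year cycle, approx. 0.181
p.month <- 1.0 - exp(-r / 12.0)    # one-month cycle, approx. 0.0165
# Naively dividing the annual probability by 12 gives a different value
# (approx. 0.0151); no such simple rescaling is available in multi-state
# models with temporary states.
p.naive <- p.year / 12.0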
Beck JR and Pauker SG. The Markov Process in Medical Prognosis. Medical Decision Making, 1983;3:419-458.
Briggs A, Claxton K, Sculpher M. Decision modelling for health economic evaluation. Oxford, UK: Oxford University Press; 2006.
Fleurence RL and Hollenbeak CS. Rates and probabilities in economic modelling. PharmacoEconomics, 2007;25:3-6.
Jones E, Epstein D and García-Mochón L. A procedure for deriving formulas to convert transition rates to probabilities for multistate Markov models. Medical Decision Making, 2017;37:779-789.
Miller DK and Homan SM. Determining transition probabilities: confusion and suggestions. Medical Decision Making, 1994;14:52-58.
Sonnenberg FA and Beck JR. Markov models in medical decision making: a practical guide. Medical Decision Making, 1993;13:322.
Welton NJ and Ades A. Estimation of Markov chain transition probabilities and rates from fully and partially observed data: uncertainty propagation, evidence synthesis, and model calibration. Medical Decision Making, 2005;25:633-645.