
PST (version 0.81)

pstree: Build a probabilistic suffix tree

Description

Build a probabilistic suffix tree that stores a variable length Markov chain (VLMC) model.

Usage

## S3 method for class 'stslist'
pstree(object, group, L, cdata = NULL, stationary = TRUE,
	nmin = 1, ymin = NULL, weighted = TRUE, with.missing = FALSE)

Arguments

object
a sequence object, i.e., an object of class 'stslist' as created by the TraMineR seqdef function.
group
a vector giving the group membership of each observation in object. If specified, a 'grouped' PST is produced, containing one PST for each group (see the example call after this argument list).
cdata
Not implemented yet.
stationary
Not implemented yet.
L
integer. Maximal depth of the PST. Defaults to the maximum length of the sequence(s) in object minus 1.
nmin
integer. Minimum number of occurrences of a string required for it to be added to the tree.
ymin
numeric. Smoothing parameter for the estimated probabilities, ensuring that no symbol is predicted to have a null probability.
weighted
logical. If TRUE, weights attached to the sequence object are used in the estimation of probabilities.
with.missing
logical. If TRUE, the missing state is added to the alphabet.
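
For orientation, a call combining several of these arguments could look as follows. This is only a sketch: the sequence object my.seq and the grouping vector my.group are hypothetical placeholders, not data shipped with the package.

## Hypothetical call illustrating the main arguments (sketch, not run):
## 'my.seq' stands for any stslist object, 'my.group' for a grouping vector.
# S.grouped <- pstree(my.seq, group = my.group, L = 5, nmin = 2,
#                     ymin = 0.001, weighted = TRUE, with.missing = TRUE)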

Value

  • An object of class "PSTf".

Details

A probabilistic suffix tree (PST) is built from a learning sample of $n, \; n \geq 1$ sequences by successively adding nodes corresponding to subsequences (contexts) $c$ of length $L, \; 0 \leq L \leq L_{max}$. When $L_{max}$ is not set by the user, it defaults to its theoretical maximum $\ell-1$, where $\ell$ is the maximum sequence length in the learning sample. Each node of the tree is labelled with a context $c$ and stores the empirical next-symbol probability distribution $\hat{P}(\sigma|c), \; \sigma \in A$, where $A$ is an alphabet of finite size. The root node, labelled with the empty string $e$, stores the 0th-order probability $\hat{P}(\sigma), \; \sigma \in A$ of observing each symbol of the alphabet in the whole learning sample. The building algorithm calls the cprob function, which returns the empirical next-symbol counts observed after each context $c$, and computes the corresponding empirical probability distribution. Each node in the tree is connected to its longest suffix, where the longest suffix of a string $c=c_{1},c_{2}, \ldots, c_{k}$ of length $k$ is $suffix(c)=c_{2}, \ldots, c_{k}$.
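
The following base-R sketch, independent of the package internals, illustrates the quantity stored in each node: the empirical next-symbol distribution $\hat{P}(\sigma|c)$ observed after one given context in a toy sequence. The toy data, the alphabet and the context are made up for the illustration.

## Minimal sketch of the empirical next-symbol distribution P(sigma | c)
## for a single context, using base R only (toy data, for illustration).
x   <- c("a", "b", "a", "a", "b", "a", "b", "b", "a", "a")  # toy sequence
A   <- c("a", "b")                                          # alphabet
ctx <- c("a", "b")                                          # context c of length 2
## positions where the context occurs and is followed by at least one symbol
pos <- which(sapply(seq_len(length(x) - length(ctx)),
                    function(i) all(x[i:(i + length(ctx) - 1)] == ctx)))
nxt <- x[pos + length(ctx)]               # symbols observed right after c
counts <- table(factor(nxt, levels = A))  # empirical next-symbol counts
counts / sum(counts)                      # empirical P(sigma | c)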

References

Maechler, M. & Buehlmann, P. (2004). Variable Length Markov Chains: Methodology, Computing, and Software. Journal of Computational and Graphical Statistics, 13, 435-455.

Ron, D., Singer, Y. & Tishby, N. (1996). The power of amnesia: Learning probabilistic automata with variable memory length. Machine Learning, 25, 117-149.

Bejerano, G. & Yona, G. (2001). Variations on probabilistic suffix trees: statistical modeling and prediction of protein families. Bioinformatics, 17, 23-43.

See Also

prune

Examples

## Build a PST on a single example sequence
library(TraMineR)            # provides seqdef()
library(PST)
data(s1)
s1.seq <- seqdef(s1)         # create the state sequence (stslist) object
s1.seq
S1 <- pstree(s1.seq, L = 3)  # grow the PST up to depth 3
print(S1, digits = 3)
S1
