Usage:

  RJaCGH(y, Chrom = NULL, Start = NULL, End = NULL, Pos = NULL,
         Dist = NULL, probe.names = NULL, maxVar = NULL,
         model = "Genome", var.equal = TRUE, max.dist = NULL,
         normal.reference = 0, window = NULL, burnin = 10000,
         TOT = 10000, k.max = 6, stat = NULL, mu.alfa = NULL,
         mu.beta = NULL, s1 = NULL, s2 = NULL, init.mu = NULL,
         init.sigma.2 = NULL, init.beta = NULL, prob.k = NULL,
         jump.parameters = list(), start.k = NULL, RJ = TRUE,
         NC = 1, deltaT = 2, delete_gzipped = .__DELETE_GZIPPED,
         singleState = FALSE)

Arguments:

  y: Vector with the observed values (log ratios) for each probe.

  Chrom: Vector with the chromosome of each probe.

  Start, End: Vectors with the start and end position of each probe.
    You can pass either Start and End or Pos, but only in one of the
    two ways.

  Pos: Vector with the position of each probe.

  Dist: Optional vector with the distances between contiguous probes.
    If Chrom is not NULL, every last value of every chromosome is not
    used.

  maxVar: Maximum variance of the hidden states. If NULL, the range
    of the data is chosen.

  model: If model = "Genome", the same model is fitted for the whole
    genome. If model = "Chrom", a different model is fitted for each
    chromosome.

  var.equal: If TRUE, the variances of the hidden states are
    restricted to be the same.

  max.dist: If the distance between two contiguous probes is larger
    than max.dist, they are considered independent. That is, the
    state of that spot does not affect the state of the other. If
    NULL (the default), the maximum Dist or maximum difference in Pos
    is taken.

  normal.reference: The value considered as the normal (no-change)
    level. Defaults to 0.

  burnin: Number of burn-in sweeps of the sampler.

  TOT: Number of sweeps kept after burn-in.

  k.max: Maximum number of hidden states that can be fitted.

  stat: See relabelStates.

  mu.alfa, mu.beta: Mean and standard deviation of the normal prior
    on the means of the hidden states. If NULL, the median and the
    range of the data are used. See Details.

  s1, s2: Standard deviations of the two stages of the delayed
    rejection birth move. If NULL, the prior mu.beta is used. See
    Details.

  init.mu: Initial values of mu for the chain. See Details.

  init.sigma.2: Initial values of sigma.2 for the chain. See Details.

  init.beta: Initial values of beta for the chain. See Details.

  prob.k: Prior distribution of the number of hidden states. If NULL,
    a uniform distribution for every model is assumed.

  jump.parameters: List with the parameters for the MCMC moves. See
    Details.

  start.k: Number of hidden states to start the chain from. If NULL,
    a random draw from prob.k is chosen.

  RJ: If TRUE, Reversible Jump is performed. If not, MCMC over a
    fixed number of hidden states is run. Note that if FALSE, most of
    the methods for extracting information won't work.

  NC: Number of coupled parallel chains. See Details.

  deltaT: Tempering parameter for the parallel chains; a value
    greater than 0. See Details.

Value:

  If y contains several arrays, an object of class RJaCGH.array is
  returned, with one component (of one of the classes described
  below) per array.
  model is "Genome", an object of class RJaCGH.Genome
  is returned, with components:
  pREC_A and pREC_S).model is "Chrom", an object of class RJaCGH.Chrom is
  returned, with the following components: RJaCGH (See below).model was
  specified and no Chrom was given, an object of class
  RJaCGH is returned, with components k, prob.b,
  prob.d, prob.s, prob.c, y, Pos,
  x, viterbi (but as there are no chroms, viterbi has a
  one component list ---analogous to having a single chromosome), as
  described before, plus a list with as many components of number of max
  hidden states fitted.  The length of k equals aproximately $2$
  times TOT, because in every sweep of the algorithm there are
  two tries to jump between models, so two times to explore the
  probability of the number of hidden states.  For every hidden markov
  model fitted, a list is returned with components:mu in the
    Metropolis-Hastings step.sigma.2 in the
    Metropolis-Hastings step.beta in the
    Metropolis-Hastings step.mu, sigma.2 and
  beta is random, because it depends on the number of times
  a particular model is visited and on the number of moves between
  models, because when we visit a new model we also explore the space
  of its means, variances and parameters of its transition functions.
Details:

  The prior distribution of the number of hidden states is given by
  prob.k. If NULL, a uniform distribution between 1 and k.max is
  used.

  The hidden states follow a normal distribution whose mean (mu)
  itself follows a normal distribution with mean mu.alfa and standard
  deviation mu.beta. If these are NULL, the median of the data and
  the range are used, respectively. The square root of the variance
  (sigma.2) of the hidden states follows a uniform distribution
  between 0 and maxVar.
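
  For illustration only (not a call into the package), the defaults
  described above could be computed as follows, assuming "the range"
  means diff(range(y)):

  y <- rnorm(230)                   # toy data
  mu.alfa <- median(y)              # default prior mean of mu
  mu.beta <- diff(range(y))         # default prior standard deviation of mu
  maxVar  <- diff(range(y))         # default bound, taken as the range of the data
  mu    <- rnorm(1, mu.alfa, mu.beta)  # one draw from the prior on the state means
  sigma <- runif(1, 0, maxVar)         # one draw for sqrt(sigma.2)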
  The model for the transition matrix is based on a random matrix
  beta whose diagonal is zero. The transition matrix, Q, has the
  form:

    Q[i,j] = exp(-beta[i,j] + beta[i,j]*x) /
             sum_j exp(-beta[i,j] + beta[i,j]*x)

  The prior distribution for beta is gamma with parameters 1, 1. The
  x are the distances between positions, normalized to lie between 0
  and 1 (x = diff(Pos) / max(diff(Pos))).
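
  A minimal sketch (not the package's internal code) of how a
  transition matrix could be built from beta and a normalized
  distance x, following the formula above:

  transition.matrix <- function(beta, x) {
    Q <- exp(-beta + beta * x)     # exp(-beta[i,j] + beta[i,j]*x), element-wise
    Q / rowSums(Q)                 # normalize every row so it sums to 1
  }
  beta <- matrix(rgamma(9, shape = 1, rate = 1), 3, 3)  # gamma(1, 1) draws
  diag(beta) <- 0                  # the diagonal of beta is zero
  transition.matrix(beta, x = 0.5) # transition probabilities at half the maximum distance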
  
  RJaCGH performs Markov chain Monte Carlo with Reversible Jump to
  sample from the posterior distribution.
  NC sets the number of chains we sample from in parallel. Each of
  them is tempered in order to escape from local maxima. The
  tempering parameter is deltaT, and it can be any value greater than
  0. Each chain is tempered according to:

    1 / (1 + deltaT * NC)
  
  Every sweep is performed for all chains and has three steps, plus
  another one common to all of them:
  
  1.- A Metropolis-Hastings move is used to update, for a fixed
  number of hidden states, mu, sigma.2 and beta. A symmetric normal
  proposal is used, with standard deviations sigma.tau.mu,
  sigma.tau.sigma.2 and sigma.tau.beta, respectively (a toy sketch of
  this kind of update is given after this list).
  
  2.- A transdimensional move is chosen, between birth (a new hidden
  state is sampled from the prior) and death (an existing hidden
  state is erased). Both moves are tried using delayed rejection.
  That is, if the move is rejected, it is given another try. The
  means for the new state are drawn from the priors, but the standard
  deviations of the two stages can be set with the parameters s1 and
  s2.
  
  3.- Another transdimensional move is performed: a split move
  (divide an existing state in two) or a combine move (join two
  adjacent states). The length of the split is sampled from a normal
  distribution with standard deviation tau.split.mu for mu and
  tau.split.beta for beta.

  4.- If NC is greater than 1, a swap move is tried to exchange
  information between two of the coupled parallel chains.
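
  A toy illustration of the kind of symmetric random-walk
  Metropolis-Hastings update described in step 1 (a generic sketch,
  not the package's internal implementation):

  set.seed(1)
  y <- rnorm(50, mean = 1)          # data assigned to one hidden state
  sigma.tau.mu <- 0.05              # proposal standard deviation
  log.post <- function(mu) {
    sum(dnorm(y, mu, 1, log = TRUE)) +   # likelihood with known unit variance
      dnorm(mu, 0, 10, log = TRUE)       # vague normal prior on mu
  }
  mu <- 0
  for (i in 1:1000) {
    mu.prop <- mu + rnorm(1, 0, sigma.tau.mu)   # symmetric normal proposal
    if (log(runif(1)) < log.post(mu.prop) - log.post(mu))
      mu <- mu.prop                             # accept; otherwise keep current mu
  }
  mu                                            # should end up close to 1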
  
  jump.parameters must be a list with the parameters for the moves.
  It must have components sigma.tau.mu, sigma.tau.sigma.2 and
  sigma.tau.beta; these are vectors of length k.max. tau.split.mu and
  tau.split.beta are vectors of length 1. If any of them is NULL, a
  call to the internal function get.jump() is made to find 'good'
  values.
  A relabelling of hidden states is performed to match biological
  states. See details in relabelStates.
  
  The initial values of the chain are drawn from an overdispersed
  distribution. One can start the chain at a given point with the
  parameter start.k (the model to start from, 1, ..., k.max) and the
  initial values init.mu, init.sigma.2 (vectors of dimension start.k)
  and init.beta (a matrix of positive values with start.k rows and
  start.k columns). The diagonal must be zero.
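  For instance, a possible set of explicit starting values for a
  chain with 3 hidden states (the values themselves are arbitrary)
  could be built as follows and passed through the start.k, init.mu,
  init.sigma.2 and init.beta arguments:

  start.k <- 3
  init.mu <- c(-0.5, 0, 0.5)                # one mean per starting state
  init.sigma.2 <- rep(0.1, start.k)         # one variance per starting state
  init.beta <- matrix(1, start.k, start.k)  # positive off-diagonal entries
  diag(init.beta) <- 0                      # the diagonal must be zero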
  
References:

  Green, P.J. and Mira, A. (2001). Delayed rejection in reversible
  jump Metropolis-Hastings. Biometrika, 88 (4), 1035-1053.

  Geyer, C.J. (1991). Markov chain Monte Carlo maximum likelihood.
  Proceedings of the 23rd Symposium on the Interface, 156-163.
See also:

  summary.RJaCGH, states, modelAveraging, plot.RJaCGH, trace.plot

Examples:

y <- c(rnorm(100, 0, 1), rnorm(10, -3, 1), rnorm(20, 3, 1),
       rnorm(100, 0, 1))
Pos <- sample(x=1:500, size=230, replace=TRUE)
Pos <- cumsum(Pos)
Chrom <- rep(1:23, rep(10, 23))
jp <- list(sigma.tau.mu=rep(0.05, 4), sigma.tau.sigma.2=rep(0.03, 4),
           sigma.tau.beta=rep(0.07, 4), tau.split.mu=0.1, tau.split.beta=0.1)
fit.chrom <- RJaCGH(y=y, Pos=Pos, Chrom=Chrom, model="Chrom",
                    burnin=10, TOT=1000, k.max = 4,
                    jump.parameters=jp)
## RJ results for chromosome 5
table(fit.chrom[["array1"]][[5]]$k)
fit.genome <- RJaCGH(y=y, Pos=Pos, Chrom=Chrom, model="Genome",
                     burnin=100, TOT=1000, jump.parameters=jp, k.max = 4)
## Summary of the genome-wide fit:
summary(fit.genome)