smart_ind
creates a somewhat random mean-parametrized parameter vector of a GMVAR model fairly close to a given parameter vector. The result may not be stationary.
smart_ind(p, M, d, params, constraints = NULL, accuracy = 1,
which_random = numeric(0), mu_scale, mu_scale2, omega_scale,
ar_scale = 1)
p: a positive integer specifying the autoregressive order of the model.
M: a positive integer specifying the number of mixture components.
d: the number of time series in the system.
params: a real valued vector specifying the parameter values. Should have the size \(((M(pd^2+d+d(d+1)/2+1)-1)x1)\) and the form \(\theta = (\upsilon_{1},...,\upsilon_{M},\alpha_{1},...,\alpha_{M-1})\), where:
\(\upsilon_{m} = (\phi_{m,0},\phi_{m},\sigma_{m})\),
\(\phi_{m} = (vec(A_{m,1}),...,vec(A_{m,p}))\)
and \(\sigma_{m} = vech(\Omega_{m})\), \(m=1,...,M\).
Above, \(\phi_{m,0}\) is the intercept parameter, \(A_{m,i}\) denotes the \(i\):th coefficient matrix of the \(m\):th mixture component, \(\Omega_{m}\) denotes the error term covariance matrix of the \(m\):th mixture component, and \(\alpha_{m}\) is the mixing weight parameter. If parametrization=="mean", just replace each \(\phi_{m,0}\) with the regimewise mean \(\mu_{m}\).
\(vec()\) is the vectorization operator that stacks the columns of a given matrix into a vector. \(vech()\) stacks the columns of a given matrix from the principal diagonal downwards (including the elements on the diagonal) into a vector. The notation is in line with the cited article by KMS (2016).
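The structure of the parameter vector can be illustrated by assembling one by hand. This is a minimal sketch for a toy model with p = 1, M = 2, d = 2; all numeric values are made up for illustration, and the vech helper is a hypothetical stand-in defined locally.

```r
p <- 1; M <- 2; d <- 2

A1 <- matrix(c(0.3, 0.0, 0.1, 0.2), nrow = d)     # A_{1,1}
A2 <- matrix(c(0.2, 0.1, 0.0, 0.4), nrow = d)     # A_{2,1}
Omega1 <- matrix(c(1.0, 0.2, 0.2, 1.0), nrow = d) # Omega_1
Omega2 <- matrix(c(2.0, 0.5, 0.5, 1.5), nrow = d) # Omega_2

# vech: stacks columns from the principal diagonal downwards
vech <- function(X) X[lower.tri(X, diag = TRUE)]

upsilon1 <- c(c(0.5, 0.5),    # phi_{1,0} (or mu_1 if mean-parametrized)
              as.vector(A1),  # vec(A_{1,1})
              vech(Omega1))   # vech(Omega_1)
upsilon2 <- c(c(-0.2, 0.3), as.vector(A2), vech(Omega2))
alpha1 <- 0.6                 # mixing weight of the first regime

params <- c(upsilon1, upsilon2, alpha1)
length(params) == M*(p*d^2 + d + d*(d + 1)/2 + 1) - 1  # TRUE
```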
constraints: a size \((Mpd^2 x q)\) constraint matrix \(C\) specifying general linear constraints on the autoregressive parameters. We consider constraints of the form
\((\phi_{1},...,\phi_{M}) = C \psi\),
where \(\phi_{m} = (vec(A_{m,1}),...,vec(A_{m,p}))\) \((pd^2 x 1)\), \(m=1,...,M\),
contains the coefficient matrices and \(\psi\) \((q x 1)\) contains the constrained parameters.
For example, to restrict the AR parameters to be the same for all regimes, set \(C = [I:...:I]'\) \((Mpd^2 x pd^2)\), where I = diag(p*d^2).
Ignore (or set to NULL) if linear constraints should not be employed.
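The same-AR-parameters example above can be sketched directly in R. This is a minimal illustration for p = 1, M = 2, d = 2; the value of psi is made up.

```r
p <- 1; M <- 2; d <- 2
I <- diag(p*d^2)   # (pd^2 x pd^2) identity block
C <- rbind(I, I)   # M stacked copies: (Mpd^2 x pd^2)

# psi (q x 1) with q = pd^2 holds the common vec(A_{m,1}) for all regimes
psi <- c(0.3, 0.0, 0.1, 0.2)
phi_all <- C %*% psi  # (phi_1', phi_2')' with identical AR parameters
```

For general M, the stacking can be written as `do.call(rbind, replicate(M, I, simplify = FALSE))`.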
accuracy: a positive real number adjusting how close to the given parameter vector the returned individual should be. A larger number means greater accuracy. Read the source code for details.
which_random: a vector with length between 1 and M specifying the mixture components that should be random instead of close to the given parameter vector. If constraints are employed, this does not apply to the AR coefficients. Default is numeric(0).
mu_scale: a size \((dx1)\) vector defining the means of the normal distributions from which each mean parameter \(\mu_{m}\) is drawn in random mutations. Default is colMeans(data). Note that mean-parametrization is always used for optimization in GAfit, even when parametrization=="intercept", but the input (in initpop) and output (return value) parameter vectors may be intercept-parametrized.
mu_scale2: a size \((dx1)\) strictly positive vector defining the standard deviations of the normal distributions from which each mean parameter \(\mu_{m}\) is drawn in random mutations. Default is 2*sd(data[,i]), i=1,...,d.
omega_scale: a size \((dx1)\) strictly positive vector specifying the scale and variability of the random covariance matrices in random mutations. The covariance matrices are drawn from a (scaled) Wishart distribution. The expected values of the random covariance matrices are diag(omega_scale). The standard deviations of the diagonal elements are sqrt(2/d)*omega_scale[i], and for the non-diagonal elements they are sqrt(1/d*omega_scale[i]*omega_scale[j]). Note that for d>4 this scale may need to be chosen carefully. The default in GAfit is var(stats::ar(data[,i], order.max=10)$resid, na.rm=TRUE), i=1,...,d.
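The GAfit default for omega_scale can be reproduced directly from the formula above. This is a minimal sketch on simulated placeholder data (the series in `data` are made up; substitute your own):

```r
set.seed(1)
d <- 2
data <- matrix(rnorm(200*d), ncol = d)  # placeholder series for illustration

# residual variance of a univariate AR fit (order selected up to 10), per series
omega_scale <- vapply(1:d, function(i) {
  var(stats::ar(data[, i], order.max = 10)$resid, na.rm = TRUE)
}, numeric(1))
omega_scale  # expected diagonal of the random Wishart covariance matrices
```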
ar_scale: a positive real number adjusting how large AR parameter values are typically generated in some random mutations. See the function random_coefmats2 for details. This is ignored when estimating constrained models.
Returns a somewhat random mean-parametrized parameter vector that has the form \(\theta = (\upsilon_{1},...,\upsilon_{M},\alpha_{1},...,\alpha_{M-1})\), where:
\(\upsilon_{m} = (\mu_{m},\phi_{m},\sigma_{m})\),
\(\phi_{m} = (vec(A_{m,1}),...,vec(A_{m,p}))\)
and \(\sigma_{m} = vech(\Omega_{m})\), \(m=1,...,M\).
No argument checks!
Kalliovirta L., Meitz M. and Saikkonen P. 2016. Gaussian mixture vector autoregression. Journal of Econometrics, 192, 485-498.