MoTBFs (version 1.2)

mop.learning: Fitting Polynomial Models

Description

These functions fit mixtures of polynomials (MOPs). Least squares optimization is used to minimize the quadratic error between the empirical cumulative distribution function and the estimated one.
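The underlying idea can be illustrated with base R alone; the following is a rough sketch under simplified assumptions (an ordinary lm() fit of a degree-5 polynomial to the empirical CDF), not the optimization routine used by the package:

x <- sort(rnorm(500))
Fn <- ecdf(x)(x)                          # empirical CDF evaluated at the data points
fit <- lm(Fn ~ poly(x, 5, raw = TRUE))    # least-squares polynomial fit to the CDF
b <- coef(fit)[-1]                        # polynomial coefficients (intercept dropped)
dens <- function(t)                       # estimated density = derivative of the fitted CDF
  sapply(t, function(v) sum(b * (1:5) * v^(0:4)))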

Usage

mop.learning(X, nparam, domain)

bestMOP(X, domain, maxParam = NULL)

Arguments

X

A "numeric" data vector.

nparam

Number of parameters of the function.

domain

A "numeric" vector giving the range over which the function is defined.

maxParam

A "numeric" value indicating the maximum number of coefficients in the function. By default it is NULL; if not, the output is the function that achieves the best BIC score with at most this number of parameters.

Value

mop.learning() returns a list with the following elements:

Function

An "motbf" object of the 'mop' subclass.

Subclass

'mop'.

Domain

The range over which the function is defined so that it is a legal density function.

Iterations

The number of iterations needed by the optimization routine to minimize the error.

Time

The CPU time spent solving the problem.

bestMOP() returns a list including the polynomial function with the best BIC score, the number of parameters, the best BIC value, and an array containing the BIC values of the evaluated functions.

Details

mop.learning(): The returned value $Function is the only element printed by default; it contains the mathematical expression of the fitted MOP. The names of the remaining elements can be listed with attributes(), and each element can be extracted with $. The summary of the returned object also shows all these elements.
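For instance, with a fitted object fx as in the Examples below, the elements can be inspected as described:

fx                 # prints only $Function, the fitted expression
attributes(fx)     # names of the remaining elements
fx$Domain          # any element can be extracted with $
summary(fx)        # shows all the elements at once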

bestMOP(): The first element of the returned value, $bestPx, contains the output of mop.learning() for the number of parameters that achieves the best score, where the Bayesian information criterion (BIC) is used to penalize the number of parameters. The search evaluates the next two candidate functions; if the BIC does not improve, the function with the best BIC found so far is returned.
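The stopping rule can be sketched with hypothetical BIC values (higher is better); this is for illustration only, not the package's internal code:

bics <- c(-1530, -1495, -1480, -1482, -1485)   # hypothetical BIC scores, one per candidate
best <- 1
for (i in seq_along(bics)) {
  if (bics[i] > bics[best]) best <- i          # keep the best candidate seen so far
  if (i - best == 2) break                     # two evaluations without improvement: stop
}
best                                           # candidate 3 is selected in this example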

See Also

univMoTBF, a complete function for learning MOPs that includes additional options.

Examples

# NOT RUN {
## 1. EXAMPLE 
data <- rnorm(1000)

## MOP with a fixed number of parameters
fx <- mop.learning(data, nparam=7, domain=range(data))
fx
hist(data, prob=TRUE, main="")
plot(fx, col=2, xlim=range(data), add=TRUE)

## Best MOP in terms of BIC
fMOP <- bestMOP(data, domain=range(data))
attributes(fMOP)
fMOP$bestPx
hist(data, prob=TRUE, main="")
plot(fMOP$bestPx, col=2, xlim=range(data), add=TRUE)

## 2. EXAMPLE
data <- rbeta(4000, shape1=1/2, shape2=1/2)

## MOP with a fixed number of parameters
fx <- mop.learning(data, nparam=6, domain=range(data))
fx
hist(data, prob=TRUE, main="")
plot(fx, col=2, xlim=range(data), add=TRUE)

## Best MOP in terms of BIC
fMOP <- bestMOP(data, domain=range(data), maxParam=6)
attributes(fMOP)
fMOP$bestPx
attributes(fMOP$bestPx)
hist(data, prob=TRUE, main="")
plot(fMOP$bestPx, col=2, xlim=range(data), add=TRUE)
# }
