
Estimates the optimal control using the dynamic elastic net.
optimal_control_gradient_descent(
alphaStep,
armijoBeta,
x0,
parameters,
alpha1,
alpha2,
measData,
constStr,
SD,
modelFunc,
measFunc,
modelInput,
optW,
origAUC,
maxIteration,
plotEsti,
conjGrad,
eps,
nnStates,
verbose
)
alphaStep: starting value of the step size for the gradient descent; it is adapted by a backtracking algorithm to minimize the cost function
armijoBeta: scaling factor for alphaStep used to find an approximately optimal value for the step size
x0: initial state of the ODE system
parameters: parameters of the ODE system
alpha1: scalar weight of the L1 cost term
alpha2: scalar weight of the L2 cost term
measData: measured values of the experiment
constStr: a string that represents constraints; it can be used to calculate a hidden input for a component whose gradient is zero
SD: standard deviation of the experiment; leave empty if unknown; the matrix should contain the time steps in the first column
modelFunc: function that describes the ODE system of the model (a sketch of a possible model and measurement function is given after this list)
measFunc: function that maps the states to the outputs
modelInput: a dataset that describes the external input of the system
optW: vector indicating at which knots of the network the algorithm should estimate the hidden inputs
origAUC: AUCs of the first optimization; used internally by the algorithm
maxIteration: upper bound on the number of iterations
plotEsti: boolean that controls whether the current estimates should be plotted
conjGrad: boolean that selects the conjugate gradient method instead of normal steepest descent
eps: criterion for stopping the algorithm
nnStates: bit vector indicating which states should be non-negative
verbose: boolean indicating whether console output should be produced to display the gradient descent steps
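The model and measurement functions are passed as plain R functions. The following is a minimal sketch only; the state names, parameter names, and the deSolve-style signature of modelFunc are assumptions for illustration and may differ from what the package actually expects.

# Hypothetical two-state ODE model in deSolve style (assumed signature)
modelFunc <- function(t, x, parameters) {
  with(as.list(c(x, parameters)), {
    dx1 <- -k1 * x1
    dx2 <-  k1 * x1 - k2 * x2
    list(c(dx1, dx2))
  })
}

# Hypothetical measurement function: only the second state is observed
measFunc <- function(x) {
  cbind(y1 = x[, 2])
}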
Returns a list containing the estimated hidden inputs, the AUCs, the estimated states and resulting measurements, and the cost function.
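A call could then look as follows. Every argument value below is a placeholder chosen for illustration, and the layout of measData, SD, and modelInput (time steps in the first column) follows the argument descriptions above; it is a sketch, not a verified invocation.

# Hypothetical data: time steps in the first column
times      <- seq(0, 10, by = 0.5)
measData   <- cbind(t = times, y1 = exp(-0.2 * times))
SD         <- cbind(t = times, y1 = rep(0.05, length(times)))  # leave empty if unknown
modelInput <- cbind(t = times, u = rep(0, length(times)))      # no external input

res <- optimal_control_gradient_descent(
  alphaStep    = 100,
  armijoBeta   = 0.5,
  x0           = c(x1 = 1, x2 = 0),
  parameters   = c(k1 = 0.3, k2 = 0.1),
  alpha1       = 0,
  alpha2       = 0.01,
  measData     = measData,
  constStr     = "",          # no constraints
  SD           = SD,
  modelFunc    = modelFunc,
  measFunc     = measFunc,
  modelInput   = modelInput,
  optW         = c(1, 1),     # estimate hidden inputs on both states
  origAUC      = NULL,
  maxIteration = 100,
  plotEsti     = FALSE,
  conjGrad     = TRUE,
  eps          = 0.1,
  nnStates     = c(0, 0),     # no non-negativity constraints
  verbose      = TRUE
)

# res is a list with the estimated hidden inputs, AUCs, estimated states,
# resulting measurements, and cost function values
str(res)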