Usage:

    trust.optim(x, fn, gr, hs=NULL, method=c("SR1","BFGS","Sparse"),
                control=list(), ...)

Arguments:

fn: An R function that takes x as its first argument and returns the value of the objective function at x. Note that the optimizer will minimize fn (see function.scale.factor under control).

gr: An R function that takes x as its first argument and returns the gradient of fn at x. Naturally, the length of the gradient must be the same as the length of x.

...: Additional arguments to be passed to fn, gr and hs. All arguments must be named.

Control parameters include prec (the target precision for declaring convergence) and maxit (the maximum number of iterations).

Details:

To use the sparseHessianFD package, you need to provide the row and column indices of the non-zero elements of the lower triangle of the Hessian. This structure cannot change during the course of the trust.optim routine. Also, you really should provide an analytic gradient: sparseHessianFD computes finite differences of the gradient, so if the gradient itself is finite-differenced, so much error is propagated through that the Hessians are nearly worthless close to the optimum.
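A minimal sketch of the call interface described above, minimizing a simple quadratic with an analytic gradient and the BFGS quasi-Newton update (the objective, gradient, and matrix A here are illustrative, not from the package):

```r
## Minimal sketch: minimize 0.5 * x' A x with an analytic gradient.
library(trustOptim)

fn <- function(x, A) as.numeric(0.5 * crossprod(x, A %*% x))  # objective
gr <- function(x, A) as.numeric(A %*% x)                      # analytic gradient

A  <- diag(c(1, 10, 100))   # ill-conditioned quadratic
x0 <- rep(1, 3)             # starting values

res <- trust.optim(x0, fn = fn, gr = gr, method = "BFGS",
                   control = list(prec = 1e-7, maxit = 500),
                   A = A)   # extra arguments passed through ... must be named
```

Note that the extra argument A is passed by name through ..., as the Arguments section requires, and reaches both fn and gr.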
Of course, sparseHessianFD is useful only for the Sparse method. That said, one may still get decent performance using these routines even if the Hessian is not sparse, provided the problem is not too large: just treat the Hessian as if it were sparse.
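A sketch of supplying a sparsity pattern for the Sparse method. The objective here is a hypothetical separable function (so the Hessian is diagonal), and the sketch assumes a sparseHessianFD constructor that takes the row and column indices of the non-zero lower-triangular elements and exposes a $hessian(x) method; check the sparseHessianFD documentation for the exact interface in your version.

```r
## Sketch: Sparse method with a fixed sparsity pattern (assumed API).
library(trustOptim)
library(sparseHessianFD)

fn <- function(x) sum(0.25 * x^4 + 0.5 * x^2)  # separable objective
gr <- function(x) x^3 + x                      # analytic gradient (required)

x0   <- rnorm(5)
rows <- 1:5   # diagonal Hessian: non-zeros at (i, i), lower triangle only
cols <- 1:5   # this pattern must not change during the optimization

## Assumed constructor; builds finite differences of the analytic gradient.
obj <- sparseHessianFD(x0, fn = fn, gr = gr, rows = rows, cols = cols)

res <- trust.optim(x0, fn = fn, gr = gr,
                   hs = function(x) obj$hessian(x),
                   method = "Sparse")
```

The same pattern applies to a Hessian that is not truly sparse: list every element of the lower triangle in rows and cols and the Sparse method will simply treat the dense Hessian as a fully populated sparse matrix.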