Usage

optimCORR(points, candi, iterations = 100, covars, strata.type = "area",
  use.coords = FALSE, x.max, x.min, y.max, y.min,
  acceptance = list(initial = 0.99, cooling = iterations / 10),
  stopping = list(max.count = iterations / 10), plotit = FALSE,
  track = FALSE, boundary, progress = TRUE, verbose = FALSE,
  greedy = FALSE, weights = NULL, nadir = NULL, utopia = NULL)

objCORR(points, candi, covars, strata.type = "area", use.coords = FALSE)
Arguments

points: Integer value, or data frame (or matrix) with the point coordinates. If points is an integer value, it defines the number of points that should be randomly sampled from candi to form the starting sample configuration. If points is a data frame or matrix, it is used as the starting sample configuration itself.

candi: Data frame or matrix with the candidate locations for the jittered points. candi must have two columns in the following order: [, "x"], the projected x-coordinates, and [, "y"], the projected y-coordinates.

covars: Data frame or matrix with the covariates in the columns.

iterations: Integer. The maximum number of iterations that should be used for the optimization. Defaults to iterations = 100.

strata.type: Character value setting the type of marginal sampling strata used to discretize numeric covariates. Available options are "area", for equal-area strata, and "range", for equal-range strata. Defaults to strata.type = "area".

use.coords: Logical value. Should the spatial coordinates be used as covariates? Defaults to use.coords = FALSE.

x.max, x.min, y.max, y.min: Numeric values. The minimum and maximum quantities of random noise to be added to the projected x- and y-coordinates when a point is jittered.

acceptance: List with two named sub-arguments: initial -- numeric value between 0 and 1 defining the initial acceptance probability, and cooling -- numeric value defining the exponential factor by which the acceptance probability decreases. Defaults to acceptance = list(initial = 0.99, cooling = iterations / 10).

stopping: List with one named sub-argument: max.count -- integer value defining the maximum allowable number of iterations without improvement of the objective function value. Defaults to stopping = list(max.count = iterations / 10).

plotit: Logical value. Should the evolution of the energy state be plotted? Defaults to plotit = FALSE.

track: Logical value. Should the evolution of the energy state be recorded and returned with the result? If track = FALSE (the default), only the starting and ending energy state values are returned with the result.

boundary: SpatialPolygon with the boundary of the spatial domain. If missing, it is estimated from candi.

progress: Logical value. Should a progress bar be displayed? Defaults to progress = TRUE.

verbose: Logical value. Should informative messages be printed? Defaults to verbose = FALSE.

greedy: Logical value. Should the optimization accept only better system configurations? Defaults to greedy = FALSE. (experimental)

weights: List with named sub-arguments holding the weights of the objective functions in a multi-objective optimization problem. Defaults to weights = NULL.

nadir: List with named sub-arguments: sim -- the number of simulations that should be used to estimate the nadir point, and seeds -- vector defining the random seeds for each simulation; user -- a list of user-defined values named after the respective objective functions to which they apply; abs -- logical for calculating the nadir point internally (not implemented yet). Defaults to nadir = NULL.

utopia: List with named sub-arguments: user -- a list of user-defined values named after the respective objective functions to which they apply; abs -- logical for calculating the utopia point internally (not implemented yet). Defaults to utopia = NULL.

Value

optimCORR returns a matrix: the optimized sample configuration.

objCORR returns a numeric value: the energy state of the sample configuration, that is, the objective function value.
Details

The second method consists of using a finite set of candidate locations for the perturbed points. A finite set of candidate locations is created by discretizing the spatial domain, that is, by creating a fine grid of points that serve as candidate locations for the perturbed points. This is the only method currently implemented in the spsann package.
Using a finite set of candidate locations has one important inconvenience. When a point is selected to be jittered, its new location may already be occupied by another point. If this happens, another location is iteratively sought for as many times as there are points in points: the more points there are in points, the more likely it is that the new location is already occupied by another point. If a solution is not found, the point selected to be jittered is kept in its original location.
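The collision-handling loop described above can be sketched in a few lines of R. All names here (jitter_with_retry and its arguments) are hypothetical illustrations, not the package's internal implementation:

```r
# Sketch of jittering with a finite candidate set: if the sampled
# candidate location is already occupied, retry as many times as there
# are points; on failure, keep the point at its original location.
jitter_with_retry <- function(points, candi, sel) {
  for (try in seq_len(nrow(points))) {
    new.idx <- sample(nrow(candi), 1)  # candidate location for point 'sel'
    occupied <- any(points[, "x"] == candi[new.idx, "x"] &
                    points[, "y"] == candi[new.idx, "y"])
    if (!occupied) {
      points[sel, c("x", "y")] <- candi[new.idx, c("x", "y")]
      return(points)
    }
  }
  points  # no free location found: keep the original configuration
}
```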
A more elegant method can be defined using a finite set of candidate locations coupled with a form of two-stage random sampling as implemented in spsample. Because the candidate locations are placed on a finite regular grid, they can be seen as the centre nodes of a finite set of grid cells (or pixels of a raster image). In the first stage, one of the grid cells is selected with replacement. In the second stage, a location is selected within the boundaries of the selected grid cell using simple random sampling.
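The two-stage idea can be sketched as follows, assuming a grid of cell centres candi with cell spacing res (the names are illustrative, not the package's API):

```r
# Two-stage random sampling on a regular grid of candidate locations:
# stage 1 selects a grid cell (with replacement); stage 2 selects a
# random location within the boundaries of that cell.
two_stage_sample <- function(candi, res) {
  cell <- candi[sample(nrow(candi), 1), , drop = FALSE]  # stage 1
  x <- cell[, "x"] + runif(1, -res / 2, res / 2)         # stage 2:
  y <- cell[, "y"] + runif(1, -res / 2, res / 2)         # uniform in cell
  c(x = unname(x), y = unname(y))
}
```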
The weighted-sum method is affected by the relative magnitude of the different objective function values. The objective functions implemented in the spsann package have different units and orders of magnitude, so the function with the largest values can numerically dominate the optimization. A solution to avoid this numerical dominance is to transform the objective functions so that they are constrained to the same approximate range of values. Several function-transformation methods can be used; the upper-bound and upper-lower-bound approaches are described below.
Using the upper-bound approach, the user needs to specify only the nadir point, while the utopia point is set to zero. The upper-bound approach aims at equalizing only the upper bounds of the objective functions. The resulting function values will always be smaller than or equal to 1.
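As a worked illustration (not the package's internal code), the upper-bound transformation divides each objective function value by its nadir value, after which a weighted sum can be computed on comparable scales:

```r
# Upper-bound transformation: f / nadir lies in [0, 1] whenever
# 0 <= f <= nadir, with the utopia point implicitly set to zero.
weighted_sum <- function(f, nadir, weights) {
  stopifnot(length(f) == length(nadir), length(f) == length(weights))
  sum(weights * (f / nadir))
}

# Two objectives on very different scales become comparable after scaling.
weighted_sum(f = c(2, 50), nadir = c(4, 100), weights = c(0.5, 0.5))  # 0.5
```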
Sometimes, the absolute maximum and minimum values of an objective function can be calculated exactly. This seems not to be the case for the objective functions implemented in the spsann package.
The upper-lower-bound approach with the Pareto maximum and minimum values is the most elegant solution for transforming the objective functions, but it is also the most time-consuming. It works as follows:
For example, consider a MOOP composed of two objective functions, A and B. The minimum absolute value of function A is obtained when the sample configuration is optimized with respect to function A: this is the Pareto minimum of function A. Conversely, the maximum absolute value of function A is obtained when the sample configuration is optimized with respect to function B: this is the Pareto maximum of function A. The same logic applies to function B.
When the set of covariates includes factor covariates, all numeric covariates are transformed into factor covariates. The factor levels are defined using the marginal sampling strata created from one of the two methods available (equal-area or equal-range strata).
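The difference between the two stratification methods can be seen with base R alone: equal-area strata place break points at sample quantiles, while equal-range strata split the covariate range into intervals of equal width. This is an illustrative sketch, not the package's internal code:

```r
# Marginal sampling strata for a skewed numeric covariate.
covar <- c(1, 2, 2, 3, 10, 20, 40, 80)
n.strata <- 4

# Equal-area: breaks at sample quantiles, so strata hold similar numbers
# of observations.
breaks.area <- quantile(covar, probs = seq(0, 1, length.out = n.strata + 1))

# Equal-range: equally spaced breaks, so strata span equal widths of
# covariate values.
breaks.range <- seq(min(covar), max(covar), length.out = n.strata + 1)

table(cut(covar, breaks.area, include.lowest = TRUE))
table(cut(covar, breaks.range, include.lowest = TRUE))
```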
The association between two factor covariates is measured using Cramér's v, a descriptive statistic that ranges from $0$ to $+1$: the closer to $+1$, the stronger the association between the two factor covariates. The main weakness of Cramér's v is that, while Pearson's r shows both the degree and the direction of the association between two covariates (negative or positive), Cramér's v measures only the degree of association (weak or strong).
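Cramér's v can be computed from the chi-squared statistic of the contingency table of two factors. The following is a self-contained sketch (the package's own function for this, cramer, is listed under See Also):

```r
# Cramér's v = sqrt(chi^2 / (n * (k - 1))), where n is the number of
# observations and k the smaller dimension of the contingency table.
cramers_v <- function(a, b) {
  tab <- table(a, b)
  chi2 <- suppressWarnings(chisq.test(tab, correct = FALSE))$statistic
  k <- min(nrow(tab), ncol(tab))
  as.numeric(sqrt(chi2 / (sum(tab) * (k - 1))))
}

# Perfectly associated factors give v = 1.
a <- factor(c("low", "low", "high", "high"))
b <- factor(c("dry", "dry", "wet", "wet"))
cramers_v(a, b)  # 1
```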
References

intamap: Procedures for Automated Interpolation. R package version 1.1-10.

van Groenigen, J.-W. Constrained optimization of spatial sampling: a geostatistical approach. Wageningen: Wageningen University, p. 148, 1999.
package version 1.1-10.van Groenigen, J.-W. Constrained optimization of spatial sampling: a geostatistical approach. Wageningen: Wageningen University, p. 148, 1999.
Cramér, H. Mathematical methods of statistics. Princeton: Princeton University Press, p. 575, 1946.
Everitt, B. S. The Cambridge dictionary of statistics. Cambridge: Cambridge University Press, p. 432, 2006.
Hyndman, R. J.; Fan, Y. Sample quantiles in statistical packages. The American Statistician, v. 50, p. 361-365, 1996.
Minasny, B.; McBratney, A. B. A conditioned Latin hypercube method for sampling in the presence of ancillary information. Computers & Geosciences, v. 32, p. 1378-1388, 2006.
Minasny, B.; McBratney, A. B. Conditioned Latin Hypercube Sampling for calibrating soil sensor data to soil properties. Chapter 9. Viscarra Rossel, R. A.; McBratney, A. B.; Minasny, B. (Eds.) Proximal Soil Sensing. Amsterdam: Springer, p. 111-119, 2010.
Mulder, V. L.; de Bruin, S.; Schaepman, M. E. Representing major soil variability at regional scale by constrained Latin hypercube sampling of remote sensing data. International Journal of Applied Earth Observation and Geoinformation, v. 21, p. 301-310, 2013.
Roudier, P.; Beaudette, D.; Hewitt, A. A conditioned Latin hypercube sampling algorithm incorporating operational constraints. 5th Global Workshop on Digital Soil Mapping. Sydney, p. 227-231, 2012.
Arora, J. Introduction to optimum design. Waltham: Academic Press, p. 896, 2011.
Marler, R. T.; Arora, J. S. Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization, v. 26, p. 369-395, 2004.
Marler, R. T.; Arora, J. S. Function-transformation methods for multi-objective optimization. Engineering Optimization, v. 37, p. 551-570, 2005.
Marler, R. T.; Arora, J. S. The weighted sum method for multi-objective optimization: new insights. Structural and Multidisciplinary Optimization, v. 41, p. 853-862, 2009.
See Also

clhs, cramer
Examples

require(sp)
data(meuse.grid)
candi <- meuse.grid[, 1:2]
covars <- meuse.grid[, 5]
set.seed(2001)
# This example takes more than 5 seconds to run!
res <- optimCORR(points = 100, candi = candi, covars = covars,
use.coords = TRUE)
objSPSANN(res) # 0.06386069
objCORR(points = res, candi = candi, covars = covars, use.coords = TRUE)
# Random sample
pts <- sample(1:nrow(candi), 5)
pts <- cbind(pts, candi[pts, ])
objCORR(points = pts, candi = candi, covars = covars, use.coords = TRUE)