
spsann (version 1.0.1)

optimCORR: Optimization of sample configurations for spatial trend identification and estimation

Description

Optimize a sample configuration for spatial trend identification and estimation. A criterion is defined so that the sample reproduces the bivariate association/correlation between the covariates (CORR).

Usage

optimCORR(points, candi, iterations = 100, covars, strata.type = "area",
  use.coords = FALSE, x.max, x.min, y.max, y.min,
  acceptance = list(initial = 0.99, cooling = iterations / 10),
  stopping = list(max.count = iterations / 10), plotit = FALSE,
  track = FALSE, boundary, progress = TRUE, verbose = FALSE,
  greedy = FALSE, weights = NULL, nadir = NULL, utopia = NULL)

objCORR(points, candi, covars, strata.type = "area", use.coords = FALSE)

Arguments

points
Integer value, integer vector, data frame or matrix. If points is an integer value, it defines the number of points that should be randomly sampled from candi to form the starting system configuration. If points is an integer vector, it contains the row indexes of candi that correspond to the points that form the starting system configuration. If points is a data frame or matrix, it must have three columns in the following order: [, "id"] the row indexes of candi that correspond to each point, [, "x"] the projected x-coordinates, and [, "y"] the projected y-coordinates.
candi
Data frame or matrix with the candidate locations for the perturbed points. candi must have two columns in the following order: [, "x"] the projected x-coordinates, and [, "y"] the projected y-coordinates.
iterations
Integer. The maximum number of iterations that should be used for the optimization. Defaults to iterations = 100.
covars
Data frame or matrix with the covariates in the columns.
strata.type
Character value setting the type of stratification that should be used to create the marginal sampling strata (or factor levels) for the numeric covariates. Available options are "area", for equal-area, and "range", for equal-range strata. Defaults to strata.type = "area".
use.coords
Logical value. Should the geographic coordinates be used as covariates? Defaults to use.coords = FALSE.
x.max,x.min,y.max,y.min
Numeric value. The minimum and maximum quantity of random noise to be added to the projected x- and y-coordinates. The minimum quantity should be equal to, at least, the minimum distance between two neighbouring candidate locations. The units are the same as those of the projected x- and y-coordinates. If missing, they are estimated from candi.
acceptance
List with two named sub-arguments: initial -- numeric value between 0 and 1 defining the initial acceptance probability, and cooling -- a numeric value defining the exponential factor by which the acceptance probability decreases at each iteration. Defaults to acceptance = list(initial = 0.99, cooling = iterations / 10).
stopping
List with one named sub-argument: max.count -- integer value defining the maximum allowable number of iterations without improvement of the objective function value. Defaults to stopping = list(max.count = iterations / 10).
plotit
Logical for plotting the optimization results. This includes a) the progress of the objective function values and acceptance probabilities, and b) the original points, the perturbed points and the progress of the maximum perturbation in the x- and y-coordinates. Defaults to plotit = FALSE.
track
Logical value. Should the evolution of the energy state and acceptance probability be recorded and returned with the result? If track = FALSE (the default), only the starting and ending energy state values are returned with the result.
boundary
SpatialPolygon. The boundary of the spatial domain. If missing, it is estimated from candi.
progress
Logical for printing a progress bar. Defaults to progress = TRUE.
verbose
Logical for printing messages about the progress of the optimization. Defaults to verbose = FALSE.
greedy
Logical value. Should the optimization be done using a greedy algorithm, that is, accepting only better system configurations? Defaults to greedy = FALSE. (experimental)
weights
List with named sub-arguments. The weights assigned to each one of the objective functions that form the multi-objective optimization problem (MOOP). They must be named after the respective objective function to which they apply. The weights must be equal to or larger than 0 and sum to 1.
nadir
List with named sub-arguments. Three options are available: 1) sim -- the number of simulations that should be used to estimate the nadir point, and seeds -- vector defining the random seeds for each simulation; 2) user -- a list of user-defined nadir values named after the respective objective function to which they apply; 3) abs -- logical for calculating the nadir point internally (experimental).
utopia
List with named sub-arguments. Two options are available: 1) user -- a list of user-defined values named after the respective objective function to which they apply; 2) abs -- logical for calculating the utopia point internally (experimental).

Value

  • optimCORR returns a matrix: the optimized sample configuration.

  • objCORR returns a numeric value: the energy state of the sample configuration, that is, the objective function value.

Jittering methods

There are two ways of jittering the coordinates. They differ on how the set of candidate locations is defined. The first method uses an infinite set of candidate locations, that is, any point in the spatial domain can be selected as the new location of a perturbed point. All that this method needs is a polygon indicating the boundary of the spatial domain. This method is not implemented in the spsann package (yet) because it is computationally demanding: every time a point is jittered, it is necessary to check if it is inside the spatial domain.

The second method consists of using a finite set of candidate locations for the perturbed points. A finite set of candidate locations is created by discretizing the spatial domain, that is, creating a fine grid of points that serve as candidate locations for the perturbed points. This is the only method currently implemented in the spsann package because it is one of the least computationally demanding.

Using a finite set of candidate locations has one important inconvenience. When a point is selected to be jittered, its new location may already be occupied by another point. If this happens, another location is sought iteratively, for as many times as there are points in points: the more points there are in points, the more likely it is that the new location is already occupied by another point. If no free location is found, the point selected to be jittered is kept at its original location, as sketched below.
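
As a rough illustration only (this is not the package's internal code), the retry logic might look as follows, assuming candi is a two-column matrix of projected coordinates, pts holds the sample as row indexes of candi, which.pt is the point chosen to be jittered, and x.max/y.max bound the perturbation:

# Illustrative sketch of jittering with a finite set of candidate locations
jitterFinite <- function(pts, candi, which.pt, x.max, y.max) {
  candi <- as.matrix(candi)            # two columns: x- and y-coordinates
  old <- candi[pts[which.pt], ]
  # candidate locations within the maximum allowed perturbation
  near <- which(abs(candi[, 1] - old[1]) <= x.max &
                abs(candi[, 2] - old[2]) <= y.max)
  # try as many times as there are points to find an unoccupied location
  for (i in seq_along(pts)) {
    new <- near[sample.int(length(near), 1)]
    if (!new %in% pts) {
      pts[which.pt] <- new
      return(pts)
    }
  }
  pts  # no free location found: keep the point at its original location
}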

A more elegant method can be devised by coupling a finite set of candidate locations with a form of two-stage random sampling, as implemented in spsample. Because the candidate locations are placed on a finite regular grid, they can be seen as the centre nodes of a finite set of grid cells (or pixels of a raster image). In the first stage, one of the grid cells is selected with replacement, that is, regardless of whether it is already occupied by another sample point. In the second stage, the new location for the point chosen to be jittered is selected within that grid cell by simple random sampling. This method guarantees that any location in the spatial domain can be a candidate location, and it removes the need to check whether the new location is already occupied by another point. It is not implemented (yet) in the spsann package.
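
A minimal sketch of this two-stage idea (again, not implemented in spsann), assuming the candidate locations are the centres of square grid cells with spacing res:

# Illustrative sketch of the two-stage jittering idea
jitterTwoStage <- function(candi, res) {
  candi <- as.matrix(candi)
  # first stage: draw a grid cell with replacement
  cell <- candi[sample.int(nrow(candi), 1), ]
  # second stage: simple random sampling within that cell
  c(x = runif(1, cell[1] - res / 2, cell[1] + res / 2),
    y = runif(1, cell[2] - res / 2, cell[2] + res / 2))
}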

Distance between two points

The distance between two points is computed as the Euclidean distance between them. This computation assumes that the optimization is operating in the two-dimensional Euclidean space, i.e. the coordinates of the sample points and candidate locations should not be provided as latitude/longitude. Package spsann has no mechanism to check whether the coordinates are projected: the user is responsible for making sure that this requirement is met.
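
For example, with two arbitrary pairs of projected coordinates:

# Euclidean distance between two points given by projected coordinates
a <- c(178605, 332757)
b <- c(179004, 330803)
sqrt(sum((a - b)^2))
# the same result is obtained with dist(rbind(a, b))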

Multi-objective optimization

A method of solving a multi-objective optimization problem is to aggregate the objective functions into a single utility function. In the spsann package, the aggregation is performed using the weighted sum method, which incorporates in the weights the preferences of the user regarding the relative importance of each objective function.

The weighted sum method is affected by the relative magnitude of the different function values. The objective functions implemented in the spsann package have different units and orders of magnitude. As a consequence, the objective function with the largest values will numerically dominate the optimization: the weights will not express the true preferences of the user, and the meaning of the utility function becomes unclear.

A solution to avoid the numerical dominance is to transform the objective functions so that they are constrained to the same approximate range of values. Several function-transformation methods can be used, and the spsann package offers a few of them. The upper-lower-bound approach requires the user to inform the maximum (nadir point) and minimum (utopia point) absolute function values. The resulting function values will always range between 0 and 1.
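
A small sketch of the weighted-sum aggregation combined with the upper-lower-bound transformation, using hypothetical function values, weights, and nadir and utopia points (the names CORR and PPL are used only for illustration):

# Hypothetical function values, weights, and nadir/utopia points for a MOOP
f       <- c(CORR = 0.42, PPL = 185)
weights <- c(CORR = 0.5,  PPL = 0.5)
nadir   <- c(CORR = 1,    PPL = 400)
utopia  <- c(CORR = 0,    PPL = 0)
# upper-lower-bound transformation: every value is scaled to [0, 1]
f_scaled <- (f - utopia) / (nadir - utopia)
# weighted-sum aggregation into a single utility (energy) value
sum(weights * f_scaled)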

Using the upper-bound approach requires the user to inform only the nadir point, while the utopia point is set to zero. The upper-bound approach for transformation aims at equalizing only the upper bounds of the objective functions. The resulting function values will always be smaller than or equal to 1.

Sometimes, the absolute maximum and minimum values of an objective function can be calculated exactly. This seems not to be the case for the objective functions implemented in the spsann package. If the user is uncomfortable with informing the nadir and utopia points, there is the option of using numerical simulations. It consists of computing the function value for many random sample configurations. The mean function value is used to set the nadir point, while the utopia point is set to zero. This approach is similar to the upper-bound approach, but the function values will have the same order of magnitude only at the starting point of the optimization. Function values larger than one are likely to occur during the optimization. We recommend avoiding this approach whenever possible because the effect of the starting point on the optimization as a whole usually is insignificant or arbitrary.
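
A rough sketch of this simulation approach for the CORR criterion, assuming candi and covars as defined in the Examples section below and an arbitrary number of simulations:

# Estimate the nadir point as the mean criterion value of random samples
sim_nadir <- replicate(10, {
  id <- sample(1:nrow(candi), 100)
  objCORR(points = cbind(id, candi[id, ]), candi = candi, covars = covars)
})
mean(sim_nadir)  # nadir point estimate; the utopia point is set to zero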

The upper-lower-bound approach with the Pareto maximum and minimum values is the most elegant solution to transform the objective functions. However, it is the most time consuming. It works as follows:

  1. Optimize a sample configuration with respect to each objective function that composes the MOOP;
  2. Compute the function value of every objective function for every optimized sample configuration;
  3. Record the maximum and minimum absolute function values computed for each objective function--these are the Pareto maximum and minimum.

For example, consider that a MOOP is composed of two objective functions (A and B). The minimum absolute value for function A is obtained when the sample configuration is optimized with respect to function A. This is the Pareto minimum of function A. Consequently, the maximum absolute value for function A is obtained when the sample configuration is optimized regarding function B. This is the Pareto maximum of function A. The same logic applies for function B.
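
The following sketch shows only the bookkeeping involved; objA, objB, confA and confB are hypothetical stand-ins for real objective functions and optimized sample configurations:

# Hypothetical objective functions and stand-ins for optimized configurations
objA <- function(conf) sum(conf^2)      # placeholder objective function A
objB <- function(conf) sum(abs(conf))   # placeholder objective function B
confA <- matrix(rnorm(20), ncol = 2)    # stand-in for the configuration optimized for A
confB <- matrix(rnorm(20), ncol = 2)    # stand-in for the configuration optimized for B
# cross-evaluate every configuration with every objective function
pareto <- rbind(A = c(objA(confA), objA(confB)),
                B = c(objB(confA), objB(confB)))
colnames(pareto) <- c("opt.A", "opt.B")
apply(pareto, 1, min)  # Pareto minimum of each objective function
apply(pareto, 1, max)  # Pareto maximum of each objective function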

Association/Correlation between covariates

The correlation between two numeric covariates is measured using Pearson's r, a descriptive statistic that ranges from $-1$ to $+1$. This statistic is also known as the linear correlation coefficient.

When the set of covariates includes factor covariates, all numeric covariates are transformed into factor covariates. The factor levels are defined using the marginal sampling strata created from one of the two methods available (equal-area or equal-range strata).
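
For illustration (this is not necessarily how the package implements it), a numeric covariate could be discretized into equal-area or equal-range strata as follows:

# Discretize a numeric covariate into four marginal sampling strata
x <- rnorm(100)
n <- 4
# equal-area strata: breaks at sample quantiles
equal_area <- cut(x, breaks = quantile(x, probs = seq(0, 1, length.out = n + 1)),
                  include.lowest = TRUE)
# equal-range strata: evenly spaced breaks over the range of x
equal_range <- cut(x, breaks = n)
table(equal_area)
table(equal_range)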

The association between two factor covariates is measured using Cramér's v, a descriptive statistic that ranges from $0$ to $+1$. The closer to $+1$ Cramér's v is, the stronger the association between the two factor covariates. The main weakness of Cramér's v is that, while Pearson's r shows both the degree and the direction (negative or positive) of the association between two covariates, Cramér's v measures only the degree of association (weak or strong).
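
As an illustration, Pearson's r can be computed with cor(), and Cramér's v can be derived from the chi-squared statistic of a contingency table using a generic textbook formula (not necessarily the exact implementation used by spsann):

# Pearson's r between two numeric covariates
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)
cor(x, y)
# Cramér's v between two factor covariates, from the chi-squared statistic
f1 <- cut(x, 4)
f2 <- cut(y, 4)
tab <- table(f1, f2)
chi2 <- suppressWarnings(chisq.test(tab))$statistic
sqrt(chi2 / (sum(tab) * (min(dim(tab)) - 1)))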

Concepts

simulated annealing, spatial trend

References

Pebesma, E.; Skoien, J.; with contributions from Baume, O.; Chorti, A.; Hristopulos, D. T.; Melles, S. J.; Spiliopoulos, G. intamapInteractive: procedures for automated interpolation - methods only to be used interactively, not included in intamap package. R package version 1.1-10, 2013.

van Groenigen, J.-W. Constrained optimization of spatial sampling: a geostatistical approach. Wageningen: Wageningen University, p. 148, 1999.

Cramér, H. Mathematical methods of statistics. Princeton: Princeton University Press, p. 575, 1946.

Everitt, B. S. The Cambridge dictionary of statistics. Cambridge: Cambridge University Press, p. 432, 2006.

Hyndman, R. J.; Fan, Y. Sample quantiles in statistical packages. The American Statistician, v. 50, p. 361-365, 1996.

Minasny, B.; McBratney, A. B. A conditioned Latin hypercube method for sampling in the presence of ancillary information. Computers & Geosciences, v. 32, p. 1378-1388, 2006.

Minasny, B.; McBratney, A. B. Conditioned Latin Hypercube Sampling for calibrating soil sensor data to soil properties. Chapter 9. Viscarra Rossel, R. A.; McBratney, A. B.; Minasny, B. (Eds.) Proximal Soil Sensing. Amsterdam: Springer, p. 111-119, 2010.

Mulder, V. L.; de Bruin, S.; Schaepman, M. E. Representing major soil variability at regional scale by constrained Latin hypercube sampling of remote sensing data. International Journal of Applied Earth Observation and Geoinformation, v. 21, p. 301-310, 2013.

Roudier, P.; Beaudette, D.; Hewitt, A. A conditioned Latin hypercube sampling algorithm incorporating operational constraints. 5th Global Workshop on Digital Soil Mapping. Sydney, p. 227-231, 2012.

Arora, J. Introduction to optimum design. Waltham: Academic Press, p. 896, 2011.

Marler, R. T.; Arora, J. S. Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization, v. 26, p. 369-395, 2004.

Marler, R. T.; Arora, J. S. Function-transformation methods for multi-objective optimization. Engineering Optimization, v. 37, p. 551-570, 2005.

Marler, R. T.; Arora, J. S. The weighted sum method for multi-objective optimization: new insights. Structural and Multidisciplinary Optimization, v. 41, p. 853-862, 2009.

See Also

clhs, cramer

Examples

# Candidate locations and covariate taken from the meuse.grid data set
require(sp)
data(meuse.grid)
candi <- meuse.grid[, 1:2]   # projected x- and y-coordinates
covars <- meuse.grid[, 5]    # a single numeric covariate
set.seed(2001)
# This example takes more than 5 seconds to run!
res <- optimCORR(points = 100, candi = candi, covars = covars,
                 use.coords = TRUE)
objSPSANN(res) # energy state of the optimized configuration: 0.06386069
objCORR(points = res, candi = candi, covars = covars, use.coords = TRUE)
# Energy state of a small random sample, for comparison
pts <- sample(1:nrow(candi), 5)
pts <- cbind(pts, candi[pts, ])
objCORR(points = pts, candi = candi, covars = covars, use.coords = TRUE)
