This function estimates the difference, absolute difference, and squared
difference in the x, y, and z coordinates of two sets of ground control points
(GCP). It also estimates the module (length) of the difference vector, its
square, and its azimuth. The result is a data frame ready to be used to define
an object of class spsurvey.analysis.
gcpDiff(measured, predicted, type = "xy", aggregate = FALSE, rounding = 0)

measured: Object of class SpatialPointsDataFrame with the reference GCP.
A column named ‘siteID’ giving case names is mandatory. See ‘Details’,
item ‘Type of data’.

predicted: An object of class SpatialPointsDataFrame with the point data
being validated. A column named ‘siteID’ giving case names is mandatory.
See ‘Details’, item ‘Type of data’.

type: Type of data under analysis. Defaults to type = "xy". See
‘Details’, item ‘Type of data’.

aggregate: Logical for aggregating the data when it comes from cluster
sampling. Used only when type = "z". Defaults to aggregate = FALSE.
See ‘Details’, item ‘Data aggregation’.

rounding: Rounding level of the data in the output data frame.
An object of class data.frame ready to be used to feed the
argument data.cont when creating a spsurvey.analysis object.
Two types of validation data can be submitted to function
gcpDiff(): those coming from horizontal (positional) validation
exercises (type = "xy"), and those coming from vertical validation
exercises (type = "z").
Horizontal (positional) validation exercises compare the position of
measured point data with the position of predicted point data.
Horizontal displacement (error) is measured in both ‘x’ and
‘y’ coordinates, and is used to calculate the error vector (module)
and its azimuth. Both objects measured and predicted used
with function gcpDiff() must be of class
SpatialPointsDataFrame. They must have at least one column named
‘siteID’ giving the identification of every case. Matching of case
IDs is mandatory. Other columns are discarded.
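A minimal sketch of a horizontal validation run is given below. It assumes the
sp package is attached (for SpatialPointsDataFrame), that gcpDiff() is
available from its package, and that the horizontal displacement is computed
from the point coordinates; all coordinates, site IDs, and object names are
made up for illustration.

library(sp)

## Reference (measured) and model-derived (predicted) GCP; the mandatory
## 'siteID' column must use the same case IDs in both objects
measured <- SpatialPointsDataFrame(
  coords = cbind(x = c(100, 200, 300), y = c(150, 250, 350)),
  data = data.frame(siteID = c("gcp1", "gcp2", "gcp3")))
predicted <- SpatialPointsDataFrame(
  coords = cbind(x = c(102, 197, 304), y = c(149, 253, 347)),
  data = data.frame(siteID = c("gcp1", "gcp2", "gcp3")))

## Differences in 'x' and 'y', their absolute and squared values, and the
## error vector (module) and azimuth, rounded to two decimal places
xy.err <- gcpDiff(measured, predicted, type = "xy", rounding = 2)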
Vertical validation exercises are interested in comparing the
measured value of a variable at a given location with that
predicted by some model. In this case, error statistics are
calculated only for the vertical displacement (error) in the ‘z’
coordinate. Both objects measured and predicted used with
function gcpDiff() must be of class SpatialPointsDataFrame.
They also must have a column named ‘siteID’ giving the identification
of every case. Again, matching of case IDs is mandatory. In addition, both
objects must have a column named ‘z’ which contains the values of the
‘z’ coordinate. Other columns are discarded.
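A comparable sketch for a vertical validation exercise follows; the point
coordinates locate the cases and the mandatory ‘z’ column holds the measured
and predicted values (all values below are made up).

library(sp)

xy <- cbind(x = c(100, 200, 300), y = c(150, 250, 350))
measured <- SpatialPointsDataFrame(
  coords = xy,
  data = data.frame(siteID = c("s1", "s2", "s3"), z = c(5.2, 7.8, 6.1)))
predicted <- SpatialPointsDataFrame(
  coords = xy,
  data = data.frame(siteID = c("s1", "s2", "s3"), z = c(5.0, 8.1, 6.4)))

## Error statistics for the vertical displacement in the 'z' coordinate only
z.err <- gcpDiff(measured, predicted, type = "z")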
Validation is sometimes performed using cluster or transect sampling. Before
estimation of error statistics, the data needs to be aggregated by cluster
or transect. The function gcpDiff() aggregates validation data of
type = "z" calculating the mean value per cluster. Thus, aggregation
can only be properly done if the ‘siteID’ column of both objects
measured and predicted provides the identification of
clusters. Setting aggregate = TRUE will return aggregated estimates
of error statistics. If the data has been aggregated beforehand, the
parameter aggregate can be set to FALSE.
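The sketch below illustrates this with made-up cluster data: ‘siteID’
identifies the cluster rather than the individual observation, so setting
aggregate = TRUE averages the ‘z’ values per cluster before the error
statistics are computed.

library(sp)

## Two observations per cluster; 'siteID' repeats the cluster label
xy <- cbind(x = c(100, 110, 200, 210), y = c(150, 160, 250, 260))
measured <- SpatialPointsDataFrame(
  coords = xy,
  data = data.frame(siteID = c("c1", "c1", "c2", "c2"),
                    z = c(5.2, 5.6, 7.8, 7.4)))
predicted <- SpatialPointsDataFrame(
  coords = xy,
  data = data.frame(siteID = c("c1", "c1", "c2", "c2"),
                    z = c(5.0, 5.5, 8.1, 7.6)))

## The mean 'z' value per cluster is used to compute the error statistics
z.err <- gcpDiff(measured, predicted, type = "z", aggregate = TRUE)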
There are circumstances in which the number of cases in the object
measured is larger than that in the object predicted. The
function gcpDiff() compares the number of cases in both objects and
automatically drops those cases of object measured that do not match
the cases of object predicted. However, case matching can only be
done if case IDs are exactly the same for both objects. Otherwise, estimated
error statistics will have no meaning at all.
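As a hypothetical illustration of the automatic case matching, the extra case
in measured below has no counterpart in predicted and is dropped before the
error statistics are computed.

library(sp)

measured <- SpatialPointsDataFrame(
  coords = cbind(x = c(100, 200, 300, 400), y = c(150, 250, 350, 450)),
  data = data.frame(siteID = c("p1", "p2", "p3", "p4")))
predicted <- SpatialPointsDataFrame(
  coords = cbind(x = c(102, 197, 304), y = c(149, 253, 347)),
  data = data.frame(siteID = c("p1", "p2", "p3")))

## Case 'p4' has no match in 'predicted' and is discarded automatically
xy.err <- gcpDiff(measured, predicted, type = "xy")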
Kincaid, T. M. and Olsen, A. R. (2013). spsurvey: Spatial Survey Design and Analysis. R package version 2.6. URL: http://www.epa.gov/nheerl/arm/.
## Not run:
if (require(spsurvey)) {
  ## Create an spsurvey.analysis object
  my.spsurvey <-
    spsurvey.analysis(design = coordenadas(my.data),
                      data.cont = delta(ref.data, my.data),
                      popcorrect = TRUE, pcfsize = length(my.data$id),
                      support = rep(1, length(my.data$id)),
                      wgt = rep(1, length(my.data$id)), vartype = "SRS")
}
## End(Not run)