1. Filter Methods:
1a. Neighborhood Methods:
Neighborhood methods generally apply a convolution kernel smoother to one or both of the fields in the verification set, and then apply the traditional scores. Most of the methods reviewed in Ebert (2008, 2009) are included in this package. The main functions are: hoods2dPrep, hoods2d, pphindcast2d, kernel2dsmooth, and plot.hoods2d.
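As a concrete illustration of the neighborhood idea, here is a minimal, self-contained base-R sketch of the fractions skill score (FSS) of Roberts and Lean (2008) at a single neighborhood size. The function name, threshold, and neighborhood length are illustrative only; hoods2d and kernel2dsmooth perform this kind of computation far more efficiently.
fss.sketch <- function(fcst, obs, threshold = 1, n = 3) {
    Ix <- (fcst >= threshold) * 1  # binary forecast events
    Iy <- (obs >= threshold) * 1   # binary observed events
    box <- function(Z) {           # slow boxcar neighborhood fraction (n x n, n odd)
        h <- (n - 1)/2
        out <- Z
        for(i in 1:nrow(Z)) for(j in 1:ncol(Z)) {
            ri <- max(1, i - h):min(nrow(Z), i + h)
            rj <- max(1, j - h):min(ncol(Z), j + h)
            out[i, j] <- mean(Z[ri, rj])
        }
        return(out)
    }
    Px <- box(Ix)
    Py <- box(Iy)
    return(1 - mean((Px - Py)^2)/(mean(Px^2) + mean(Py^2)))  # FSS (Roberts and Lean, 2008)
}
## e.g., fss.sketch(pert004, pert000, threshold = 1, n = 9)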
1b. Scale Separation Methods:
Scale separation refers to the idea of applying a band-pass filter (and/or performing a multi-resolution analysis, MRA) to the verification set. Typically, skill is assessed on a scale-by-scale basis, but other techniques are also applied: for example, denoising the field before applying traditional statistics, using the variogram, or applying a statistical test based on the variogram (e.g., Marzban and Sandgathe, 2009); these last are less in the spirit of scale separation, but are at least related.
There is functionality to perform the wavelet methods proposed in Briggs and Levine (1997). In particular, to simply denoise the fields before applying traditional verification statistics, use wavePurifyVx. To apply verification statistics to the detail fields (in either wavelet or field space), use waverify2d (dyadic fields) or mowaverify2d (dyadic or non-dyadic fields). The intensity-scale technique introduced in Casati et al. (2004), along with the new developments of the technique proposed in Casati (2010), can be performed with waveIS.
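For orientation, the following is a conceptual sketch of the scale-separation idea using the waveslim package directly (one standard R implementation of the 2-d discrete wavelet transform), independent of the package functions above: decompose a dyadic field and inspect the energy captured at each scale. It assumes UKobs6 is stored as a 256 x 256 matrix, and the number of levels J is illustrative.
library(waveslim)
data(UKobs6)                              # assumed here to be a 256 x 256 matrix
wd <- dwt.2d(UKobs6, wf = "haar", J = 5)  # 2-d discrete (Haar) wavelet transform
sapply(wd, function(z) sum(z^2))          # energy in each detail/smooth component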
Although not strictly a "scale separation" method, the structure function (for which the variogram is a special case) is in the same spirit in the sense that it analyzes the field at different separation distances, and these "scales" are separate from each other (i.e., the score does not necessarily improve or decline as the scale increases). This package contains essentially wrapper functions to the vgram.matrix and plot.vgram.matrix functions from the fields package. It also has a function (variogram.matrix) that is a modification of vgram.matrix allowing for missing values. The primary function for this is griddedVgram, which has a plot method function.
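A minimal sketch of the underlying structure-function computation, calling the fields package directly rather than the griddedVgram wrapper; the data set and the maximum separation distance R are illustrative.
library(fields)
data(geom001)
look <- vgram.matrix(geom001, R = 10)  # empirical variogram out to separation distance R
plot.vgram.matrix(look)                # image plot of the directional variogram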
2. Displacement Methods:
In Gilleland et al. (2009), this category was broken into two main types: field deformation and features-based. The former lumped together binary image measures/metrics with field deformation techniques because the binary image measures inform about the "similarity" (or dissimilarity) between the spatial extent of two fields (across the entire field). Here, they are broken down further into those that yield only a single metric or measure (or a small vector of them), called location measures, and those that have mechanisms for moving grid-point locations to better match the fields spatially (field deformation).
2a. Location Measures:
Included here are the well-known Hausdorff metric, the partial Hausdorff measure, FQI (Venugopal et al., 2005), Baddeley's delta metric (Gilleland, 2011; Schwedler and Baldwin, 2011), metrV (Zhu et al., 2011), as well as the localization performance measures described in Baddeley (1992): mean error distance, mean square error distance, and Pratt's figure of merit (FOM).
The main functions are: locmeasures2dPrep, locmeasures2d, metrV, distob, and locperf.
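As an illustration of one of these localization measures, Pratt's figure of merit is conventionally written as below; the scaling constant is usually taken as alpha = 1/9, and the locperf help file gives the exact form used by this package.
$$ \mathrm{FOM} = \frac{1}{\max(N_A, N_B)} \sum_{i=1}^{N_B} \frac{1}{1 + \alpha d_i^2}, $$
where $N_A$ and $N_B$ are the numbers of events (nonzero grid points) in the observed and forecast binary fields, and $d_i$ is the distance from the $i$-th forecast event to the nearest observed event.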
2b. Field Deformation:
Coming soon. An image warping package is in the works and will be released as a separate package, but included as a dependency of this package, potentially with some wrapper functions specific to spatial forecast verification needs. Other field deformation approaches are also in the hopper, and will be made available as soon as they are ready.
2c. Features-based methods: These methods are also sometimes called object-based methods, and have many similarities to techniques used in Object-Based Image Analysis (OBIA), a relatively new research area that has emerged primarily as a result of advances in earth observation sensors and GIScience (Blaschke et al., 2008). The aim is to identify individual features within a field and subsequently analyze the fields on a feature-by-feature basis. This may involve intensity error information in addition to location-specific error information. Additionally, contingency table verification statistics can be found using new definitions for hits, misses, and false alarms (correct negatives are more difficult to assess, but can also be done).
Currently, there is some functionality for performing the analyses introduced in Davis et al. (2006, 2009), including the merge/match algorithm of Gilleland et al. (2008), as well as the SAL technique of Wernli et al. (2008, 2009); a minimal sketch of the SAL amplitude component is given after the function list below.
The main functions are: FeatureSuitePrep, FeatureFunPrep, FeatureSuite, saller, deltamm, convthresh, and FeatureMatchAnalyzer.
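As noted above, here is a minimal base-R sketch of just the amplitude component A of SAL (Wernli et al., 2008), which depends only on the domain-average values; the saller function also returns the structure (S) and location (L) components, which require the identified features. The choice of fields is illustrative.
data(pert004)
data(pert000)
Df <- mean(pert004)               # domain-average forecast value
Do <- mean(pert000)               # domain-average observed value
A <- (Df - Do)/(0.5 * (Df + Do))  # amplitude component; in [-2, 2], 0 is perfect
A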
2d. Geometrical characterization measures:
Perhaps the measures in this sub-heading are best described as part-and-parcel of 2c. They are certainly useful in that domain, but have been proposed also for entire fields by AghaKouchak et al. (2011); though similar measures have been applied in, e.g., MODE. The measures introduced in AghaKouchak et al. (2011) available here are: connectivity index (Cindex), shape index (Sindex), and area index (Aindex). See the help files for each to learn more.
3. Statistical inferences for spatial (and/or spatiotemporal) fields:
In addition to the methods categorized in Gilleland et al. (2009), there are also functions for making comparisons between two spatial fields. Currently, this entails only one such method (more to come), which requires temporal information as well. It is the field significance approach detailed in Elmore et al. (2006). It involves using a circular block bootstrap (CBB) algorithm (usually for the mean error) at each grid point (or location) individually to determine grid-point significance (null hypothesis that the mean error is zero), and then a semi-parametric Monte Carlo method following Livezey and Chen (1983) to determine field significance; a minimal sketch of the CBB resampling step is given after the function list below.
The functions involved are: spatbiasFS, LocSig, and MCdof.
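As noted above, this is a conceptual base-R sketch of a single circular block bootstrap resample of the error series at one grid point, the building block of the grid-point test; spatbiasFS and LocSig carry out the full procedure, including the field-significance step. The function name and block length are illustrative.
cbb.resample <- function(err, block.length = 5) {
    n <- length(err)
    nblocks <- ceiling(n/block.length)
    starts <- sample(1:n, nblocks, replace = TRUE)  # random block start points
    idx <- unlist(lapply(starts, function(s) ((s - 1 + 0:(block.length - 1)) %% n) + 1))
    return(err[idx[1:n]])  # blocks wrap circularly; truncate to the original length
}
## e.g., bootstrap distribution of the mean error at a single location:
## replicate(1000, mean(cbb.resample(error.series)))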
In addition, the mean loss differential approach of Hering and Genton (2011) is included via the function spatMLD, including the loss functions absolute error (abserrloss), square error (sqerrloss), and correlation skill (corrskill), as well as the distance map loss function (distmaploss) introduced in Gilleland (2012).
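To fix ideas, for two competing forecasts of the same observations under a loss function $g$, the approach is based on the loss differential field, stated schematically below (the notation $x$, $\hat{y}_1$, $\hat{y}_2$ is introduced here for illustration; see the spatMLD help file for the exact formulation used by the package):
$$ D(\mathbf{s}) = g\big(x(\mathbf{s}), \hat{y}_1(\mathbf{s})\big) - g\big(x(\mathbf{s}), \hat{y}_2(\mathbf{s})\big), $$
and the null hypothesis of equal predictive ability, $E[D(\mathbf{s})] = 0$, is tested with a statistic of the form $\bar{D}/\sqrt{\widehat{\mathrm{Var}}(\bar{D})}$, where the variance estimate accounts for the spatial correlation in $D$.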
4. Other:
The bias corrected TS and ETS (or TS dHdA and ETS dHdA) introduced in Mesinger (2008) are now included within the vxstats function.
Also included is a function to do the geographic box-plot of Willmott et al. (2007). See the help file for GeoBoxPlot.
Datasets:
All of the initial Spatial Forecast Verification Inter-Comparison Project (ICP, http://www.ral.ucar.edu/projects/icp) data sets used in the special collection of the Weather and Forecasting journal are included. See the help file for 'obs0426', which will give information on all of the datasets included, as well as an example for plotting the data.
Ebert (2008) provides a nice review of these methods. Roberts and Lean (2008) describe one of the methods, as well as the primary boxcar kernel smoothing method used throughout this package. Gilleland et al. (2009, 2010) provide an overview of most of the recently proposed methods, and Ahijevych et al. (2009) describe the data sets included in this package. Some of these methods have been applied to the ICP test cases in Ebert (2009).
Additionally, one of the NIMROD cases (as provided by the UK Met Office) analyzed in Casati et al. (2004) (case 6) is included along with approximate lon/lat locations. See the help file for UKobs6 for more information.
A spatio-temporal verification dataset is also included for testing the method of Elmore et al. (2006). See the help file for 'GFSNAMfcstEx' for more information.
Ahijevych, D., E. Gilleland, B.G. Brown, and E.E. Ebert, 2009. Application of spatial verification methods to idealized and NWP gridded precipitation forecasts. Wea. Forecasting, 24 (6), 1485--1497.
Baddeley, A. J., 1992: An error metric for binary images. In Robust Computer Vision Algorithms, W. Forstner and S. Ruwiedel, Eds., Wichmann, 59--78.
Blaschke, T., S. Lang, and G. Hay (Eds.), 2008: Object-Based Image Analysis. Springer-Verlag, Berlin, Germany, 818 pp.
Briggs, W. M. and R. A. Levine, 1997. Wavelets and field forecast verification. Mon. Wea. Rev., 125, 1329--1341.
Casati, B., 2010: New Developments of the Intensity-Scale Technique within the Spatial Verification Methods Inter-Comparison Project. Wea. Forecasting 25, (1), 113--143, DOI: 10.1175/2009WAF2222257.1.
Casati, B., G. Ross, and D. B. Stephenson, 2004: A new intensity-scale approach for the verification of spatial precipitation forecasts. Meteorol. Appl., 11, 141--154.
Davis, C. A., B. G. Brown, and R. G. Bullock, 2006: Object-based verification of precipitation forecasts, Part I: Methodology and application to mesoscale rain areas. Mon. Wea. Rev., 134, 1772--1784.
Ebert, E. E., 2008: Fuzzy verification of high resolution gridded forecasts: A review and proposed framework. Meteorol. Appl., 15, 51--64, DOI: 10.1002/met.25. Available at http://www.ecmwf.int/newsevents/meetings/workshops/2007/jwgv/METspecialissueemail.pdf
Ebert, E. E., 2009: Neighborhood verification: A strategy for rewarding close forecasts. Wea. Forecasting, 24, 1498-1510, DOI: 10.1175/2009WAF2222251.1.
Elmore, K. L., M. E. Baldwin, and D. M. Schultz, 2006: Field significance revisited: Spatial bias errors in forecasts as applied to the Eta model. Mon. Wea. Rev., 134, 519--531.
Gilleland, E., 2012: Testing competing precipitation forecasts accurately and efficiently: The spatial prediction comparison test. Submitted to Mon. Wea. Rev.
Gilleland, E., 2011. Spatial forecast verification: Baddeley's delta metric applied to the ICP test cases. Wea. Forecasting, 26, 409--415, DOI: 10.1175/WAF-D-10-05061.1.
Gilleland, E., T. C. M. Lee, J. Halley Gotway, R. G. Bullock, and B. G. Brown, 2008: Computationally efficient spatial forecast verification using Baddeley's delta image metric. Mon. Wea. Rev., 136, 1747--1757.
Gilleland, E., D. Ahijevych, B.G. Brown, B. Casati, and E.E. Ebert, 2009. Intercomparison of Spatial Forecast Verification Methods. Wea. Forecasting, 24, 1416--1430, DOI: 10.1175/2009WAF2222269.1.
Gilleland, E., D.A. Ahijevych, B.G. Brown and E.E. Ebert, 2010: Verifying Forecasts Spatially. Bull. Amer. Meteor. Soc., October, 1365--1373.
Hering, A. S. and M. G. Genton, 2011: Comparing spatial predictions. Technometrics, 53 (4), 414--425.
Livezey, R. E. and W. Y. Chen, 1983: Statistical field significance and its determination by Monte Carlo techniques. Mon. Wea. Rev., 111, 46--59.
Marzban, C. and S. Sandgathe, 2009: Verification with variograms. Wea. Forecasting, 24 (4), 1102--1120, doi: 10.1175/2009WAF2222122.1
Mesinger, F., 2008: Bias adjusted precipitation threat scores. Adv. Geosci., 16, 137--142.
Roberts, N. M. and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78--97. DOI: 10.1175/2007MWR2123.1.
Schwedler, B. R. J. and M. E. Baldwin, 2011. Diagnosing the sensitivity of binary image measures to bias, location, and event frequency within a forecast verification framework. Wea. Forecasting, 26, 1032--1044, doi: 10.1175/WAF-D-11-00032.1.
Venugopal, V., S. Basu, and E. Foufoula-Georgiou, 2005: A new metric for comparing precipitation patterns with an application to ensemble forecasts. J. Geophys. Res., 110, D08111, doi:10.1029/2004JD005395, 11pp.
Wernli, H., M. Paulat, M. Hagen, and C. Frei, 2008. SAL--A novel quality measure for the verification of quantitative precipitation forecasts. Mon. Wea. Rev., 136, 4470--4487.
Wernli, H., C. Hofmann, and M. Zimmer, 2009. Spatial forecast verification methods intercomparison project: Application of the SAL technique. Wea. Forecasting, 24, 1472--1484, DOI: 10.1175/2009WAF2222271.1
Willmott, C. J., S. M. Robeson, and K. Matsuura, 2007. Geographic box plots. Physical Geography, 28, 331--344, DOI: 10.2747/0272-3646.28.4.331.
Zhu, M., V. Lakshmanan, P. Zhang, Y. Hong, K. Cheng, and S. Chen, 2011: Spatial verification using a true metric. Atmos. Res., 102, 408--419, doi:10.1016/j.atmosres.2011.09.004.
## The example below may take a few minutes to run,
## and therefore is commented out in order for R's
## package checks to run in a shorter amount of time.
## Compare the results to Ebert (2009) Fig. 5.
##
## Note that 'levels' refer to number of grid points,
## which for this example are ~4 km, so that the
## plot labels differ by a factor of 4 from Ebert (2009).
## Future versions of this package will allow for different
## labelling.
## Set up the verification set: forecast pert004 vs. observation pert000.
data(pert004)
data(pert000)
hold <- hoods2dPrep("pert004", "pert000", thresholds = c(1, 2, 5, 10, 20, 50),
    levels = c(1, 3, 5, 9, 17, 33, 65, 129, 257), units = "mm/h")
## Apply neighborhood methods (here, FSS and the multi-event contingency table).
look <- hoods2d(hold, which.methods = c("fss", "multi.event"), verbose = TRUE)
plot(look)
## Practically perfect hindcast comparison.
look2 <- pphindcast2d(hold, verbose = TRUE)
look2
# Location measures.
data(geom000)
data(geom001)
hold <- locmeasures2dPrep("geom001", "geom000", thresholds = c(0.1, 50.1),
    k = c(4, 0.975), alpha = c(0.1, 0.9), units = "in/100")
hold2 <- locmeasures2d(hold)
summary(hold2)