PWFSLSmoke (version 1.2.100)

monitor_performance: Calculate Monitor Prediction Performance

Description

This function uses confusion matrix analysis to calculate various measures of predictive performance for every timeseries in predicted with respect to the single timeseries found in observed.
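To make the classification step concrete, a 2x2 confusion matrix can be built from thresholded values with base R alone. The sketch below uses made-up vectors; the variable names, values, and the >= comparison are assumptions for illustration, not the package's internal code:

# Illustrative only: how thresholding yields a confusion matrix
predictedValues <- c(10, 42, 70, 12, 95)  # hypothetical predicted PM2.5 values
observedValues  <- c( 8, 55, 65, 30, 90)  # hypothetical observed PM2.5 values
t1 <- 35.5                                # threshold for predicted values
t2 <- 35.5                                # threshold for observed values
table(predicted = predictedValues >= t1,  # 2x2 confusion matrix of
      observed  = observedValues  >= t2)  # exceedance classifications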

The requested metric is returned in a dataframe organized with one row per monitor. If metric = NULL (the default), all available metrics are returned.
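For example, a single metric can be requested by name. This is a hedged sketch: "heidkeSkill" is taken from the column name used in the example below, on the assumption that column names double as valid metric arguments:

# Return only the Heidke skill score column (sketch, not run)
heidke <- monitor_performance(predicted, observed, t1, t2,
                              metric = "heidkeSkill")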

Usage

monitor_performance(predicted, observed, t1, t2, metric = NULL,
  FPCost = 1, FNCost = 1)

Arguments

predicted

ws_monitor object with predicted data

observed

ws_monitor object with observed data

t1

threshold used to classify predicted measurements

t2

threshold used to classify observed measurements

metric

confusion matrix metric to be used

FPCost

cost associated with false positives (type I error)

FNCost

cost associated with false negatives (type II error)
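The two cost arguments weight the relative penalty of each error type. Below is a minimal sketch of how such a weighted cost could be combined from confusion matrix counts; the linear formula is an assumption for illustration, not necessarily the package's exact definition:

# Hypothetical linear cost weighting (illustrative formula)
FP <- 4; FN <- 2            # counts of false positives / false negatives
FPCost <- 1; FNCost <- 2    # penalize missed events twice as heavily
FP * FPCost + FN * FNCost   # combined misclassification cost: 8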

Value

Dataframe with one row per monitor and one column per named performance measure.

See Also

monitor_performanceMap

skill_confusionMatrix

Examples

# If daily avg data were the prediction and Spokane were
# the observed, which WA State monitors had skill?
library(PWFSLSmoke)

wa <- airnow_loadAnnual(2017) %>% monitor_subset(stateCodes='WA')
wa_dailyAvg <- monitor_dailyStatistic(wa, mean)
Spokane_dailyAvg <- monitor_subset(wa_dailyAvg, monitorIDs='530630021_01')
threshold <- AQI$breaks_24[4] # Unhealthy
performanceMetrics <- monitor_performance(wa_dailyAvg,
                                          Spokane_dailyAvg,
                                          threshold, threshold)
monitorIDs <- rownames(performanceMetrics)
mask <- performanceMetrics$heidkeSkill > 0 &
        !is.na(performanceMetrics$heidkeSkill)
skillfulIDs <- monitorIDs[mask]
skillful <- monitor_subset(wa_dailyAvg, monitorIDs=skillfulIDs)
monitor_leaflet(skillful)
