searchAnalyzeR (version 0.1.0)

BenchmarkValidator: Benchmark Validation System

Description

A comprehensive validation framework for testing search strategies against established benchmark datasets across multiple domains.

Fields

benchmarks

List of benchmark datasets with known relevant articles

Methods

new()

Initialize a new BenchmarkValidator instance

validate_strategy(search_strategy, benchmark_name)

Validate against specific benchmark

cross_domain_validation(search_strategy)

Test across multiple domains

sensitivity_analysis(base_strategy, parameter_ranges)

Parameter sensitivity testing

Public fields

benchmarks

List of benchmark datasets

Methods


Method new()

Creates a new BenchmarkValidator instance and loads benchmark datasets. This method is called automatically when creating a new validator with BenchmarkValidator$new().

Usage

BenchmarkValidator$new()

Returns

No return value, called for side effects (loading benchmarks)


Method add_benchmark()

Add a custom benchmark dataset to the validator.

Usage

BenchmarkValidator$add_benchmark(name, corpus, relevant_ids)

Arguments

name

Name of the benchmark

corpus

Data frame with article corpus

relevant_ids

Vector of relevant article IDs

Returns

No return value, called for side effects


Method validate_strategy()

Validate a search strategy against a named benchmark, or against all benchmarks.

Usage

BenchmarkValidator$validate_strategy(search_strategy, benchmark_name = "all")

Arguments

search_strategy

Search strategy object

benchmark_name

Name of benchmark dataset

Returns

Validation results


Method validate_single_benchmark()

Validate a search strategy against a single benchmark (a public method).

Usage

BenchmarkValidator$validate_single_benchmark(search_strategy, benchmark_name)

Arguments

search_strategy

Search strategy object

benchmark_name

Name of benchmark dataset

Returns

Validation results


Method cross_domain_validation()

Test a search strategy across multiple benchmark domains.

Usage

BenchmarkValidator$cross_domain_validation(search_strategy)

Arguments

search_strategy

Search strategy object

Returns

Cross-domain validation results


Method sensitivity_analysis()

Run a sensitivity analysis over search parameters.

Usage

BenchmarkValidator$sensitivity_analysis(base_strategy, parameter_ranges)

Arguments

base_strategy

Base search strategy

parameter_ranges

List of parameter ranges to test

Returns

Sensitivity analysis results
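
A short sketch of a sensitivity run. The structure of parameter_ranges shown here (a named list of candidate term sets) is an assumption for illustration only; check the package documentation for the exact format it expects.

```r
# Base strategy to perturb
base_strategy <- list(
  terms = c("systematic review", "meta-analysis"),
  databases = c("PubMed", "Embase")
)

# Hypothetical parameter ranges -- names and structure are illustrative
parameter_ranges <- list(
  terms = list(
    c("systematic review"),
    c("systematic review", "meta-analysis")
  )
)

validator <- BenchmarkValidator$new()
sens <- validator$sensitivity_analysis(base_strategy, parameter_ranges)
```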


Method clone()

The objects of this class are cloneable with this method.

Usage

BenchmarkValidator$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
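
Because BenchmarkValidator is an R6 class, plain assignment copies the reference, not the object; use clone() when an independent copy is needed. A minimal sketch:

```r
validator <- BenchmarkValidator$new()

alias <- validator                     # same object: changes to one affect the other
copy  <- validator$clone(deep = TRUE)  # independent copy, fields deep-copied
```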

Details

The BenchmarkValidator class provides tools for:

  • Cross-domain validation across medical, environmental, and social science domains

  • Sensitivity analysis for search parameters

  • Statistical comparison of strategy performance

  • Reproducible benchmark testing

Examples

# Load the package and create a validator
library(searchAnalyzeR)
validator <- BenchmarkValidator$new()

# Check available benchmarks
print(names(validator$benchmarks))

# Define search strategy
strategy <- list(
  terms = c("systematic review", "meta-analysis"),
  databases = c("PubMed", "Embase")
)

# Create sample data for validation
sample_data <- data.frame(
  id = paste0("art", 1:20),
  title = paste("Article", 1:20),
  abstract = paste("Abstract", 1:20),
  source = "Journal",
  date = Sys.Date()
)

# Add custom benchmark
validator$add_benchmark("custom", sample_data, paste0("art", 1:5))

# Validate against custom benchmark
results <- validator$validate_strategy(strategy, "custom")
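
Continuing the example above, the same strategy can also be checked against every loaded benchmark at once, or across the built-in benchmark domains. The shape of the returned results is not pinned down here; inspect the objects (e.g. with str()) to see what each call produces.

```r
# Validate against all loaded benchmarks (the default benchmark_name)
all_results <- validator$validate_strategy(strategy)

# Cross-domain validation over the built-in benchmark domains
cross_results <- validator$cross_domain_validation(strategy)

# Inspect the structure of the returned results
str(cross_results, max.level = 1)
```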
