Description

Summarizes the screening effort of a reviewing team: the number of studies screened, the number identified for inclusion in or exclusion from the project, and those with conflicting decisions on their inclusion/exclusion. If a dual (paired) design was implemented to screen references, it also provides an inter-reviewer agreement estimate following Cohen (1960) that describes the agreement (or repeatability) of screening/coding decisions. The magnitudes of these inter-reviewer agreement estimates are interpreted following the benchmarks of Landis and Koch (1977).
Usage

effort_summary(
  aDataFrame,
  column_reviewers = "REVIEWERS",
  column_effort = "INCLUDE",
  dual = FALSE,
  quiet = FALSE
)
Arguments

aDataFrame
    A data.frame containing the titles and abstracts that were screened by a team. The default assumes that the data.frame is the merged effort across the team generated with effort_merge.
Changes the default label of the "REVIEWERS" column that contains the screening efforts of each team member.
Changes the default label of the "INCLUDE" column that contains the screening decisions (coded references) of each team member.
dual
    When TRUE, provides a summary of the dual screening effort as well as an estimate of inter-reviewer agreement following Cohen's (1960) kappa (K) and Landis and Koch's (1977) interpretation benchmarks; see the kappa sketch following the argument descriptions.
quiet
    When TRUE, does not print the summary table to the console.
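
For intuition on what the dual-design summary reports, below is a minimal sketch of Cohen's (1960) kappa, K = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement between the two reviewers and p_e is the agreement expected by chance. This is illustrative only: the decision vectors and the cohen_kappa() and interpret_kappa() helpers are hypothetical, not the function's internal implementation.

# hypothetical include/exclude decisions from two reviewers who
# dual (paired) screened the same eight references
reviewer_A <- c("YES", "YES", "NO", "NO", "YES", "NO", "NO", "YES")
reviewer_B <- c("YES", "NO", "NO", "NO", "YES", "NO", "YES", "YES")

# Cohen's (1960) kappa: K = (p_o - p_e) / (1 - p_e)
cohen_kappa <- function(x, y, levels = c("NO", "YES")) {
  tab <- table(factor(x, levels = levels), factor(y, levels = levels))
  n <- sum(tab)
  p_observed <- sum(diag(tab)) / n                      # observed agreement
  p_expected <- sum(rowSums(tab) * colSums(tab)) / n^2  # chance agreement
  (p_observed - p_expected) / (1 - p_expected)
}

# Landis and Koch's (1977) interpretation benchmarks
interpret_kappa <- function(K) {
  cut(K, breaks = c(-Inf, 0.00, 0.20, 0.40, 0.60, 0.80, 1.00),
      labels = c("poor", "slight", "fair", "moderate",
                 "substantial", "almost perfect"))
}

K <- cohen_kappa(reviewer_A, reviewer_B)  # 0.5 for the vectors above
interpret_kappa(K)                        # "moderate"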
Value

A data frame with summary information on the screening tasks of a reviewing team.
References

Cohen, J. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20: 37-46.

Landis, J.R., and Koch, G.G. 1977. The measurement of observer agreement for categorical data. Biometrics 33: 159-174.
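
Examples

A hypothetical end-to-end workflow is sketched below. The object names are illustrative only, and effort_merge() is assumed to be called with its defaults (see its own documentation for its arguments).

# merge each team member's screening effort into one data.frame
theTeamsEffort <- effort_merge()

# tally the inclusions, exclusions, and screening conflicts
theSummary <- effort_summary(theTeamsEffort)

# for a dual (paired) screening design, also estimate inter-reviewer
# agreement with Cohen's kappa
theSummary <- effort_summary(theTeamsEffort, dual = TRUE)

# suppress printing and keep only the returned summary data frame
theSummary <- effort_summary(theTeamsEffort, quiet = TRUE)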