The col_schema_match() validation function, the expect_col_schema_match() expectation function, and the test_col_schema_match() test function all work in conjunction with a col_schema object (generated through the col_schema() function) to determine whether the expected schema matches that of the target table. The validation function can be used directly on a data table or with an agent object (technically, a ptblank_agent object) whereas the expectation and test functions can only be used with a data table. The types of data tables that can be used include data frames, tibbles, database tables (tbl_dbi), and Spark DataFrames (tbl_spark).
The validation step or expectation operates over a single test unit, which is whether the schema matches that of the table (within the constraints enforced by the complete, in_order, and is_exact options). If the target table is a tbl_dbi or a tbl_spark object, we can choose to validate a column schema based on R column types (e.g., "numeric", "character", etc.), SQL column types (e.g., "double", "varchar", etc.), or Spark SQL types (e.g., "DoubleType", "StringType", etc.). That option is defined in the col_schema() function (it is the .db_col_types argument).
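As a minimal sketch, a schema built with SQL column types (rather than R types) could look like the following; the specific type names shown are illustrative and depend on the database in use.

library(pointblank)

# Create a schema using SQL column
# types via the `.db_col_types`
# argument of `col_schema()`
schema_db <-
  col_schema(
    a = "BIGINT",
    b = "VARCHAR",
    .db_col_types = "sql"
  )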
There are options to make schema checking less stringent (by default, this validation operates at the highest level of strictness). With the complete option set to FALSE, we can supply a col_schema object with a partial inclusion of columns. Using in_order set to FALSE means that there is no requirement for the columns defined in the schema object to be in the same order as in the target table. Finally, setting the is_exact option to FALSE means that not all of a column's classes/types have to be provided; a column's type can even be NULL, which skips the check of that column's type.
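As a minimal sketch (using a small, illustrative tibble), the complete and in_order relaxations work like this:

library(pointblank)

tbl <- dplyr::tibble(a = 1:5, b = letters[1:5])

# `complete = FALSE`: the schema may
# describe only a subset of the
# table's columns
tbl %>% test_col_schema_match(
  col_schema(a = "integer"),
  complete = FALSE
)

# `in_order = FALSE`: schema columns
# needn't follow the table's column order
tbl %>% test_col_schema_match(
  col_schema(b = "character", a = "integer"),
  in_order = FALSE
)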
col_schema_match(
  x,
  schema,
  complete = TRUE,
  in_order = TRUE,
  is_exact = TRUE,
  actions = NULL,
  step_id = NULL,
  label = NULL,
  brief = NULL,
  active = TRUE
)

expect_col_schema_match(
  object,
  schema,
  complete = TRUE,
  in_order = TRUE,
  is_exact = TRUE,
  threshold = 1
)

test_col_schema_match(
  object,
  schema,
  complete = TRUE,
  in_order = TRUE,
  is_exact = TRUE,
  threshold = 1
)
For the validation function, the return value is either a ptblank_agent object or a table object (depending on whether an agent object or a table was passed to x). The expectation function invisibly returns its input but, in the context of testing data, the function is called primarily for its potential side effects (e.g., signaling failure). The test function returns a logical value.
x: A data frame, tibble (tbl_df or tbl_dbi), Spark DataFrame (tbl_spark), or an agent object of class ptblank_agent that is created with create_agent().
schema: A table schema of type col_schema, which can be generated using the col_schema() function.
complete: A requirement to account for all table columns in the provided schema. By default, this is TRUE, meaning that all column names in the target table must be present in the schema object. This restriction can be relaxed by using FALSE, which allows a subset of table columns to be provided in the schema.
in_order: A stringent requirement for enforcing the order of columns in the provided schema. By default, this is TRUE and the order of columns in both the schema and the target table must match. Setting this to FALSE removes the strict order requirement.
is_exact: Determines whether the check for column types should be exact or even performed at all. For example, columns in R data frames may have multiple classes (e.g., a date-time column can have both the "POSIXct" and the "POSIXt" classes). With is_exact = FALSE, the column type in the user-defined schema for a date-time value can be set as either "POSIXct" or "POSIXt" and pass validation (for that column, at least). Taking this a step further, using NULL for a column type in the user-defined schema will skip the validation check of that column's type. By default, is_exact is set to TRUE. A sketch of these behaviors appears after these argument descriptions.
actions: A list containing threshold levels so that the validation step can react accordingly when exceeding the set levels. This is to be created with the action_levels() helper function.
step_id: One or more optional identifiers for the single or multiple validation steps generated from calling a validation function. The use of step IDs serves to distinguish validation steps from each other and provide an opportunity for supplying a more meaningful label compared to the step index. By default this is NULL, and pointblank will automatically generate the step ID value (based on the step index) in this case. One or more values can be provided, and the exact number of ID values should (1) match the number of validation steps that the validation function call will produce (influenced by the number of columns provided), (2) be an ID string not used in any previous validation step, and (3) be a vector with unique values.
label: An optional label for the validation step. This label appears in the agent report and, for the best appearance, it should be kept short.
brief: An optional, text-based description for the validation step. If nothing is provided here then an autobrief is generated by the agent, using the language provided in create_agent()'s lang argument (which defaults to "en" or English). The autobrief incorporates details of the validation step so it's often the preferred option in most cases (where a label might be better suited to succinctly describe the validation).
active: A logical value indicating whether the validation step should be active. If the validation function is working with an agent, FALSE will make the validation step inactive (still reporting its presence and keeping indexes for the steps unchanged). If the validation function will be operating directly on data (no agent involvement), then any step with active = FALSE will simply pass the data through with no validation whatsoever. Aside from a logical vector, a one-sided R formula using a leading ~ can be used with . (serving as the input data table) to evaluate to a single logical value. With this approach, the pointblank function has_columns() can be used to determine whether to make a validation step active on the basis of one or more columns existing in the table (e.g., ~ . %>% has_columns(vars(d, e))); a sketch of this appears after these argument descriptions. The default for active is TRUE.
object: A data frame, tibble (tbl_df or tbl_dbi), or Spark DataFrame (tbl_spark) that serves as the target table for the expectation function or the test function.
threshold: A simple failure threshold value for use with the expectation (expect_) and the test (test_) function variants. By default, this is set to 1, meaning that any single unit of failure in data validation results in an overall test failure. Whole numbers beyond 1 indicate that any failing units up to that absolute threshold value will result in a succeeding testthat test or an evaluation to TRUE. Likewise, fractional values (between 0 and 1) act as a proportional failure threshold, where 0.15 means that 15 percent of failing test units results in an overall test failure. A sketch of the threshold's effect appears after these argument descriptions.
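To make the is_exact behavior concrete, here is a sketch of the date-time case mentioned above (the column names are illustrative):

library(pointblank)

tbl_dt <-
  dplyr::tibble(
    a = 1:3,
    dttm = as.POSIXct(c("2021-01-01", "2021-01-02", "2021-01-03"))
  )

# A date-time column carries both the
# "POSIXct" and "POSIXt" classes, so an
# exact match needs both of them
tbl_dt %>% test_col_schema_match(
  col_schema(a = "integer", dttm = c("POSIXct", "POSIXt"))
)

# With `is_exact = FALSE`, providing just
# one of the classes is enough...
tbl_dt %>% test_col_schema_match(
  col_schema(a = "integer", dttm = "POSIXct"),
  is_exact = FALSE
)

# ...and `NULL` skips the type check for
# that column entirely
tbl_dt %>% test_col_schema_match(
  col_schema(a = "integer", dttm = NULL),
  is_exact = FALSE
)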
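For active, here is a sketch of the formula-based approach, where the step runs only if the named columns exist (the column names are illustrative):

library(pointblank)

tbl <- dplyr::tibble(a = 1:5, b = letters[1:5])

# The step is made active only when both
# `a` and `b` are present in the table
agent <-
  create_agent(tbl) %>%
  col_schema_match(
    schema = col_schema(a = "integer", b = "character"),
    active = ~ . %>% has_columns(vars(a, b))
  ) %>%
  interrogate()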
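For threshold, recall that this validation has exactly one test unit; a sketch of the consequences (using a deliberately mismatched schema) follows:

library(pointblank)

tbl <- dplyr::tibble(a = 1:5, b = letters[1:5])

# A schema that won't match (`b` is
# "character", not "numeric")
bad_schema <- col_schema(a = "integer", b = "numeric")

# Default `threshold = 1`: the single
# failing test unit fails the test
tbl %>% test_col_schema_match(bad_schema)

# `threshold = 2`: one failing unit stays
# below the threshold, so this is TRUE
tbl %>% test_col_schema_match(bad_schema, threshold = 2)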
Often, we will want to specify actions for the validation. This argument, present in every validation function, takes a specially-crafted list object that is best produced by the action_levels() function. Read that function's documentation for the lowdown on how to create reactions to above-threshold failure levels in validation. The basic gist is that you'll want at least a single threshold level (specified as either the fraction of test units failed, or, an absolute value), often using the warn_at argument. Using action_levels(warn_at = 1) or action_levels(stop_at = 1) are good choices depending on the situation (the first produces a warning, the other stop()s).
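As a sketch, pairing this validation step with a warn-level reaction might look like this:

library(pointblank)

tbl <- dplyr::tibble(a = 1:5, b = letters[1:5])

# Warn (rather than stop) when the
# single test unit fails
agent <-
  create_agent(tbl) %>%
  col_schema_match(
    schema = col_schema(a = "integer", b = "character"),
    actions = action_levels(warn_at = 1)
  ) %>%
  interrogate()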
Want to describe this validation step in some detail? Keep in mind that this is only useful if x is an agent. If that's the case, brief the agent with some text that fits. Don't worry if you don't want to do it. The autobrief protocol is kicked in when brief = NULL and a simple brief will then be automatically generated.
A pointblank agent can be written to YAML with yaml_write() and the resulting YAML can be used to regenerate an agent (with yaml_read_agent()) or interrogate the target table (via yaml_agent_interrogate()). When col_schema_match() is represented in YAML (under the top-level steps key as a list member), the syntax closely follows the signature of the validation function. Here is an example of how a complex call of col_schema_match() as a validation step is expressed in R code and in the corresponding YAML representation.
# R statement
agent %>%
  col_schema_match(
    schema = col_schema(
      a = "integer",
      b = "character"
    ),
    complete = FALSE,
    in_order = FALSE,
    is_exact = FALSE,
    actions = action_levels(stop_at = 1),
    label = "The `col_schema_match()` step.",
    active = FALSE
  )

# YAML representation
steps:
- col_schema_match:
    schema:
      a: integer
      b: character
    complete: false
    in_order: false
    is_exact: false
    actions:
      stop_count: 1.0
    label: The `col_schema_match()` step.
    active: false
In practice, both of these will often be shorter as only the schema argument requires a value. Arguments with default values won't be written to YAML when using yaml_write() (though it is acceptable to include them with their default when generating the YAML by other means). It is also possible to preview the transformation of an agent to YAML without any writing to disk by using the yaml_agent_string() function.
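As a sketch, previewing and round-tripping an agent through YAML might look like the following (the file name is illustrative; a table-prep formula is used so the YAML-based agent can re-create the target table on its own):

library(pointblank)

# Supplying `tbl` as a one-sided formula
# lets the regenerated agent materialize
# the target table later
agent <-
  create_agent(tbl = ~ dplyr::tibble(a = 1:5, b = letters[1:5])) %>%
  col_schema_match(col_schema(a = "integer", b = "character"))

# Preview the YAML without writing to disk
yaml_agent_string(agent)

# Write the YAML, then regenerate the
# agent from the file
yaml_write(agent, filename = "agent.yml")
agent_2 <- yaml_read_agent("agent.yml")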
Function ID: 2-30
Other validation functions: col_exists(), col_is_character(), col_is_date(), col_is_factor(), col_is_integer(), col_is_logical(), col_is_numeric(), col_is_posix(), col_vals_between(), col_vals_decreasing(), col_vals_equal(), col_vals_expr(), col_vals_gte(), col_vals_gt(), col_vals_in_set(), col_vals_increasing(), col_vals_lte(), col_vals_lt(), col_vals_make_set(), col_vals_make_subset(), col_vals_not_between(), col_vals_not_equal(), col_vals_not_in_set(), col_vals_not_null(), col_vals_null(), col_vals_regex(), col_vals_within_spec(), conjointly(), row_count_match(), rows_complete(), rows_distinct(), serially(), specially(), tbl_match()
# For all examples here, we'll use
# a simple table with two columns:
# one `integer` (`a`) and the other
# `character` (`b`); the following
# examples will validate that the
# table columns match a schema
# object as created by `col_schema()`
tbl <-
  dplyr::tibble(
    a = 1:5,
    b = letters[1:5]
  )
tbl
# Create a column schema object with
# the helper function `col_schema()`
# that describes the columns and
# their types (in the expected order)
schema_obj <-
  col_schema(
    a = "integer",
    b = "character"
  )
# A: Using an `agent` with validation
# functions and then `interrogate()`
# Validate that the schema object
# `schema_obj` exactly defines
# the column names and column types
agent <-
  create_agent(tbl) %>%
  col_schema_match(schema_obj) %>%
  interrogate()
# Determine if this validation
# had no failing test units (there is
# a single test unit governed by
# whether there is a match)
all_passed(agent)
# Calling `agent` in the console
# prints the agent's report; but we
# can get a `gt_tbl` object directly
# with `get_agent_report(agent)`
# B: Using the validation function
# directly on the data (no `agent`)
# This way of using validation functions
# acts as a data filter: data is passed
# through but should `stop()` if there
# is a single test unit failing; the
# behavior of side effects can be
# customized with the `actions` option
tbl %>% col_schema_match(schema_obj)
# C: Using the expectation function
# With the `expect_*()` form, we would
# typically perform one validation at a
# time; this is primarily used in
# testthat tests
expect_col_schema_match(tbl, schema_obj)
# D: Using the test function
# With the `test_*()` form, we should
# get a single logical value returned
# to us
tbl %>% test_col_schema_match(schema_obj)