update_snapshot() makes it easy to create and update a historical data table on a remote (SQL) server.
The function takes the data (.data) as it looks at a given point in time (timestamp) and then updates
(or creates) a remote table identified by db_table.
This update only stores the changes between the new data (.data) and the data currently stored on the remote.
This way, the data can be reconstructed as it looked at any point in time while taking up as little space as possible.
See vignette("basic-principles") for a further introduction to the function.
update_snapshot(
  .data,
  conn,
  db_table,
  timestamp,
  filters = NULL,
  message = NULL,
  tic = Sys.time(),
  logger = NULL,
  enforce_chronological_order = TRUE,
  collapse_continuous_records = FALSE
)
No return value, called for side effects.
.data (data.frame(1), tibble(1), data.table(1), or tbl_dbi(1))
Data object.

conn (DBIConnection(1))
Connection object.

db_table (id-like object(1))
A table specification (coercible by id()).

timestamp (POSIXct(1), Date(1), or character(1))
The timestamp describing the data being processed (not the current time).

filters (data.frame(1), tibble(1), data.table(1), or tbl_dbi(1))
An object to subset .data by.
If filters is NULL, no filtering occurs.
Otherwise, an inner_join() is performed using all columns of the filter object.

message (character(1))
A message to add to the log file (useful for supplying metadata to the log).

tic (POSIXct(1))
A timestamp for when computation began. If not supplied, it is created at call time
(used to more accurately convey the runtime of the update process).

logger (Logger(1))
A configured logging object. If none is given, one is initialized with default arguments.

enforce_chronological_order (logical(1))
Are updates allowed if they are chronologically earlier than the latest update?

collapse_continuous_records (logical(1))
Should records where the from/until time stamps are equal be checked for and deleted?
Forced to TRUE when enforce_chronological_order is FALSE.
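As a sketch of how the filters argument can be used (assuming SCDB and RSQLite are installed, and that get_connection() with no arguments yields a local SQLite connection; the filter column "cyl" is purely illustrative):

```r
library(SCDB)

conn <- get_connection()
data <- dplyr::copy_to(conn, mtcars)

# Only rows matching the filter are considered during the update:
# an inner_join() is performed on all columns of the filter object.
cyl_filter <- dplyr::copy_to(conn, data.frame(cyl = c(4, 6)), "cyl_filter")

update_snapshot(
  data,
  conn = conn,
  db_table = "test.mtcars_subset",
  timestamp = Sys.time(),
  filters = cyl_filter
)

close_connection(conn)
```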
The most common use case is having consecutive snapshots of a dataset and wanting to store the changes between
them. If you have a special case where you want to insert data that is not consecutive, you can set
enforce_chronological_order to FALSE. This will allow you to insert data that is earlier than the latest
time stamp.
If you have several updates in a single day and use Date rather than POSIXct as your time stamp, you
may end up with records where from_ts and until_ts are equal. These records are not normally accessible with
get_table(), and you may want to prevent them using collapse_continuous_records = TRUE.
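A minimal sketch of a non-chronological (back-filled) insert, under the same assumptions as the examples below (SCDB with a local SQLite connection from get_connection(); the dates are illustrative):

```r
library(SCDB)

conn <- get_connection()
data <- dplyr::copy_to(conn, mtcars)

# Record today's snapshot first...
update_snapshot(data, conn = conn, db_table = "test.mtcars",
                timestamp = Sys.Date())

# ...then back-fill an earlier snapshot. Since this timestamp is earlier
# than the latest update, the chronological-order check must be disabled
# (collapse_continuous_records is then forced to TRUE).
update_snapshot(head(data, 10), conn = conn, db_table = "test.mtcars",
                timestamp = Sys.Date() - 7,
                enforce_chronological_order = FALSE)

close_connection(conn)
```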
See also: filter_keys
if (requireNamespace("RSQLite", quietly = TRUE)) {
  conn <- get_connection()

  data <- dplyr::copy_to(conn, mtcars)

  # Copy the first 3 records
  update_snapshot(
    head(data, 3),
    conn = conn,
    db_table = "test.mtcars",
    timestamp = Sys.time()
  )

  # Update with the first 5 records
  update_snapshot(
    head(data, 5),
    conn = conn,
    db_table = "test.mtcars",
    timestamp = Sys.time()
  )

  dplyr::tbl(conn, "test.mtcars")

  close_connection(conn)
}
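To complement the example above, reconstructing the table as it looked at an earlier point in time can be sketched with get_table(), which is referenced in the details above (this assumes get_table() accepts a slicing timestamp; the timestamps here are illustrative):

```r
library(SCDB)

conn <- get_connection()
data <- dplyr::copy_to(conn, mtcars)

ts1 <- Sys.time()
update_snapshot(head(data, 3), conn = conn,
                db_table = "test.mtcars", timestamp = ts1)

ts2 <- Sys.time()
update_snapshot(head(data, 5), conn = conn,
                db_table = "test.mtcars", timestamp = ts2)

# Current view of the table (5 records)
get_table(conn, "test.mtcars")

# The table as it looked at ts1 (3 records)
get_table(conn, "test.mtcars", slice_ts = ts1)

close_connection(conn)
```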