
A wrapper around the great jsonlite::parse_json(). The differences are:

- expose the argument bigint_as_char with default TRUE.
- control how to handle NA and NULL.
- simplifyDataFrame, simplifyMatrix, and flatten default to FALSE as they are not very stable in many real world APIs. Use the tibblify package for a more robust conversion to a data frame.
- don't collapse strings but error instead if they have more than one element (see the sketch after this list).
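For illustration, a minimal sketch of that last point; the exact error class is not shown here, only that a multi-element input is expected to be rejected rather than collapsed:

# A vector of JSON documents is rejected; use parse_json_vector() instead
try(parse_json(c('{"a": 1}', '{"a": 2}')))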
parse_json(
x,
.na = json_na_error(),
.null = NULL,
simplifyVector = TRUE,
simplifyDataFrame = FALSE,
simplifyMatrix = FALSE,
flatten = FALSE,
bigint_as_char = bigint_default(),
...
)
x: a scalar JSON character.
.na: Value to return if x is NA. By default an error of class jsontools_error_na_json is thrown.
.null: Return the prototype of .null if x is NULL or a zero length character.
simplifyVector, simplifyDataFrame, simplifyMatrix, flatten, ...: passed on to jsonlite::parse_json.
bigint_as_char: Parse big integers as character? The option jsontools.bigint_as_char is used as the default.
An R object. The type depends on the input but is usually a list or a data frame.
To parse a vector of JSON strings, use parse_json_vector().
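A minimal sketch of that, assuming parse_json_vector() accepts a character vector of JSON strings and returns one parsed element per input:

# Hypothetical illustration of parsing several JSON documents at once
parse_json_vector(c('{"a": 1}', '{"a": 2}'))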
# Parse escaped unicode
parse_json('{"city" : "Z\\u00FCrich"}')
# big integers
big_num <- "9007199254740993"
as.character(parse_json(big_num, bigint_as_char = FALSE))
as.character(parse_json(big_num, bigint_as_char = TRUE))
# NA error by default
try(parse_json(NA))
# ... but one can specify a default value
parse_json(NA, .na = data.frame(a = 1, b = 2))
# input of size 0
parse_json(NULL)
parse_json(character(), .null = data.frame(a = 1, b = 2))
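As a further hedged sketch, the default for big integers can presumably also be set once via the jsontools.bigint_as_char option mentioned above instead of per call:

# Illustrative only: change the package-wide default, then restore it
old <- options(jsontools.bigint_as_char = TRUE)
parse_json("9007199254740993")
options(old)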