Wrapper around path_allowed(): check whether a bot has permission to access one or more paths.
paths_allowed(paths = "/", domain = "auto", bot = "*",
user_agent = utils::sessionInfo()$R.version$version.string,
check_method = c("robotstxt", "spiderbar"), warn = TRUE, force = FALSE,
ssl_verifypeer = c(1, 0), use_futures = TRUE, robotstxt_list = NULL)
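A minimal sketch of a call, assuming the robotstxt package is installed; "example.com" and the paths shown are placeholders, not values taken from this documentation:

library(robotstxt)

# check several paths for the default bot "*";
# the result should be one TRUE/FALSE value per path
paths_allowed(
  paths  = c("/", "/images/"),
  domain = "example.com",
  bot    = "*"
)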
paths: paths for which to check the bot's permission, defaults to "/"
domain: domain for which paths should be checked. Defaults to "auto". If set to "auto", the function will try to guess the domain by parsing the paths argument. Note, however, that these are educated guesses which might utterly fail. To be on the safe side, provide appropriate domains manually.
bot: name of the bot, defaults to "*"
user_agent: HTTP user-agent string to be used to retrieve the robots.txt file from the domain
check_method: which method to use for checking -- either "robotstxt" for the package's own method or "spiderbar" for spiderbar::can_fetch. Note that spiderbar is currently considered less accurate: its algorithm only takes into consideration the rules for * or for a particular bot, but does not merge rule sets together (for an example, see the file at paste0(system.file("robotstxts", package = "robotstxt"), "/selfhtml_Example.txt")).
warn: warn about being unable to download domain/robots.txt, e.g. because of an HTTP response status of 404
force: if TRUE, instead of using possibly cached results the function will re-download the robots.txt file
ssl_verifypeer: analogous to the libcurl option CURLOPT_SSL_VERIFYPEER (https://curl.haxx.se/libcurl/c/CURLOPT_SSL_VERIFYPEER.html); might help with robots.txt file retrieval in some cases
use_futures: should future::future_lapply be used for possible parallel/asynchronous retrieval or not. Note: check the help pages and vignettes of the future package on how to set up plans for future execution, because the robotstxt package does not do this on its own (see the sketch after this argument list).
robotstxt_list: either NULL -- the default -- or a list of character vectors with one vector per path to check
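A hedged sketch of parallel/asynchronous retrieval via use_futures; the URLs are placeholders, and plan(multisession) is just one possible plan -- consult the future package documentation for alternatives:

library(robotstxt)
library(future)

# the robotstxt package does not set up a future plan on its own,
# so choose one yourself before calling paths_allowed()
plan(multisession)

paths_allowed(
  paths       = c("https://example.com/", "https://example.org/about/"),
  domain      = "auto",   # domains are guessed from the paths
  use_futures = TRUE
)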