Download the robots.txt file of a given domain.

Usage:
get_robotstxt(domain, warn = TRUE, force = FALSE,
user_agent = utils::sessionInfo()$R.version$version.string,
ssl_verifypeer = c(1, 0))
Arguments:

domain: domain from which to download the robots.txt file
warn: whether to warn about being unable to download domain/robots.txt, e.g. because of an HTTP response status 404

force: if TRUE, the function re-downloads the robots.txt file instead of using possibly cached results
user_agent: HTTP user-agent string to be used to retrieve the robots.txt file from the domain
ssl_verifypeer: analogous to the curl option CURLOPT_SSL_VERIFYPEER (https://curl.haxx.se/libcurl/c/CURLOPT_SSL_VERIFYPEER.html); setting it to 0 disables peer certificate verification, which might help with robots.txt file retrieval in some cases
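
Examples:

A minimal usage sketch; "example.com" is a placeholder domain, and the calls assume the robotstxt package is installed.

library(robotstxt)

# download the robots.txt file of a domain (results may be cached)
rtxt <- get_robotstxt("example.com")

# force a fresh download instead of reusing a possibly cached copy
rtxt_fresh <- get_robotstxt("example.com", force = TRUE)

# disable peer certificate verification, which may help
# when SSL problems block retrieval
rtxt_nossl <- get_robotstxt("example.com", ssl_verifypeer = 0)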