robotstxt


A 'robots.txt' Parser and 'Webbot'/'Spider'/'Crawler' Permissions Checker

Status

lines of R code: 455, lines of test code: 361

Development version

0.4.1 - 2017-08-27 / 08:42:33

Description

Provides functions to download and parse 'robots.txt' files. Ultimately the package makes it easy to check if bots (spiders, crawlers, scrapers, ...) are allowed to access specific resources on a domain.
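The download-and-parse workflow described above can be sketched with the lower-level functions get_robotstxt() and parse_robotstxt() (both listed in the function index below). This is a minimal, hedged example; the exact fields of the parsed result may differ between package versions.

library(robotstxt)

# download the robots.txt file of a domain as plain text ...
rt_text   <- get_robotstxt(domain = "wikipedia.org")

# ... and parse it into its components (user agents, permissions, ...)
rt_parsed <- parse_robotstxt(rt_text)
rt_parsed$permissions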

License

MIT + file LICENSE

Authors: Peter Meissner [aut, cre], Oliver Keys [ctb], Rich Fitz John [ctb]

Citation

citation("robotstxt")

BibTeX for citing

toBibtex(citation("robotstxt"))

Contribution - AKA The-Think-Twice-Be-Nice-Rule

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms:

As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.

We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion.

Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory comments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team.

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers.

This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.0.0, available at http://contributor-covenant.org/version/1/0/0/

Installation

Installation and start - stable version

install.packages("robotstxt")
library(robotstxt)

Installation and start - development version

devtools::install_github("ropenscilabs/robotstxt")
library(robotstxt)

Usage

Robotstxt class documentation

?robotstxt

Simple path access right checking ...

library(robotstxt)

paths_allowed(
  paths  = c("/api/rest_v1/?doc", "/w/"), 
  domain = "wikipedia.org", 
  bot    = "*"
)
## [1]  TRUE FALSE

paths_allowed(
  paths = c(
    "https://wikipedia.org/api/rest_v1/?doc", 
    "https://wikipedia.org/w/"
  )
)
## [1]  TRUE FALSE

... or use it this way ...

library(robotstxt)

rtxt <- robotstxt(domain = "wikipedia.org")
rtxt$check(paths = c("/api/rest_v1/?doc", "/w/"), bot = "*")
## /api/rest_v1/?doc               /w/ 
##              TRUE             FALSE
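The object returned by robotstxt() also stores the downloaded and parsed content, so the same information can be inspected directly. A hedged sketch (the field names below are assumptions based on the class documentation, see ?robotstxt):

rtxt$text         # raw content of the downloaded robots.txt file
rtxt$bots         # user agents mentioned in the file
rtxt$permissions  # parsed allow/disallow rules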

More information

Have a look at the vignette at https://cran.r-project.org/package=robotstxt/vignettes/using_robotstxt.html


Functions in robotstxt (0.4.1)

%>% - re-export magrittr pipe operator
print.robotstxt - printing robotstxt
print.robotstxt_text - printing robotstxt_text
remove_domain - function to remove domain from path
sanitize_permissions - transforming permissions into regular expressions (whole permission)
named_list - make automatically named list
parse_robotstxt - function parsing robots.txt
robotstxt - generate a representation of a robots.txt file
rt_cache - get_robotstxt() cache
path_allowed - check if a bot has permission to access a page
paths_allowed - check if a bot has permission to access page(s)
rt_get_comments - extracting comments from robots.txt
rt_get_fields - extracting permissions from robots.txt
guess_domain - function guessing domain from path
is_valid_robotstxt - function that checks if a file is a valid / parsable robots.txt file
rt_get_fields_worker - extracting robotstxt fields
rt_get_rtxt - load robots.txt files saved along with the package
get_robotstxt - downloading robots.txt file
get_robotstxt_http_get - get_robotstxt() worker function to execute HTTP request
rt_get_useragent - extracting HTTP user agents from robots.txt
rt_list_rtxt - list robots.txt files saved along with the package
sanitize_path - making paths uniform
sanitize_permission_values - transforming permissions into regular expressions (values)
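As a hedged illustration of some of the helpers listed above: the package ships example robots.txt files that can be listed with rt_list_rtxt(), loaded with rt_get_rtxt(), and checked with is_valid_robotstxt(). Exact file names and return values depend on the package version.

library(robotstxt)

# list the example robots.txt files shipped with the package ...
rt_list_rtxt()

# ... load the first one and check whether it is a valid / parsable robots.txt file
txt <- rt_get_rtxt(rt_list_rtxt()[1])
is_valid_robotstxt(txt)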