
coalitions (version 0.6.12)

scrape_wahlrecht: Scrape surveys for German general election

Description

Scrapes survey tables and sanitizes them to output tidy data.

Usage

scrape_wahlrecht(
  address = "https://www.wahlrecht.de/umfragen/emnid.htm",
  parties = c("CDU", "SPD", "GRUENE", "FDP", "LINKE", "PIRATEN", "FW", "AFD",
    "SONSTIGE")
)

scrape_by(
  address = "https://www.wahlrecht.de/umfragen/landtage/bayern.htm",
  parties = c("CSU", "SPD", "GRUENE", "FDP", "LINKE", "PIRATEN", "FW", "AFD",
    "SONSTIGE")
)

scrape_ltw(
  address = "https://www.wahlrecht.de/umfragen/landtage/niedersachsen.htm",
  parties = c("CDU", "SPD", "GRUENE", "FDP", "LINKE", "PIRATEN", "FW", "AFD",
    "SONSTIGE"),
  ind_row_remove = -c(1:2)
)

Arguments

address

URL of the page from which the survey tables should be scraped.

parties

A character vector containing names of parties to collapse.

ind_row_remove

A vector of negative row indices indicating rows to skip at the beginning of the table (e.g. `-c(1:2)` drops the first two rows).
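
For illustration, the `parties` argument can be narrowed to track only a subset of parties. This is a sketch, assuming the default Emnid address; the exact column names available depend on the table at the given address, and how parties omitted from the vector are handled follows the package's collapsing behavior:

```r
library(coalitions)
library(dplyr)
# Hypothetical subset of parties; columns present depend on the scraped table.
emnid <- scrape_wahlrecht(
  parties = c("CDU", "SPD", "GRUENE", "AFD", "SONSTIGE")
)
emnid %>% slice(1:5)
```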

Examples

# NOT RUN {
library(coalitions)
library(dplyr)
# select a polling agency from .pollster_df that should be scraped ...
coalitions:::.pollster_df
# ... here we choose Forsa
address <- coalitions:::.pollster_df %>% filter(pollster == "forsa") %>% pull(address)
scrape_wahlrecht(address = address) %>% slice(1:5)
# }
# NOT RUN {
# Niedersachsen
scrape_ltw() %>% slice(1:5)
# Hessen
scrape_ltw("http://www.wahlrecht.de/umfragen/landtage/hessen.htm", ind_row_remove=-c(1)) %>%
 slice(1:5)
# }
