Scrape and Download Excel xlsx Files from a Web Page
Description

Called for the side effect of downloading the Excel xlsx files linked from a web page.

Usage

xlsx_scrap(link, path = getwd(), askRobot = FALSE)
Arguments

link: the URL of the web page.

path: the path where the Excel xlsx files are saved. Defaults to the current working directory (getwd()).

askRobot: logical. Should the site's robots.txt be consulted to check whether scraping the page is allowed? Default is FALSE.
Examples

if (FALSE) {
  xlsx_scrap(
    link = "https://www.rieter.com/investor-relations/results-and-presentations/financial-statements"
  )
}
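A minimal sketch of the same call with the optional arguments set explicitly, following the signature shown under Usage. The destination folder "~/reports" is hypothetical; the call performs network access, so it is wrapped in if (FALSE) as in the example above.

```r
if (FALSE) {
  # Download every xlsx file linked from the page into a custom folder,
  # consulting robots.txt first before scraping.
  xlsx_scrap(
    link = "https://www.rieter.com/investor-relations/results-and-presentations/financial-statements",
    path = "~/reports",       # hypothetical destination directory
    askRobot = TRUE           # check robots.txt before downloading
  )
}
```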