Loads the AnalysisPipeline or StreamingAnalysisPipeline object from the file system

Usage
loadPipeline(path, input = data.frame(), filePath = "")
Arguments

path: the path at which the .Rds file containing the pipeline is located
input: (optional) a data frame with which the pipeline object should be initialized
filePath: (optional) path to a dataset in .CSV format, which is to be loaded and used to initialize the object
Value

An AnalysisPipeline or StreamingAnalysisPipeline object, optionally initialized with the data frame provided
Details

The AnalysisPipeline or StreamingAnalysisPipeline object is loaded into the R session from the file system, based on the path specified.

Optionally, the input parameter can be provided to initialize the object with an R data frame (or a streaming Spark DataFrame, in the case of a StreamingAnalysisPipeline object) present in the R session.

Alternatively, a filePath can be specified where the input dataset is present (in .CSV format), and the object will be initialized with that data frame. The filePath parameter takes precedence over the input parameter. This is applicable only to AnalysisPipeline objects.

Note - When a pipeline is loaded, the existing registry is overwritten with the registry saved with the pipeline.
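As an illustrative sketch of these initialization options (assuming a pipeline previously saved with savePipeline() at './pipeline.RDS', and a hypothetical dataset at './dataset.csv'):

library(analysisPipelines)

# Initialize the loaded pipeline with a data frame already present in the R session
pipelineObj <- loadPipeline(path = "./pipeline.RDS", input = iris)

# Initialize from a .CSV file on disk instead; filePath takes precedence over input
pipelineObj <- loadPipeline(path = "./pipeline.RDS", filePath = "./dataset.csv")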
See Also

Other Package core functions: BaseAnalysisPipeline-class, MetaAnalysisPipeline-class, assessEngineSetUp, checkSchemaMatch, createPipelineInstance, exportAsMetaPipeline, generateOutput, genericPipelineException, getInput, getLoggerDetails, getOutputById, getPipelinePrototype, getPipeline, getRegistry, initDfBasedOnType, initialize,BaseAnalysisPipeline-method, loadMetaPipeline, loadPredefinedFunctionRegistry, loadRegistry, prepExecution, registerFunction, savePipeline, saveRegistry, setInput, setLoggerDetails, updateObject, visualizePipeline
Examples

## Not run:
library(analysisPipelines)
loadPipeline(path = "./pipeline.RDS")
## End(Not run)