SparkR (version 2.4.6)

filter: Filter

Description

Filter the rows of a SparkDataFrame according to a given condition.

Usage

filter(x, condition)

where(x, condition)

# S4 method for signature 'SparkDataFrame,characterOrColumn'
filter(x, condition)

# S4 method for signature 'SparkDataFrame,characterOrColumn'
where(x, condition)

Arguments

x

A SparkDataFrame to be filtered.

condition

The condition to filter on. This may be either a Column expression or a string containing a SQL expression (as in a WHERE clause).

Value

A SparkDataFrame containing only the rows that meet the condition.

See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()

Other subsetting functions: select(), subset()

Examples

# Start a SparkR session
sparkR.session()

# Read a JSON file into a SparkDataFrame (placeholder path)
path <- "path/to/file.json"
df <- read.json(path)

# Filter using a string containing a SQL expression
filter(df, "col1 > 0")

# Filter using a Column expression
filter(df, df$col2 != "abcdefg")
