Feature Transformation -- ChiSqSelector (Estimator)

Chi-Squared feature selection, which selects categorical features to use for predicting a categorical label.

ft_chisq_selector(x, features_col = "features", output_col = NULL,
  label_col = "label", selector_type = "numTopFeatures", fdr = 0.05,
  fpr = 0.05, fwe = 0.05, num_top_features = 50, percentile = 0.1,
  dataset = NULL, uid = random_string("chisq_selector_"), ...)

Arguments

x: A spark_connection, ml_pipeline, or a tbl_spark.


features_col: Features column name, as a length-one character vector. The column should be a single vector column of numeric values. Usually this column is output by ft_r_formula.


output_col: The name of the output column.


label_col: Label column name. The column should be a numeric column. Usually this column is output by ft_r_formula.


selector_type: (Spark 2.1.0+) The selector type of the ChisqSelector. Supported options: "numTopFeatures" (default), "percentile", "fpr", "fdr", "fwe".


fdr: (Spark 2.2.0+) The upper bound of the expected false discovery rate. Only applicable when selector_type = "fdr". Default value is 0.05.


fpr: (Spark 2.1.0+) The highest p-value for features to be kept. Only applicable when selector_type = "fpr". Default value is 0.05.


fwe: (Spark 2.2.0+) The upper bound of the expected family-wise error rate. Only applicable when selector_type = "fwe". Default value is 0.05.


num_top_features: Number of features that the selector will select, ordered by ascending p-value. If the number of features is less than num_top_features, then all features are selected. Only applicable when selector_type = "numTopFeatures". Default value is 50.


percentile: (Spark 2.1.0+) Percentile of features that the selector will select, ordered by descending statistic value. Only applicable when selector_type = "percentile". Default value is 0.1.


dataset: (Optional) A tbl_spark. If provided, eagerly fit the (estimator) feature "transformer" against dataset. See Details.


uid: A character string used to uniquely identify the feature transformer.


...: Optional arguments; currently unused.


Details

When dataset is provided for an estimator transformer, the function internally calls ml_fit() against dataset. Hence, the methods for spark_connection and ml_pipeline will then return a ml_transformer and a ml_pipeline with a ml_transformer appended, respectively. When x is a tbl_spark, the estimator will be fit against dataset before transforming x.

When dataset is not specified, the constructor returns a ml_estimator, and, in the case where x is a tbl_spark, the estimator is fit against x to obtain a transformer, which is then immediately used to transform x.
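The two construction modes described above can be sketched as follows. This is a hedged illustration, not reproduced from the package documentation: it assumes an active connection sc from spark_connect() and a tbl_spark named prepared_tbl that already has "features" and "label" columns from ft_r_formula (both names are illustrative).

```r
library(sparklyr)

# Lazy: no dataset supplied, so the spark_connection method returns a
# ml_estimator that can be fit later with ml_fit() or used in a pipeline.
selector <- ft_chisq_selector(
  sc,
  features_col = "features",
  output_col = "selected",
  num_top_features = 2
)

# Eager: supplying dataset makes the function call ml_fit() internally,
# so the same call returns a fitted ml_transformer instead.
fitted_selector <- ft_chisq_selector(
  sc,
  features_col = "features",
  output_col = "selected",
  num_top_features = 2,
  dataset = prepared_tbl
)
```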


Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns a ml_transformer, a ml_estimator, or one of their subclasses. The object contains a pointer to a Spark Transformer or Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the transformer or estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, a transformer is constructed then immediately applied to the input tbl_spark, returning a tbl_spark.

See Also

See http://spark.apache.org/docs/latest/ml-features.html for more information on the set of transformations available for DataFrame columns in Spark.

Other feature transformers: ft_binarizer, ft_bucketizer, ft_count_vectorizer, ft_dct, ft_elementwise_product, ft_feature_hasher, ft_hashing_tf, ft_idf, ft_imputer, ft_index_to_string, ft_interaction, ft_lsh, ft_max_abs_scaler, ft_min_max_scaler, ft_ngram, ft_normalizer, ft_one_hot_encoder, ft_pca, ft_polynomial_expansion, ft_quantile_discretizer, ft_r_formula, ft_regex_tokenizer, ft_sql_transformer, ft_standard_scaler, ft_stop_words_remover, ft_string_indexer, ft_tokenizer, ft_vector_assembler, ft_vector_indexer, ft_vector_slicer, ft_word2vec
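The following end-to-end sketch shows typical usage against a local connection. It is an illustrative example, not one reproduced from this page; the dataset and formula are assumptions (copy_to() renames iris columns such as Petal.Length to Petal_Length), and note that Spark's ChiSqSelector is intended for categorical features, so continuous inputs like these are used here only to keep the sketch self-contained.

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
iris_tbl <- copy_to(sc, iris, overwrite = TRUE)

# ft_r_formula assembles a "features" vector column and a numeric "label"
# column; ft_chisq_selector then keeps the single feature with the lowest
# chi-squared p-value in a new "selected" column.
iris_tbl %>%
  ft_r_formula(Species ~ Petal_Length + Petal_Width) %>%
  ft_chisq_selector(
    features_col = "features",
    output_col = "selected",
    num_top_features = 1
  )

spark_disconnect(sc)
```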

Aliases

  • ft_chisq_selector
Documentation reproduced from package sparklyr, version 0.9.2, License: Apache License 2.0 | file LICENSE
