ft_count_vectorizer
Feature Transformation -- CountVectorizer (Estimator)
Extracts a vocabulary from document collections.
Usage
ft_count_vectorizer(x, input_col, output_col, binary = FALSE, min_df = 1,
min_tf = 1, vocab_size = as.integer(2^18), dataset = NULL,
uid = random_string("count_vectorizer_"), ...)
Arguments
- x
A spark_connection, ml_pipeline, or a tbl_spark.
- input_col
The name of the input column.
- output_col
The name of the output column.
- binary
Binary toggle to control the output vector values. If TRUE, all nonzero counts (after the min_tf filter is applied) are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts. Default: FALSE. See the sketch after this argument list.
- min_df
Specifies the minimum number of different documents a term must appear in to be included in the vocabulary. If this is an integer greater than or equal to 1, this specifies the number of documents the term must appear in; if this is a double in [0,1), then this specifies the fraction of documents. Default: 1.
- min_tf
Filter to ignore rare words in a document. For each document, terms with frequency/count less than the given threshold are ignored. If this is an integer greater than or equal to 1, then this specifies a count (of times the term must appear in the document); if this is a double in [0,1), then this specifies a fraction (out of the document's token count). Default: 1.
- vocab_size
Build a vocabulary that only considers the top vocab_size terms ordered by term frequency across the corpus. Default: 2^18.
- dataset
(Optional) A tbl_spark. If provided, eagerly fit the (estimator) feature "transformer" against dataset. See Details.
- uid
A character string used to uniquely identify the feature transformer.
- ...
Optional arguments; currently unused.
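As a minimal sketch of how these arguments fit together, the snippet below tokenizes a toy corpus and then count-vectorizes the token column; the local connection, the docs data frame, and its column names are hypothetical and only for illustration.

library(sparklyr)
library(dplyr)

# Hypothetical local connection and toy corpus.
sc <- spark_connect(master = "local")

docs <- data.frame(
  id   = 1:3,
  text = c("spark spark ml", "count vectorizer example", "spark example"),
  stringsAsFactors = FALSE
)
docs_tbl <- copy_to(sc, docs, overwrite = TRUE)

# Tokenize first, then count-vectorize the token column.
# min_df = 2 keeps only terms appearing in at least two documents;
# binary = TRUE turns the surviving counts into 0/1 indicators.
docs_tbl %>%
  ft_tokenizer(input_col = "text", output_col = "tokens") %>%
  ft_count_vectorizer(
    input_col  = "tokens",
    output_col = "features",
    min_df     = 2,
    binary     = TRUE
  )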
Details
When dataset is provided for an estimator transformer, the function internally calls ml_fit() against dataset. Hence, the methods for spark_connection and ml_pipeline will then return an ml_transformer and an ml_pipeline with an ml_transformer appended, respectively. When x is a tbl_spark, the estimator will be fit against dataset before transforming x.
When dataset is not specified, the constructor returns an ml_estimator, and, in the case where x is a tbl_spark, the estimator is fit against x to obtain a transformer, which is then immediately used to transform x.
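To make the two constructions concrete, here is a sketch reusing the hypothetical sc and docs_tbl from the snippet above: supplying dataset appends an already-fitted transformer to the pipeline, while omitting it appends the estimator, which is fit only when ml_fit() is called on the pipeline.

tokens_tbl <- docs_tbl %>%
  ft_tokenizer(input_col = "text", output_col = "tokens")

# Estimator appended: fitting is deferred until ml_fit() is called
# on the pipeline as a whole.
pipeline_lazy <- ml_pipeline(sc) %>%
  ft_count_vectorizer(input_col = "tokens", output_col = "features")

# Transformer appended: ml_fit() is invoked internally against `dataset`,
# so the vocabulary is already fixed when the pipeline is constructed.
pipeline_eager <- ml_pipeline(sc) %>%
  ft_count_vectorizer(
    input_col  = "tokens",
    output_col = "features",
    dataset    = tokens_tbl
  )

# The lazy pipeline is fit explicitly, then used to transform data.
fitted <- ml_fit(pipeline_lazy, tokens_tbl)
ml_transform(fitted, tokens_tbl)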
Value
The object returned depends on the class of x.
- spark_connection: When x is a spark_connection, the function returns an ml_transformer, an ml_estimator, or one of their subclasses. The object contains a pointer to a Spark Transformer or Estimator object and can be used to compose Pipeline objects.
- ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the transformer or estimator appended to the pipeline.
- tbl_spark: When x is a tbl_spark, a transformer is constructed then immediately applied to the input tbl_spark, returning a tbl_spark.
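As an illustration of the spark_connection case (again assuming the hypothetical sc and tokens_tbl from the sketches above), the returned estimator can be fit and applied explicitly:

# Calling the function on the connection returns an (unfitted) estimator ...
cv_estimator <- ft_count_vectorizer(
  sc,
  input_col  = "tokens",
  output_col = "features"
)

# ... which can be fit against a tbl_spark to obtain a transformer,
# and the transformer then applied to produce the term-count features.
cv_model <- ml_fit(cv_estimator, tokens_tbl)
ml_transform(cv_model, tokens_tbl)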
See Also
See http://spark.apache.org/docs/latest/ml-features.html for more information on the set of transformations available for DataFrame columns in Spark.
Other feature transformers: ft_binarizer, ft_bucketizer, ft_chisq_selector, ft_dct, ft_elementwise_product, ft_hashing_tf, ft_idf, ft_imputer, ft_index_to_string, ft_interaction, ft_lsh, ft_max_abs_scaler, ft_min_max_scaler, ft_ngram, ft_normalizer, ft_one_hot_encoder, ft_pca, ft_polynomial_expansion, ft_quantile_discretizer, ft_r_formula, ft_regex_tokenizer, ft_sql_transformer, ft_standard_scaler, ft_stop_words_remover, ft_string_indexer, ft_tokenizer, ft_vector_assembler, ft_vector_indexer, ft_vector_slicer, ft_word2vec