Word2Vec transforms a word into a code (a fixed-length numeric vector) for use in further natural language processing or machine learning.
ft_word2vec(x, input_col = NULL, output_col = NULL,
  vector_size = 100, min_count = 5, max_sentence_length = 1000,
  num_partitions = 1, step_size = 0.025, max_iter = 1, seed = NULL,
  uid = random_string("word2vec_"), ...)

ml_find_synonyms(model, word, num)
A spark_connection, ml_pipeline, or a tbl_spark.
The name of the input column.
The name of the output column.
The dimension of the code (word vector) into which words are transformed. Default: 100
The minimum number of times a token must appear to be included in the word2vec model's vocabulary. Default: 5
(Spark 2.0.0+) Sets the maximum length (in words) of each sentence
in the input data. Any sentence longer than this threshold will be divided into
chunks of up to max_sentence_length size. Default: 1000
Number of partitions for sentences of words. Default: 1
Step size to be used for each iteration of optimization (> 0). Default: 0.025
The maximum number of iterations to use.
A random seed. Set this value if you need your results to be reproducible across repeated calls.
A character string used to uniquely identify the feature transformer.
Optional arguments; currently unused.
A fitted Word2Vec model, returned by ft_word2vec().
A word, as a length-one character vector.
Number of words closest in similarity to the given word to find.
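As a minimal sketch of applying ft_word2vec() directly to a tbl_spark, the following assumes a local Spark connection sc and a small, hypothetical corpus; the column names (text, words, result) are illustrative, not required.

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# hypothetical toy corpus; any tbl_spark with a character column will do
sentences <- data.frame(
  text = c(
    "Hi I heard about Spark",
    "I wish Java could use case classes",
    "Logistic regression models are neat"
  ),
  stringsAsFactors = FALSE
)
sentences_tbl <- copy_to(sc, sentences, overwrite = TRUE)

# tokenize the text, then fit and apply Word2Vec in one step:
# each row of `result` holds a vector_size-dimensional document vector
sentences_tbl %>%
  ft_tokenizer(input_col = "text", output_col = "words") %>%
  ft_word2vec(
    input_col = "words",
    output_col = "result",
    vector_size = 3,
    min_count = 0
  )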
The object returned depends on the class of x.
spark_connection: When x is a spark_connection, the function returns an ml_transformer,
  an ml_estimator, or one of their subclasses. The object contains a pointer to
  a Spark Transformer or Estimator object and can be used to compose
  Pipeline objects.
ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with
  the transformer or estimator appended to the pipeline (see the pipeline sketch at the end of this page).
tbl_spark: When x is a tbl_spark, a transformer is constructed and then
  immediately applied to the input tbl_spark, returning a tbl_spark.
ml_find_synonyms() returns a DataFrame of synonyms and cosine similarities.
In the case where x is a tbl_spark, the estimator fits against x
  to obtain a transformer, which is then immediately used to transform x, returning a tbl_spark.
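To query synonyms you need the fitted model itself rather than the transformed table. One way to obtain it, sketched here as a continuation of the example above (sentences_tbl, sc, and the column names are assumed from that sketch), is to construct the estimator against the connection and fit it explicitly with ml_fit():

tokenized_tbl <- ft_tokenizer(sentences_tbl, input_col = "text", output_col = "words")

# construct the estimator on the connection, then fit it to the tokenized data
w2v_estimator <- ft_word2vec(
  sc,
  input_col = "words",
  output_col = "result",
  vector_size = 3,
  min_count = 0
)
w2v_model <- ml_fit(w2v_estimator, tokenized_tbl)

# the five vocabulary words closest (by cosine similarity) to "spark";
# the tokenizer lower-cases its input, so the query word is lower case too
ml_find_synonyms(w2v_model, "spark", num = 5)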
See http://spark.apache.org/docs/latest/ml-features.html for more information on the set of transformations available for DataFrame columns in Spark.
Other feature transformers: ft_binarizer, ft_bucketizer, ft_chisq_selector,
  ft_count_vectorizer, ft_dct, ft_elementwise_product, ft_feature_hasher,
  ft_hashing_tf, ft_idf, ft_imputer, ft_index_to_string, ft_interaction,
  ft_lsh, ft_max_abs_scaler, ft_min_max_scaler, ft_ngram, ft_normalizer,
  ft_one_hot_encoder_estimator, ft_one_hot_encoder, ft_pca,
  ft_polynomial_expansion, ft_quantile_discretizer, ft_r_formula,
  ft_regex_tokenizer, ft_sql_transformer, ft_standard_scaler,
  ft_stop_words_remover, ft_string_indexer, ft_tokenizer,
  ft_vector_assembler, ft_vector_indexer, ft_vector_slicer
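As a sketch of the ml_pipeline branch described above, ft_word2vec() can be appended to a pipeline alongside other feature transformers such as ft_tokenizer(), and the fitted pipeline reused to transform new data; the object and column names below are illustrative and assume the sentences_tbl from the earlier sketch.

pipeline <- ml_pipeline(sc) %>%
  ft_tokenizer(input_col = "text", output_col = "words") %>%
  ft_word2vec(
    input_col = "words",
    output_col = "result",
    vector_size = 3,
    min_count = 0
  )

# fit the whole pipeline, then apply it to a tbl_spark
fitted_pipeline <- ml_fit(pipeline, sentences_tbl)
ml_transform(fitted_pipeline, sentences_tbl)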