While `keyword_clean` provides a robust way to lemmatize keywords, the returned token
might not be the form that is actually most commonly used. This function first obtains
the stem or lemma of every keyword using `stem_strings` or `lemmatize_strings` from the
textstem package, then finds the most frequent surface form for each stem or lemma
(if more than one form is tied for most frequent, one is selected at random). Finally, every
keyword is replaced by the most frequent keyword that shares its stem or lemma.
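The grouping logic described above can be sketched as follows. This is a minimal,
language-agnostic illustration in Python, not the package's R implementation: the
`lemmatize` argument stands in for textstem's `stem_strings`/`lemmatize_strings`, the
toy lemmatizer is a made-up placeholder, and ties are broken by first occurrence here
rather than at random.

```python
from collections import Counter

def merge_by_lemma(keywords, lemmatize):
    """Replace each keyword with the most frequent surface form that
    shares its lemma. `lemmatize` is any string -> string function
    (the package itself uses textstem's stem/lemmatize helpers)."""
    lemmas = [lemmatize(k) for k in keywords]
    counts = Counter(keywords)
    best = {}  # lemma -> most frequent surface form seen so far
    for kw, lm in zip(keywords, lemmas):
        cur = best.get(lm)
        if cur is None or counts[kw] > counts[cur]:
            best[lm] = kw
    return [best[lm] for lm in lemmas]

# toy lemmatizer for illustration only: strip a trailing "s"
toy = lambda w: w[:-1] if w.endswith("s") else w
kws = ["network", "networks", "networks", "graph"]
print(merge_by_lemma(kws, toy))
# → ['networks', 'networks', 'networks', 'graph']
```

Here "network" and "networks" share the lemma "network", and since "networks" occurs
more often, every occurrence is rewritten to "networks".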
When `reduce_form` is set to "partof", then for non-unigrams in the same document,
if one non-unigram is a subset of another, they are merged into the shorter one,
which is considered more general (e.g. "time series" and "time series analysis" would be
merged into "time series" if they co-occur in the same document). This reduces redundant
information. It is applied only to multi-word phrases, because applying it to single words would
oversimplify the token and cause information loss (therefore, "time series" and "time" would not be
merged into "time"). This is an advanced option that should be used with caution, as it trades
detailed information retention for greater generalization.
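The "partof" merging rule can be sketched like this. Again a hedged Python illustration
of the idea rather than the R implementation: containment is tested with plain substring
matching as a simplification (the actual function may match on whole-word boundaries),
and the input represents the keywords of a single document.

```python
def merge_partof(doc_keywords):
    """Within one document, merge each multi-word phrase into the
    shortest other multi-word phrase contained in it. Unigrams are
    never merged, to avoid oversimplifying tokens."""
    multi = [k for k in doc_keywords if " " in k]
    out = []
    for kw in doc_keywords:
        if " " not in kw:
            out.append(kw)  # single words pass through untouched
            continue
        # candidate phrases that are a part of this phrase
        cands = [m for m in multi if m != kw and m in kw]
        out.append(min(cands, key=len) if cands else kw)
    return out

print(merge_partof(["time series analysis", "time series", "time"]))
# → ['time series', 'time series', 'time']
```

Note that "time series analysis" collapses into "time series" because both are
multi-word phrases, while the unigram "time" is left as-is, matching the behavior
described above.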