Latent Semantic Scaling (LSS) is a semi-supervised algorithm for document scaling based on word embeddings.
textmodel_lss(x, ...)

# S3 method for dfm
textmodel_lss(
  x,
  seeds,
  terms = NULL,
  k = 300,
  slice = NULL,
  weight = "count",
  cache = FALSE,
  simil_method = "cosine",
  engine = c("RSpectra", "irlba", "rsvd"),
  auto_weight = FALSE,
  include_data = FALSE,
  group_data = FALSE,
  verbose = FALSE,
  ...
)

# S3 method for fcm
textmodel_lss(
  x,
  seeds,
  terms = NULL,
  k = 50,
  max_count = 10,
  weight = "count",
  cache = FALSE,
  simil_method = "cosine",
  engine = "rsparse",
  auto_weight = FALSE,
  verbose = FALSE,
  ...
)

# S3 method for tokens
textmodel_lss(
  x,
  seeds,
  terms = NULL,
  k = 200,
  min_count = 5,
  engine = "wordvector",
  tolower = TRUE,
  include_data = FALSE,
  group_data = FALSE,
  spatial = TRUE,
  verbose = FALSE,
  ...
)
x: a dfm or fcm created by quanteda::dfm() or quanteda::fcm(), or a quanteda::tokens or quanteda::tokens_xptr object.

...: additional arguments passed to the underlying engine.

seeds: a character vector or named numeric vector that contains seed words. If seed words contain "*", they are interpreted as glob patterns. See quanteda::valuetype.

terms: a character vector or named numeric vector that specifies words for which polarity scores will be computed; if a numeric vector, words' polarity scores will be weighted accordingly; if NULL, all the features in x except those less frequent than min_count will be used.

k: the number of singular values requested from the SVD engine. Only used when x is a dfm.

slice: a number or indices of the components of word vectors used to compute similarity; slice < k further truncates word vectors, which is useful for diagnosis and simulation.

weight: weighting scheme passed to quanteda::dfm_weight(). Ignored when engine = "rsparse".

cache: if TRUE, save the result of the SVD for the next execution with identical x and settings. Use base::options(lss_cache_dir) to change the location where cache files are saved.

simil_method: the method used to compute similarity between features. The value is passed to quanteda.textstats::textstat_simil(); "cosine" is used otherwise.

engine: selects the engine used to factorize x to generate word vectors. If x is a dfm, RSpectra::svds(), irlba::irlba() or rsvd::rsvd(); if x is a fcm, rsparse::GloVe(); if x is a tokens (or tokens_xptr) object, wordvector::textmodel_word2vec().

auto_weight: automatically determine weights to approximate the polarity of terms to seed words. Deprecated.

include_data: if TRUE, the fitted model includes the dfm supplied as x.

group_data: if TRUE, apply dfm_group(x) before saving the dfm.

verbose: show messages if TRUE.

max_count: passed to x_max in rsparse::GloVe$new(), where co-occurrence counts are capped at this threshold. It should be adjusted according to the size of the corpus. Used only when x is a fcm.

min_count: the minimum frequency of words. Words less frequent than this in x are removed before training.

tolower: if TRUE, lower-case all the words in the model.

spatial: [experimental] if FALSE, return a probabilistic model. See the details.
Latent Semantic Scaling (LSS) is a semi-supervised document scaling
method. textmodel_lss() constructs word vectors from user-provided
documents (x) and weights words (terms) based on their semantic
proximity to seed words (seeds). Seed words are any known polarity words
(e.g. sentiment words) that users should choose manually. The required
number of seed words is usually 5 to 10 for each end of the scale.
If seeds is a named numeric vector with positive and negative values, a
bipolar model is constructed; if seeds is a character vector, a
unipolar model is constructed. Bipolar models usually perform better in
document scaling because both ends of the scale are defined by the user.
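As an illustrative sketch of a bipolar model (data_corpus_inaugural is quanteda's built-in corpus; the seed words here are arbitrary examples, not a recommended dictionary):

```r
library(quanteda)
library(LSX)

# Tokenize and build a document-feature matrix
toks <- tokens(data_corpus_inaugural, remove_punct = TRUE)
dfmt <- dfm(toks)

# Bipolar seeds: positive values mark one end of the scale, negative the other
seed <- c("good" = 1, "great" = 1, "bad" = -1, "terrible" = -1)

# Fit the LSS model on the dfm; k is the number of singular values
lss <- textmodel_lss(dfmt, seeds = seed, k = 300)

# Estimated polarity scores of words
head(coef(lss))
```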
A seed word's polarity score computed by textmodel_lss() tends to diverge
from the original score given by the user, because its computed score is
affected not only by its own original score but also by the original scores
of all other seed words. If auto_weight = TRUE, the original scores are
weighted automatically using stats::optim() to minimize the squared
difference between the seed words' computed and original scores. The
weighted scores are saved in seed_weighted in the object.
When x is a tokens or tokens_xptr object, wordvector::textmodel_word2vec()
is called internally with type = "skip-gram", and other arguments are passed to it via ....
If spatial = TRUE, a spatial model is returned; otherwise a probabilistic model.
In spatial models, the polarity scores of words are their cosine similarity to the
seed words; in probabilistic models, they are the predicted probabilities that the
seed words occur in the words' contexts. Probabilistic models are still
experimental, so use them with caution.
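A minimal sketch of the tokens-based workflow (assuming the wordvector package is installed), with document scores then obtained via predict():

```r
library(quanteda)
library(LSX)

toks <- tokens(data_corpus_inaugural, remove_punct = TRUE)

# Word vectors are trained internally via wordvector::textmodel_word2vec()
lss <- textmodel_lss(toks, seeds = c("good" = 1, "bad" = -1),
                     k = 200, min_count = 5, spatial = TRUE)

# Predict document positions on the scale from a dfm of the same tokens
pred <- predict(lss, newdata = dfm(toks))
head(pred)
```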
Please visit the package website for examples.
Watanabe, Kohei. 2020. "Latent Semantic Scaling: A Semisupervised Text Analysis Technique for New Domains and Languages", Communication Methods and Measures. doi:10.1080/19312458.2020.1832976.
Watanabe, Kohei. 2017. "Measuring News Bias: Russia's Official News Agency ITAR-TASS' Coverage of the Ukraine Crisis", European Journal of Communication. doi:10.1177/0267323117695735.