Decoder implements a beam search decoder that finds the token transcription W maximizing a weighted combination of acoustic-model and language-model scores (see the scoring formula at the end of this page).
Super class: flashlighttext::Decoder -> LexiconSeq2SeqDecoder
new()

Usage:
  LexiconSeq2SeqDecoder$new(
    options,
    trie,
    lm,
    eos,
    emitting_model_update_func,
    max_output_length,
    is_lm_token
  )

Arguments:
  options: a LexiconSeq2SeqDecoderOptions instance.
  trie: a Trie instance.
  lm: a LM instance.
  eos: an integer. The index representing the EOS token.
  emitting_model_update_func: an emittingModelUpdateFunc instance.
  max_output_length: an integer. The maximum output length.
  is_lm_token: a logical. Whether the language model operates at token level (TRUE) or word level (FALSE).

Returns: a LexiconSeq2SeqDecoder instance.
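A minimal construction sketch. It assumes that `options`, `trie`, `lm`, and `update_fn` have already been created from the package's LexiconSeq2SeqDecoderOptions, Trie, LM, and emittingModelUpdateFunc classes (not shown here); the EOS index and maximum output length are hypothetical values chosen for illustration.

  # Sketch: upstream objects (options, trie, lm, update_fn) are assumed to
  # exist already; only the constructor call itself is illustrated.
  decoder <- LexiconSeq2SeqDecoder$new(
    options = options,                       # a LexiconSeq2SeqDecoderOptions instance
    trie = trie,                             # a Trie constraining hypotheses to the lexicon
    lm = lm,                                 # a LM instance
    eos = 2L,                                # hypothetical index of the EOS token
    emitting_model_update_func = update_fn,  # an emittingModelUpdateFunc instance
    max_output_length = 100L,                # hypothetical cap on decoded length
    is_lm_token = FALSE                      # word-level language model
  )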
n_hypothesis()

Usage:
  LexiconSeq2SeqDecoder$n_hypothesis()

Returns: an integer.
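Continuing the construction sketch above, with `decoder` as built in the previous example:

  n_hyp <- decoder$n_hypothesis()  # returns an integer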
clone()

The objects of this class are cloneable with this method.

Usage:
  LexiconSeq2SeqDecoder$clone(deep = FALSE)

Arguments:
  deep: Whether to make a deep clone.
The decoder searches for the transcription W maximizing:

  AM(W) + lmWeight_ * log(P_lm(W)) + eosScore_ * |W_last == EOS|

where P_lm(W) is the language model score. The transcription W is constrained by a lexicon. The language model may operate at word level (is_lm_token = FALSE) or token level (is_lm_token = TRUE).
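For intuition, a plain-R illustration of how the three terms combine for a single hypothesis. Every value below is made up and none of this is package API.

  am_score      <- -12.4   # AM(W): acoustic/emitting-model score of the hypothesis
  log_p_lm      <- -5.1    # log(P_lm(W)): language-model log-probability
  lm_weight     <- 2.0     # lmWeight_
  eos_score     <- -1.0    # eosScore_
  ends_with_eos <- TRUE    # whether the last token of W is EOS
  total <- am_score + lm_weight * log_p_lm + eos_score * as.numeric(ends_with_eos)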