aifeducation (version 1.1.2)

BaseModelModernBert: ModernBert

Description

Represents models based on ModernBERT.

Value

Returns a new object of this class.

Super classes

aifeducation::AIFEMaster -> aifeducation::AIFEBaseModel -> aifeducation::BaseModelCore -> BaseModelModernBert

Methods

Inherited methods


Method configure()

Configures a new object of this class.

Usage

BaseModelModernBert$configure(
  tokenizer,
  max_position_embeddings = 512L,
  hidden_size = 768L,
  num_hidden_layers = 12L,
  num_attention_heads = 12L,
  global_attn_every_n_layers = 3L,
  intermediate_size = 3072L,
  hidden_activation = "GELU",
  embedding_dropout = 0.1,
  mlp_dropout = 0.1,
  attention_dropout = 0.1
)

Arguments

tokenizer

TokenizerBase Tokenizer for the model.

max_position_embeddings

int Maximum number of position embeddings. This parameter also determines the maximum length of a sequence that can be processed by the model. Allowed values: 10 <= x <= 4048

hidden_size

int Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding. Allowed values: 1 <= x <= 2048

num_hidden_layers

int Number of hidden layers. Allowed values: 1 <= x

num_attention_heads

int Number of attention heads in each self-attention layer. Only relevant if attention_type = 'multihead'. Allowed values: 0 <= x

global_attn_every_n_layers

int Apply global attention in every x-th layer. Allowed values: 2 <= x <= 36

intermediate_size

int Size of the projection layer within each transformer encoder. Allowed values: 1 <= x

hidden_activation

string Name of the activation function. Allowed values: 'GELU', 'relu', 'silu', 'gelu_new'

embedding_dropout

double Dropout rate for the embeddings. Allowed values: 0 <= x <= 0.6

mlp_dropout

double Dropout rate for the MLP layer. Allowed values: 0 <= x <= 0.6

attention_dropout

double Dropout rate for the attention probabilities. Allowed values: 0 <= x <= 0.6

Returns

Returns nothing. The method is used for configuring the object.
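
The following sketch (not part of the package documentation) illustrates a possible call of configure(), assuming the usual R6 workflow in aifeducation of creating the object with $new() before configuring it. The object my_tokenizer is a placeholder for a previously created tokenizer of class TokenizerBase.

# Minimal sketch, assuming an existing TokenizerBase object `my_tokenizer`
library(aifeducation)

base_model <- BaseModelModernBert$new()
base_model$configure(
  tokenizer = my_tokenizer,
  max_position_embeddings = 1024L,  # maximum sequence length in tokens
  hidden_size = 768L,               # dimensionality of the text embedding
  num_hidden_layers = 12L,
  num_attention_heads = 12L,
  global_attn_every_n_layers = 3L,  # global attention in every 3rd layer
  intermediate_size = 3072L,
  hidden_activation = "GELU",
  embedding_dropout = 0.1,
  mlp_dropout = 0.1,
  attention_dropout = 0.1
)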


Method clone()

The objects of this class are cloneable with this method.

Usage

BaseModelModernBert$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
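
As with every R6 object, clone() creates a copy of the object. A minimal sketch, assuming base_model was created and configured as in the example above:

# Sketch only: `base_model` is the object from the previous example
model_copy <- base_model$clone(deep = TRUE)  # deep = TRUE also clones nested R6 objects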

References

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In J. Burstein, C. Doran, & T. Solorio (Eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 4171–4186). Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423

See Also

Other Base Model: BaseModelBert, BaseModelDebertaV2, BaseModelFunnel, BaseModelMPNet, BaseModelRoberta