This function creates a transformer configuration based on the BERT base architecture and a WordPiece vocabulary using the Python libraries 'transformers' and 'tokenizers'.
create_bert_model(
ml_framework = aifeducation_config$get_framework(),
model_dir,
vocab_raw_texts = NULL,
vocab_size = 30522,
vocab_do_lower_case = FALSE,
max_position_embeddings = 512,
hidden_size = 768,
num_hidden_layer = 12,
num_attention_heads = 12,
intermediate_size = 3072,
hidden_act = "gelu",
hidden_dropout_prob = 0.1,
attention_probs_dropout_prob = 0.1,
sustain_track = TRUE,
sustain_iso_code = NULL,
sustain_region = NULL,
sustain_interval = 15,
trace = TRUE,
pytorch_safetensors = TRUE
)

This function does not return an object. Instead, the configuration and the vocabulary of the new model are saved to disk.
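A minimal call might look as follows. This is only a sketch: the example texts and the output directory are hypothetical placeholders, while the parameter names and defaults follow the usage shown above.

# Hypothetical raw texts used to build the WordPiece vocabulary
example_texts <- c(
  "Education research combines qualitative and quantitative methods.",
  "Transformer models process sequences of tokens in parallel."
)

create_bert_model(
  ml_framework = "tensorflow",
  model_dir = "my_models/bert_base",   # hypothetical output directory
  vocab_raw_texts = example_texts,
  vocab_size = 30522,
  max_position_embeddings = 512,
  hidden_size = 768,
  num_hidden_layer = 12,
  num_attention_heads = 12,
  sustain_track = FALSE,               # no energy tracking in this sketch
  trace = TRUE
)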
ml_framework: string Framework to use for training and inference. ml_framework="tensorflow" for 'tensorflow' and ml_framework="pytorch" for 'pytorch'.
model_dir: string Path to the directory where the model should be saved.
vocab_raw_texts: vector containing the raw texts for creating the vocabulary.
vocab_size: int Size of the vocabulary.
vocab_do_lower_case: bool TRUE if all words/tokens should be lower case.
max_position_embeddings: int Number of maximal position embeddings. This parameter also determines the maximum length of a sequence which can be processed with the model.
hidden_size: int Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding.
num_hidden_layer: int Number of hidden layers.
num_attention_heads: int Number of attention heads.
intermediate_size: int Number of neurons in the intermediate layer of the attention mechanism.
hidden_act: string Name of the activation function.
hidden_dropout_prob: double Ratio of dropout.
attention_probs_dropout_prob: double Ratio of dropout for attention probabilities.
sustain_track: bool If TRUE, energy consumption is tracked during training via the Python library 'codecarbon'.
sustain_iso_code: string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes.
sustain_region: string Region within a country. Only available for the USA and Canada. See the documentation of 'codecarbon' for more information: https://mlco2.github.io/codecarbon/parameters.html.
sustain_interval: integer Interval in seconds for measuring power usage.
trace: bool TRUE if information about the progress should be printed to the console.
pytorch_safetensors: bool If TRUE, a 'pytorch' model is saved in safetensors format. If FALSE, or if 'safetensors' is not available, it is saved in the standard PyTorch format (.bin). Only relevant for pytorch models.
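As a further sketch, the following call assumes energy consumption should be tracked with 'codecarbon' and the model should be created for 'pytorch' and stored in safetensors format; the directory, the raw texts and the country code are hypothetical placeholders.

create_bert_model(
  ml_framework = "pytorch",
  model_dir = "my_models/bert_base_pt",  # hypothetical output directory
  vocab_raw_texts = example_texts,       # character vector of raw texts
  vocab_do_lower_case = TRUE,
  sustain_track = TRUE,
  sustain_iso_code = "DEU",              # Alpha-3 country code (here: Germany)
  sustain_region = NULL,                 # only needed for the USA and Canada
  sustain_interval = 15,
  pytorch_safetensors = TRUE
)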
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In J. Burstein, C. Doran, & T. Solorio (Eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 4171–4186). Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423
Hugging Face documentation https://huggingface.co/docs/transformers/model_doc/bert#transformers.TFBertForMaskedLM
Other Transformer:
create_deberta_v2_model(),
create_funnel_model(),
create_longformer_model(),
create_roberta_model(),
train_tune_bert_model(),
train_tune_deberta_v2_model(),
train_tune_funnel_model(),
train_tune_longformer_model(),
train_tune_roberta_model()