(Deprecated) Base R6 class for Keras wrappers
Fits the state of the preprocessing layer to the data being passed
Instantiates the DenseNet architecture.
Instantiates the EfficientNetB0 architecture
Activation functions
Metric
(Deprecated) Base R6 class for Keras layers
(Deprecated) Base R6 class for Keras constraints
(Deprecated) Create a custom Layer
(Deprecated) Base R6 class for Keras callbacks
Instantiates the MobileNetV3Large architecture
Instantiates the Xception architecture
Keras backend tensor engine
MobileNet model architecture.
Instantiates the ResNet architecture
MobileNetV2 model architecture
VGG16 and VGG19 models for Keras.
application_inception_resnet_v2
Inception-ResNet v2 model, with weights trained on ImageNet
Instantiates a NASNet model.
Inception V3 model, with weights pre-trained on ImageNet.
callback_reduce_lr_on_plateau
Reduce learning rate when a metric has stopped improving.
Create a custom callback
callback_model_checkpoint
Save the model after every epoch.
callback_learning_rate_scheduler
Learning rate scheduler.
Bidirectional wrapper for RNNs
Callback that prints metrics to stdout.
Callback used to stream events to a server.
callback_terminate_on_naan
Callback that terminates training when a NaN loss is encountered.
TensorBoard basic visualizations
Custom metric function
Weight constraints
Count the total number of scalars composing the weights.
(Deprecated) Create a Keras Wrapper
Clone a model instance.
CIFAR100 small image classification
callback_backup_and_restore
Callback to back up and restore the training state
compile.keras.engine.training.Model
Configure a Keras model for training
IMDB Movie reviews sentiment classification
MNIST database of handwritten digits
(Deprecated) Evaluates the model on a data generator.
Layer/Model configuration
Retrieve the next item from a generator
export_savedmodel.keras.engine.training.Model
Export a Saved Model
Callback that streams epoch results to a csv file
fit.keras.engine.training.Model
Train a Keras model
(Deprecated) Fits the model on data yielded batch-by-batch by a generator.
Retrieve tensors for layers with multiple nodes
Downloads a file from a URL if it is not already in the cache.
Stop training when a monitored quantity has stopped improving.
Create a Keras Layer wrapper
Create a Keras Layer
Fit image data generator internal statistics to some sample data.
Update tokenizer internal vocabulary based on a list of texts or list of sequences.
Make a python class constructor
Make an Active Binding
Fashion-MNIST database of fashion articles
evaluate.keras.engine.training.Model
Evaluate a Keras model
Reuters newswire topics classification
Initializer that generates tensors initialized to a constant value.
initializer_glorot_normal
Glorot normal initializer, also called Xavier normal initializer.
Retrieves a layer based on either its name (unique) or index.
Layer/Model weights as R arrays
imagenet_preprocess_input
Preprocesses a tensor or array encoding a batch of images.
He uniform variance scaling initializer.
initializer_lecun_uniform
LeCun uniform initializer.
Initializer that generates tensors initialized to 1.
Initializer that generates the identity matrix.
Keras implementation
Initializer that generates a random orthogonal matrix.
flow_images_from_dataframe
Takes the dataframe and the path to a directory and generates batches of augmented/normalized data.
initializer_random_normal
Initializer that generates tensors with a normal distribution.
image_dataset_from_directory
Create a dataset from a directory
LeCun normal initializer.
Representation of HDF5 dataset to be used instead of an R array
Generate batches of image data with real-time data augmentation. The data will be looped over (in batches).
Bitwise reduction (logical AND).
Bitwise reduction (logical OR).
initializer_random_uniform
Initializer that generates tensors with a uniform distribution.
initializer_glorot_uniform
Glorot uniform initializer, also called Xavier uniform initializer.
Generates batches of augmented/normalized data from image data and labels
Loads an image into PIL format.
Destroys the current TF graph and creates a new one.
Initializer that generates tensors initialized to 0.
He normal initializer.
initializer_truncated_normal
Initializer that generates a truncated normal distribution.
Element-wise value clipping.
2D deconvolution (i.e. transposed convolution).
Check if Keras is Available
initializer_variance_scaling
Initializer capable of adapting its scale to the shape of weights.
Batchwise dot product.
Turn a nD tensor into a 2D tensor with same 1st dimension.
Element-wise absolute value.
Returns the value of more than one tensor variable.
k_categorical_crossentropy
Categorical crossentropy between an output tensor and a target tensor.
Cast an array to the default Keras float type.
Returns the index of the minimum value along an axis.
Active Keras backend
Concatenates a list of tensors alongside the specified axis.
Creates a constant tensor.
3D convolution.
Depthwise 2D convolution with separable filters.
Multiplies 2 tensors (and/or variables) and returns a tensor.
Computes cos of x element-wise.
3D deconvolution (i.e. transposed convolution).
Default float type
Reduce elems using fn to combine them from left to right.
Default image data format convention ('channels_first' or 'channels_last').
Install TensorFlow and Keras, including all Python dependencies
Creates a 1D tensor containing a sequence of integers.
TF session to be used by the backend.
Retrieves the elements of indices indices in the tensor reference.
flow_images_from_directory
Generates batches of data from images in a directory (with optional augmented/normalized data)
CIFAR10 small image classification
Sets the values of many tensor variables at once.
Selects x in test phase, and alt otherwise.
imagenet_decode_predictions
Decodes the prediction of an ImageNet model.
Freeze and unfreeze weights
Boston housing price regression dataset
3D array representation of images
Returns the static number of elements in a Keras variable or tensor.
Adds a bias vector to a tensor.
Casts a tensor to a different dtype and returns it.
1D convolution.
Fuzz factor used in numeric expressions.
Exponential linear unit.
Returns the index of the maximum value along an axis.
2D convolution.
Binary crossentropy between an output tensor and a target tensor.
Sets entries in x to zero at random, while scaling the entire tensor.
Runs CTC loss algorithm on each batch element.
Returns whether x is a placeholder.
Returns whether a tensor is a sparse tensor.
Maximum value in a tensor.
Map the function fn over the elements elems and return the outputs.
Returns the dtype of a Keras tensor or variable, as a string.
k_normalize_batch_in_training
Computes mean and std for a batch, then applies batch_normalization on the batch.
Element-wise inequality between two tensors.
Apply 2D conv with un-shared weights.
Adds a 1-sized dimension at index axis.
Apply 1D conv with un-shared weights.
Element-wise exponential.
Element-wise truth value of (x <= y).
Returns a tensor with the same content as the input tensor.
Segment-wise linear approximation of sigmoid.
Cumulative product of the values in a tensor, alongside the specified axis.
Applies batch normalization on x given mean, var, beta and gamma.
Decodes the output of a softmax.
k_ctc_label_dense_to_sparse
Converts CTC labels from dense to sparse.
Reduce elems using fn to combine them from right to left.
Instantiates an identity matrix and returns it.
Flatten a tensor.
Returns the gradients of variables w.r.t. loss.
Instantiates a Keras function
Returns the shape of a variable.
Returns whether x is a symbolic tensor.
Element-wise log.
Mean of a tensor, alongside the specified axis.
Instantiates a placeholder tensor and returns it.
Element-wise maximum of two tensors.
2D Pooling.
Repeats a 2D tensor.
Instantiates an all-ones variable of the same shape as another tensor.
Permutes axes in a tensor.
Instantiates a variable with values drawn from a normal distribution.
Returns a tensor with uniform distribution of values.
Compute the moving average of a variable.
Reverse a tensor along the specified axes.
Element-wise square root.
Element-wise square.
Iterates over the time dimension of a tensor
Returns the number of axes in a tensor, as an integer.
Sets the learning phase to a fixed value.
Stacks a list of rank R tensors into a rank R+1 tensor.
Sets the value of a variable, from an R array.
Removes a 1-dimension from the tensor at index axis.
Returns whether the targets are in the top k predictions.
Get the uid for the default graph.
Returns the value of a variable.
Cumulative sum of the values in a tensor, alongside the specified axis.
Element-wise truth value of (x > y).
Element-wise equality between two tensors.
Evaluates the value of a variable.
Element-wise truth value of (x >= y).
Minimum value in a tensor.
Normalizes a tensor w.r.t. the L2 norm alongside the specified axis.
Returns whether x is a Keras tensor.
Returns the shape of tensor or variable as a list of int or NULL entries.
Repeats the elements of a tensor along an axis.
Transposes a tensor and returns it.
Instantiates an all-zeros variable of the same shape as another tensor.
Instantiates an all-zeros variable and returns it.
Returns a tensor with truncated random normal distribution of values.
Returns the symbolic shape of a tensor or variable.
Element-wise sigmoid.
Multiplies the values in a tensor, alongside the specified axis.
Rectified linear unit.
Creates a tensor by tiling x by n.
3D Pooling.
Element-wise minimum of two tensors.
Converts a sparse tensor into a dense tensor and returns it.
k_random_uniform_variable
Instantiates a variable with values drawn from a uniform distribution.
Prints message and the tensor value when evaluated.
k_manual_variable_initialization
Sets the manual variable initialization flag.
Element-wise exponentiation.
(Deprecated) Computes log(sum(exp(elements across dimensions of a tensor))).
Selects x in train phase, and alt otherwise.
Reset graph identifiers.
Returns the learning phase flag.
Softsign of a tensor.
Element-wise truth value of (x < y).
Computes the one-hot representation of an integer tensor.
layer_activation_parametric_relu
Parametric Rectified Linear Unit.
layer_activation_leaky_relu
Leaky version of a Rectified Linear Unit.
Instantiates an all-ones tensor variable and returns it.
Returns a tensor with random binomial distribution of values.
Resizes the images contained in a 4D tensor.
Resizes the volume contained in a 5D tensor.
Softmax of a tensor.
Update the value of x to new_x.
Keras Model
Dot-product attention layer, a.k.a. Luong-style attention
k_sparse_categorical_crossentropy
Categorical crossentropy with integer targets.
Computes sin of x element-wise.
Pads the middle dimension of a 3D tensor.
Applies Alpha Dropout to the input.
Element-wise sign.
Layer that averages a list of inputs.
Returns a tensor with normal distribution of values.
Unstack rank R tensor into a list of rank R-1 tensors.
Instantiates a variable and returns it.
Keras array object
Element-wise tanh.
Additive attention layer, a.k.a. Bahdanau-style attention
Variance of a tensor, alongside the specified axis.
Reshapes a tensor to the specified shape.
Softplus of a tensor.
Rectified Linear Unit activation function
Layer that concatenates a list of inputs.
Scaled Exponential Linear Unit.
Sum of the values in a tensor, alongside the specified axis.
Switches between two operations depending on a scalar value.
1D convolution layer (e.g. temporal convolution).
Update the value of x by adding increment.
Average pooling operation for 3D data (spatial or spatio-temporal).
Element-wise rounding to the closest integer.
layer_batch_normalization
Batch normalization layer (Ioffe and Szegedy, 2014).
2D convolution with separable filters.
Pads 5D tensor with zeros along the depth, height, width dimensions.
Pads the 2nd and 3rd dimensions of a 4D tensor.
3D convolution layer (e.g. spatial convolution over volumes).
Keras Model composed of a linear stack of layers
layer_activation_thresholded_relu
Thresholded Rectified Linear Unit.
(Deprecated) Create a Keras custom model
Transposed 2D convolution layer (sometimes called Deconvolution).
Softmax activation function.
Apply an activation function to an output.
Exponential Linear Unit.
Update the value of x by subtracting decrement.
layer_activity_regularization
Layer that applies an update to the cost function based on input activity.
Layer that adds a list of inputs.
Standard deviation of a tensor, alongside the specified axis.
Add a densely-connected NN layer to an output
(Deprecated) Fast LSTM implementation backed by CuDNN.
A preprocessing layer which encodes integer features.
Cropping layer for 1D input (e.g. temporal sequence).
R interface to Keras
Returns variables but with zero gradient w.r.t. every other variable.
Convolutional LSTM.
3D Convolutional LSTM
Crop the central portion of the images to target height and width
(Deprecated) Fast GRU implementation backed by CuDNN.
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
Cropping layer for 2D input (e.g. picture).
Constructs a DenseFeatures layer.
Apply multiplicative 1-centered Gaussian noise.
Main Keras module
Average pooling for temporal data.
Apply additive zero-centered Gaussian noise.
Flattens an input
Turns positive integers (indexes) into dense vectors of fixed size.
Depthwise separable 2D convolution.
A preprocessing layer which buckets continuous features by ranges.
Average pooling operation for spatial data.
Transposed 1D convolution layer (sometimes called Deconvolution).
2D convolution layer (e.g. spatial convolution over images).
Transposed 3D convolution layer (sometimes called Deconvolution).
Depthwise 1D convolution
layer_global_max_pooling_3d
Global Max pooling operation for 3D data.
layer_global_max_pooling_2d
Global max pooling operation for spatial data.
layer_global_average_pooling_3d
Global Average pooling operation for 3D data.
1D Convolutional LSTM
layer_global_max_pooling_1d
Global max pooling operation for temporal data.
Applies Dropout to the input.
Layer that computes a dot product between samples in two tensors.
layer_global_average_pooling_1d
Global average pooling operation for temporal data.
layer_global_average_pooling_2d
Global average pooling operation for spatial data.
Gated Recurrent Unit - Cho et al.
Cell class for the GRU layer
A preprocessing layer which hashes and bins categorical features.
Input layer
layer_layer_normalization
Layer normalization layer (Ba et al., 2016).
layer_locally_connected_1d
Locally-connected layer for 1D inputs.
A preprocessing layer which maps integer features to contiguous ranges.
Wraps arbitrary expression as a layer
layer_locally_connected_2d
Locally-connected layer for 2D inputs.
Max pooling operation for spatial data.
Max pooling operation for temporal data.
Long Short-Term Memory unit - Hochreiter 1997.
Permute the dimensions of an input according to a given pattern
Layer that multiplies (element-wise) a list of inputs.
Randomly crop the images to target height and width
Cell class for the LSTM layer
Adjust the contrast of an image or images by a random factor
Masks a sequence by using a mask value to skip timesteps.
layer_multi_head_attention
MultiHeadAttention layer
Layer that computes the minimum (element-wise) a list of inputs.
A preprocessing layer which randomly adjusts brightness during training
A preprocessing layer which normalizes continuous features.
Max pooling operation for 3D data (spatial or spatio-temporal).
Layer that computes the maximum (element-wise) a list of inputs.
Randomly flip each image horizontally and vertically
A preprocessing layer which randomly zooms images during training.
Randomly vary the width of a batch of images during training
Randomly vary the height of a batch of images during training
Image resizing layer
Reshapes an output to a certain shape.
Randomly rotate each image
Randomly translate each image during training
Multiply inputs by scale and add offset
Repeats the input n times.
Fully-connected RNN where the output is to be fed back to input.
Separable 2D convolution.
Spatial 3D version of Dropout.
A preprocessing layer which maps string features to integer indices.
Cell class for SimpleRNN
Depthwise separable 1D convolution.
Spatial 2D version of Dropout.
Spatial 1D version of Dropout.
Wrapper allowing a stack of RNN cells to behave as a single cell
Base class for recurrent layers
Zero-padding layer for 1D input (e.g. temporal sequence).
Zero-padding layer for 3D data (spatial or spatio-temporal).
Upsampling layer for 2D inputs.
learning_rate_schedule_cosine_decay
A LearningRateSchedule that uses a cosine decay schedule
Upsampling layer for 1D inputs.
A preprocessing layer which maps text features to integer sequences.
Unit normalization layer
Upsampling layer for 3D inputs.
Zero-padding layer for 2D input (e.g. picture).
Layer that subtracts two inputs.
learning_rate_schedule_inverse_time_decay
A LearningRateSchedule that uses an inverse time decay schedule
Calculates how often predictions equal labels
learning_rate_schedule_piecewise_constant_decay
A LearningRateSchedule that uses a piecewise constant decay schedule
(Deprecated) loss_cosine_proximity
Generates a word rank-based probabilistic sampling table.
learning_rate_schedule_polynomial_decay
A LearningRateSchedule that uses a polynomial decay schedule
learning_rate_schedule_exponential_decay
A LearningRateSchedule that uses an exponential decay schedule
Loss functions
learning_rate_schedule_cosine_decay_restarts
A LearningRateSchedule that uses a cosine decay schedule with restarts
metric-or-Metric
Computes the categorical hinge metric between y_true and y_pred
metric_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Approximates the AUC (Area under the curve) of the ROC or PR curves
Computes the cosine similarity between the labels and predictions
Calculates the number of false positives
metric_categorical_accuracy
Calculates how often predictions match one-hot labels
Calculates how often predictions match binary labels
metric_binary_crossentropy
Computes the crossentropy metric between the labels and predictions
Calculates the number of false negatives
(Deprecated) metric_cosine_proximity
metric_mean_relative_error
Computes the mean relative error by normalizing with the given values
Computes the mean Intersection-Over-Union metric
Computes the hinge metric between y_true and y_pred
metric_mean_squared_logarithmic_error
Computes the mean squared logarithmic error
metric_mean_absolute_error
Computes the mean absolute error between the labels and predictions
metric_mean_squared_error
Computes the mean squared error between labels and predictions
metric_mean_absolute_percentage_error
Computes the mean absolute percentage error between y_true and y_pred
Computes the logarithm of the hyperbolic cosine of the prediction error
Computes the (weighted) mean of the given values
metric_kullback_leibler_divergence
Computes Kullback-Leibler divergence
Computes the element-wise (weighted) mean of the given tensors
metric_precision_at_recall
Computes best precision where recall is >= specified value
metric_recall_at_precision
Computes best recall where precision is >= specified value
metric_root_mean_squared_error
Computes root mean squared error metric between y_true and y_pred
Computes the Poisson metric between y_true and y_pred
Wraps a stateless metric function with the Mean metric
Computes the recall of the predictions with respect to the labels
metric_sparse_categorical_accuracy
Calculates how often predictions match integer labels
Computes the precision of the predictions with respect to the labels
metric_sensitivity_at_specificity
Computes best sensitivity where specificity is >= specified value
metric_specificity_at_sensitivity
Computes best specificity where sensitivity is >= specified value
Computes the (weighted) sum of the given values
Calculates the number of true negatives
metric_sparse_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Load a Keras model from the Saved Model format
Computes the squared hinge metric
Model configuration as JSON
(Deprecated) Export to Saved Model format
new_learning_rate_schedule_class
Create a new learning rate schedule type
Model configuration as YAML
metric_sparse_top_k_categorical_accuracy
Computes how often integer targets are in the top K predictions
metric_top_k_categorical_accuracy
Computes how often targets are in the top K predictions
Define new keras types
Calculates the number of true positives
Optimizer that implements the Adam algorithm
Optimizer that implements the Adadelta algorithm
(Deprecated) Replicates a model on different GPUs.
Optimizer that implements the Adagrad algorithm
Normalize a matrix or nd-array
Assign values to names
Pads sequences to the same length
Remove the last layer in a model
Optimizer that implements the Nadam algorithm
plot.keras.engine.training.Model
Plot a Keras model
Optimizer that implements the RMSprop algorithm
Gradient descent (with momentum) optimizer
Optimizer that implements the FTRL algorithm
Optimizer that implements the Adamax algorithm
plot.keras_training_history
Plot training history
Pipe operator
L1 and L2 regularization
Save/Load models using HDF5 files
(Deprecated) Generates probability or class probability predictions for the input samples.
predict.keras.engine.training.Model
Generate predictions from a Keras model
Save/Load models using SavedModel format
Reset the states for a layer
A regularizer that encourages input vectors to be orthogonal to each other
Returns predictions for a single batch of samples.
Objects exported from other packages
(Deprecated) Generates predictions for the input samples from a data generator.
Save a text tokenizer to an external file
text_dataset_from_directory
Generate a tf.data.Dataset from text files in a directory
Generates skipgram word pairs.
summary.keras.engine.training.Model
Print a summary of a Keras model
sequential_model_input_layer
Serialize a model to an R object
Convert a list of sequences into a matrix.
Save model weights in the SavedModel format
Save/Load model weights using HDF5 files
Converts a text to a sequence of indexes in a fixed-size hashing space.
timeseries_dataset_from_array
Creates a dataset of sliding windows over a timeseries provided as array
This layer wrapper allows applying a layer to every temporal slice of an input
Text tokenization utility
Converts a class vector (integers) to binary class matrix.
Utility function for generating batches of temporal data.
One-hot encode a text into a list of word indexes in a vocabulary of size n.
Convert a list of texts to a matrix.
texts_to_sequences_generator
Transforms each text in texts to a sequence of integers.
Convert text to a sequence of words (or tokens).
Transform each text in texts to a sequence of integers.
zip lists
Single gradient update or model evaluation over one batch of samples.
Provide a scope with mappings of names to custom objects
Select a Keras implementation and backend
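The entries above can be connected into a typical workflow: define a model, add layers, configure training with compile(), train with fit(), and predict. The following is a minimal sketch using functions listed in this index (keras_model_sequential, layer_dense, compile, fit, predict); the layer sizes and synthetic data are illustrative, and a working keras/TensorFlow installation is assumed (see install_keras()).

```r
library(keras)

# Define a small sequential model: two densely-connected layers.
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1, activation = "sigmoid")

# Configure the model for training (optimizer, loss, metrics).
model %>% compile(
  optimizer = "rmsprop",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)

# Synthetic data: 100 samples with 10 features each, binary labels.
x <- matrix(runif(1000), ncol = 10)
y <- sample(0:1, 100, replace = TRUE)

# Train briefly, then generate predictions.
model %>% fit(x, y, epochs = 2, batch_size = 16, verbose = 0)
preds <- model %>% predict(x)
```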