(Deprecated) Base R6 class for Keras wrappers
Metric
Fits the state of the preprocessing layer to the data being passed
(Deprecated) Base R6 class for Keras constraints
(Deprecated) Base R6 class for Keras layers
(Deprecated) Base R6 class for Keras callbacks
Activation functions
Create a custom Layer
Instantiates the DenseNet architecture.
Instantiates the EfficientNetB0 architecture
MobileNet model architecture.
Instantiates the MobileNetV3Large architecture
compile.keras.engine.training.Model
Configure a Keras model for training
MobileNetV2 model architecture
Weight constraints
Inception V3 model, with weights pre-trained on ImageNet.
Callback used to stream events to a server.
Count the total number of scalars composing the weights.
TensorBoard basic visualizations
application_inception_resnet_v2
Inception-ResNet v2 model, with weights trained on ImageNet
Instantiates the ResNet architecture
Instantiates a NASNet model.
Bidirectional wrapper for RNNs
Create a Keras Layer
CIFAR10 small image classification
Instantiates the Xception architecture
Fits the model on data yielded batch-by-batch by a generator.
Fit image data generator internal statistics to some sample data.
CIFAR100 small image classification
Freeze and unfreeze weights
Retrieve the next item from a generator
Retrieve tensors for layers with multiple nodes
Layer/Model weights as R arrays
Callback that prints metrics to stdout.
callback_reduce_lr_on_plateau
Reduce learning rate when a metric has stopped improving.
Keras backend tensor engine
callback_model_checkpoint
Save the model after every epoch.
callback_learning_rate_scheduler
Learning rate scheduler.
Retrieves a layer based on either its name (unique) or index.
Create a Keras Layer wrapper
fit.keras.engine.training.Model
Train a Keras model
(Deprecated) Create a Keras Wrapper
export_savedmodel.keras.engine.training.Model
Export a Saved Model
Callback that streams epoch results to a csv file
Generate batches of image data with real-time data augmentation. The data will be looped over (in batches).
Custom metric function
Stop training when a monitored quantity has stopped improving.
Create a custom callback
VGG16 and VGG19 models for Keras.
Boston housing price regression dataset
Install TensorFlow and Keras, including all Python dependencies
image_dataset_from_directory
Create a dataset from a directory
Element-wise absolute value.
Check if Keras is Available
Make a Python class constructor
He normal initializer.
He uniform variance scaling initializer.
callback_terminate_on_naan
Callback that terminates training when a NaN loss is encountered.
evaluate.keras.engine.training.Model
Evaluate a Keras model
Clone a model instance.
Initializer that generates a random orthogonal matrix.
Evaluates the model on a data generator.
initializer_glorot_uniform
Glorot uniform initializer, also called Xavier uniform initializer.
initializer_glorot_normal
Glorot normal initializer, also called Xavier normal initializer.
Fashion-MNIST database of fashion articles
Update tokenizer internal vocabulary based on a list of texts or list of sequences.
IMDB Movie reviews sentiment classification
initializer_random_normal
Initializer that generates tensors with a normal distribution.
initializer_variance_scaling
Initializer capable of adapting its scale to the shape of weights.
Initializer that generates tensors initialized to 0.
Reuters newswire topics classification
MNIST database of handwritten digits
Bitwise reduction (logical OR).
flow_images_from_dataframe
Takes the dataframe and the path to a directory and generates batches of augmented/normalized data.
Layer/Model configuration
flow_images_from_directory
Generates batches of data from images in a directory (with optional augmented/normalized data)
imagenet_decode_predictions
Decodes the prediction of an ImageNet model.
imagenet_preprocess_input
Preprocesses a tensor or array encoding a batch of images.
3D array representation of images
Downloads a file from a URL if it is not already in the cache.
Loads an image into PIL format.
Initializer that generates the identity matrix.
Returns the index of the maximum value along an axis.
LeCun normal initializer.
Generates batches of augmented/normalized data from image data and labels
initializer_lecun_uniform
LeCun uniform initializer.
Make an Active Binding
Representation of HDF5 dataset to be used instead of an R array
Active Keras backend
Returns the value of more than one tensor variable.
Initializer that generates tensors initialized to 1.
Turns an nD tensor into a 2D tensor with the same first dimension.
Keras implementation
3D convolution.
Batchwise dot product.
Adds a bias vector to a tensor.
Creates a 1D tensor containing a sequence of integers.
Binary crossentropy between an output tensor and a target tensor.
Initializer that generates tensors initialized to a constant value.
Destroys the current TF graph and creates a new one.
k_categorical_crossentropy
Categorical crossentropy between an output tensor and a target tensor.
Returns the index of the minimum value along an axis.
initializer_random_uniform
Initializer that generates tensors with a uniform distribution.
initializer_truncated_normal
Initializer that generates a truncated normal distribution.
Runs CTC loss algorithm on each batch element.
Bitwise reduction (logical AND).
Casts a tensor to a different dtype and returns it.
Cast an array to the default Keras float type.
Concatenates a list of tensors alongside the specified axis.
1D convolution.
Element-wise value clipping.
Creates a constant tensor.
3D deconvolution (i.e. transposed convolution).
k_ctc_label_dense_to_sparse
Converts CTC labels from dense to sparse.
Decodes the output of a softmax.
Computes cos of x element-wise.
Returns the static number of elements in a Keras variable or tensor.
Applies batch normalization on x given mean, var, beta and gamma.
Instantiates a Keras function
Cumulative product of the values in a tensor, alongside the specified axis.
Sets the values of many tensor variables at once.
Flatten a tensor.
Cumulative sum of the values in a tensor, alongside the specified axis.
Default float type
Retrieves the elements at the given indices in the tensor reference.
Returns the gradients of variables w.r.t. loss.
2D convolution.
Fuzz factor used in numeric expressions.
Returns the dtype of a Keras tensor or variable, as a string.
Depthwise 2D convolution with separable filters.
Evaluates the value of a variable.
Returns a tensor with the same content as the input tensor.
Element-wise exponential.
Instantiates an all-ones variable of the same shape as another tensor.
Reset graph identifiers.
Instantiates an all-ones tensor variable and returns it.
Default image data format convention ('channels_first' or 'channels_last').
Repeats the elements of a tensor along an axis.
Element-wise truth value of (x > y).
Computes sin of x element-wise.
Normalizes a tensor wrt the L2 norm alongside the specified axis.
2D deconvolution (i.e. transposed convolution).
Multiplies 2 tensors (and/or variables) and returns a tensor.
TF session to be used by the backend.
Exponential linear unit.
Returns whether x is a Keras tensor.
Reduce elems using fn to combine them from left to right.
Maximum value in a tensor.
Reduce elems using fn to combine them from right to left.
Get the uid for the default graph.
Returns whether x is a placeholder.
Softmax of a tensor.
Element-wise truth value of (x >= y).
Element-wise maximum of two tensors.
Returns a tensor with random binomial distribution of values.
Multiplies the values in a tensor, alongside the specified axis.
Element-wise sigmoid.
Returns the number of axes in a tensor, as an integer.
Returns whether a tensor is a sparse tensor.
Returns whether x is a symbolic tensor.
Apply 1D conv with un-shared weights.
Segment-wise linear approximation of sigmoid.
Apply 2D conv with un-shared weights.
Returns the learning phase flag.
k_normalize_batch_in_training
Computes mean and std for the batch, then applies batch normalization to the batch.
Pads the middle dimension of a 3D tensor.
2D Pooling.
Element-wise sign.
Constructs a DenseFeatures layer.
Softmax activation function.
Additive attention layer, a.k.a. Bahdanau-style attention
Creates a tensor by tiling x by n.
layer_activation_thresholded_relu
Thresholded Rectified Linear Unit.
Applies Alpha Dropout to the input.
Depthwise separable 2D convolution.
Selects x in test phase, and alt otherwise.
Sets entries in x to zero at random, while scaling the entire tensor.
Stacks a list of rank R tensors into a rank R+1 tensor.
Returns whether the targets are in the top k predictions.
layer_global_max_pooling_3d
Global Max pooling operation for 3D data.
Element-wise equality between two tensors.
Adds a 1-sized dimension at index axis.
k_manual_variable_initialization
Sets the manual variable initialization flag.
Mean of a tensor, alongside the specified axis.
Map the function fn over the elements elems and return the outputs.
3D Pooling.
k_random_uniform_variable
Instantiates a variable with values drawn from a uniform distribution.
Returns a tensor with uniform distribution of values.
Standard deviation of a tensor, alongside the specified axis.
Computes log(sum(exp(elements across dimensions of a tensor))).
Element-wise log.
Element-wise exponentiation.
Sets the value of a variable, from an R array.
Minimum value in a tensor.
Prints message and the tensor value when evaluated.
Reverse a tensor along the specified axes.
Resizes the volume contained in a 5D tensor.
Returns a tensor with truncated random normal distribution of values.
Gated Recurrent Unit - Cho et al.
Long Short-Term Memory unit - Hochreiter 1997.
Instantiates a placeholder tensor and returns it.
Computes the one-hot representation of an integer tensor.
Repeats a 2D tensor.
Permutes axes in a tensor.
Element-wise inequality between two tensors.
Rectified linear unit.
Returns the symbolic shape of a tensor or variable.
Softsign of a tensor.
Softplus of a tensor.
Update the value of x to new_x.
Iterates over the time dimension of a tensor
Element-wise tanh.
layer_activation_parametric_relu
Parametric Rectified Linear Unit.
Layer that averages a list of inputs.
Keras Model
layer_activation_leaky_relu
Leaky version of a Rectified Linear Unit.
Switches between two operations depending on a scalar value.
Creates attention layer
Keras array object
A preprocessing layer which encodes integer features.
Cell class for the LSTM layer
Reshapes an output to a certain shape.
Instantiate an identity matrix and returns it.
Element-wise truth value of (x < y).
Selects x in train phase, and alt otherwise.
Returns the value of a variable.
Returns the shape of a variable.
Returns the shape of tensor or variable as a list of int or NULL entries.
Element-wise truth value of (x <= y).
Update the value of x by adding increment.
Element-wise square.
Image resizing layer
Element-wise rounding to the closest integer.
Element-wise minimum of two tensors.
Compute the moving average of a variable.
Returns a tensor with normal distribution of values.
Update the value of x by subtracting decrement.
A preprocessing layer which buckets continuous features by ranges.
Crop the central portion of the images to target height and width
Convolutional LSTM.
3D Convolutional LSTM
Layer that computes a dot product between samples in two tensors.
Input layer
k_sparse_categorical_crossentropy
Categorical crossentropy with integer targets.
Reshapes a tensor to the specified shape.
metric_categorical_accuracy
Calculates how often predictions match one-hot labels
Instantiates a variable with values drawn from a normal distribution.
Apply an activation function to an output.
Exponential Linear Unit.
Pads the 2nd and 3rd dimensions of a 4D tensor.
A preprocessing layer which maps integer features to contiguous ranges.
layer_locally_connected_2d
Locally-connected layer for 2D inputs.
layer_locally_connected_1d
Locally-connected layer for 1D inputs.
Resizes the images contained in a 4D tensor.
1D convolution layer (e.g. temporal convolution).
Average pooling operation for 3D data (spatial or spatio-temporal).
layer_batch_normalization
Batch normalization layer (Ioffe and Szegedy, 2014).
Layer that concatenates a list of inputs.
2D convolution with separable filters.
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
Flattens an input
Computes the (weighted) mean of the given values
Fast GRU implementation backed by CuDNN.
metric_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Sets the learning phase to a fixed value.
Instantiates an all-zeros variable of the same shape as another tensor.
Create a Keras custom model
Instantiates an all-zeros variable and returns it.
metric_top_k_categorical_accuracy
Computes how often targets are in the top K predictions
metric_mean_absolute_error
Computes the mean absolute error between the labels and predictions
Computes the precision of the predictions with respect to the labels
metric_precision_at_recall
Computes best precision where recall is >= specified value
Randomly flip each image horizontally and vertically
Separable 2D convolution.
Randomly vary the height of a batch of images during training
Fully-connected RNN where the output is to be fed back to input.
Upsampling layer for 2D inputs.
Upsampling layer for 1D inputs.
Removes a 1-dimension from the tensor at index axis.
Adam optimizer
Calculates the number of true negatives
Adamax optimizer
Keras Model composed of a linear stack of layers
Element-wise square root.
Transposes a tensor and returns it.
Converts a sparse tensor into a dense tensor and returns it.
Pads 5D tensor with zeros along the depth, height, width dimensions.
R interface to Keras
Generates predictions for the input samples from a data generator.
Apply multiplicative 1-centered Gaussian noise.
Transposed 2D convolution layer (sometimes called Deconvolution).
Main Keras module
Upsampling layer for 3D inputs.
layer_global_max_pooling_1d
Global max pooling operation for temporal data.
Fast LSTM implementation backed by CuDNN.
layer_global_average_pooling_2d
Global average pooling operation for spatial data.
Add a densely-connected NN layer to an output
3D convolution layer (e.g. spatial convolution over volumes).
layer_global_average_pooling_3d
Global Average pooling operation for 3D data.
Zero-padding layer for 1D input (e.g. temporal sequence).
Computes the cosine similarity between the labels and predictions
Returns predictions for a single batch of samples.
Wraps arbitrary expression as a layer
Returns variables but with zero gradient w.r.t. every other variable.
Sum of the values in a tensor, alongside the specified axis.
Calculates the number of false negatives
Variance of a tensor, alongside the specified axis.
Converts a text to a sequence of indexes in a fixed-size hashing space.
text_dataset_from_directory
Generate a tf.data.Dataset from text files in a directory
layer_activity_regularization
Layer that applies an update to the cost function based on input activity.
metric_mean_absolute_percentage_error
Computes the mean absolute percentage error between y_true and y_pred
Computes the mean Intersection-Over-Union metric
Instantiates a variable and returns it.
Computes the recall of the predictions with respect to the labels
Layer that adds a list of inputs.
1D Convolutional LSTM
Transposed 3D convolution layer (sometimes called Deconvolution).
layer_global_max_pooling_2d
Global max pooling operation for spatial data.
layer_layer_normalization
Layer normalization layer (Ba et al., 2016).
A preprocessing layer which normalizes continuous features.
Permute the dimensions of an input according to a given pattern
metric_recall_at_precision
Computes best recall where precision is >= specified value
Adadelta optimizer.
Applies Dropout to the input.
Scaled Exponential Linear Unit.
Rectified Linear Unit activation function
Average pooling for temporal data.
Depthwise separable 1D convolution.
Base class for recurrent layers
Wrapper allowing a stack of RNN cells to behave as a single cell
Max pooling operation for temporal data.
Masks a sequence by using a mask value to skip timesteps.
layer_multi_head_attention
MultiHeadAttention layer
Randomly vary the width of a batch of images during training
A preprocessing layer which randomly zooms images during training.
Layer that multiplies (element-wise) a list of inputs.
Cell class for SimpleRNN
Stochastic gradient descent optimizer
Adagrad optimizer.
A preprocessing layer which maps string features to integer indices.
(Deprecated) loss_cosine_proximity
Loss functions
Max pooling operation for 3D data (spatial or spatio-temporal).
Adjust the contrast of an image or images by a random factor
Max pooling operation for spatial data.
Turns positive integers (indexes) into dense vectors of fixed size.
Calculates how often predictions match binary labels
Transposed 1D convolution layer (sometimes called Deconvolution).
Average pooling operation for spatial data.
Randomly crop the images to target height and width
Spatial 1D version of Dropout.
metric_binary_crossentropy
Computes the crossentropy metric between the labels and predictions
Generates a word rank-based probabilistic sampling table.
Calculates the number of false positives
metric-or-Metric
Cropping layer for 1D input (e.g. temporal sequence).
2D convolution layer (e.g. spatial convolution over images).
Computes the hinge metric between y_true and y_pred
metric_kullback_leibler_divergence
Computes Kullback-Leibler divergence
metric_mean_relative_error
Computes the mean relative error by normalizing with the given values
metric_mean_squared_error
Computes the mean squared error between labels and predictions
metric_sparse_top_k_categorical_accuracy
Computes how often integer targets are in the top K predictions
Layer that subtracts two inputs.
Cropping layer for 2D input (e.g. picture).
Apply additive zero-centered Gaussian noise.
Pads sequences to the same length
L1 and L2 regularization
metric_specificity_at_sensitivity
Computes best specificity where sensitivity is >= specified value
Computes the logarithm of the hyperbolic cosine of the prediction error
layer_global_average_pooling_1d
Global average pooling operation for temporal data.
Zero-padding layer for 2D input (e.g. picture).
Zero-padding layer for 3D data (spatial or spatio-temporal).
A preprocessing layer which maps text features to integer sequences.
Reset the states for a layer
Computes the element-wise (weighted) mean of the given tensors
metric_mean_squared_logarithmic_error
Computes the mean squared logarithmic error
metric_sparse_categorical_accuracy
Calculates how often predictions match integer labels
Cell class for the GRU layer
Wraps a stateless metric function with the Mean metric
(Deprecated) Generates probability or class probability predictions for the input samples.
Layer that computes the minimum (element-wise) of a list of inputs.
Layer that computes the maximum (element-wise) of a list of inputs.
metric_sparse_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
A preprocessing layer which hashes and bins categorical features.
Convert text to a sequence of words (or tokens).
One-hot encode a text into a list of word indexes in a vocabulary of size n.
Utility function for generating batches of temporal data.
Randomly rotate each image
Objects exported from other packages
Calculates how often predictions equal labels
Repeats the input n times.
Spatial 2D version of Dropout.
Randomly translate each image during training
Spatial 3D version of Dropout.
Multiplies inputs by scale and adds offset
Computes the Poisson metric between y_true and y_pred
Computes the squared hinge metric
sequential_model_input_layer
Computes the (weighted) sum of the given values
Single gradient update or model evaluation over one batch of samples.
RMSProp optimizer
Nesterov Adam optimizer
Serialize a model to an R object
Save/Load model weights using HDF5 files
Save model weights in the SavedModel format
Converts a class vector (integers) to binary class matrix.
Approximates the AUC (Area under the curve) of the ROC or PR curves
Model configuration as JSON
metric_sensitivity_at_specificity
Computes best sensitivity where specificity is >= specified value
metric_root_mean_squared_error
Computes root mean squared error metric between y_true and y_pred
(Deprecated) metric_cosine_proximity
Computes the categorical hinge metric between y_true and y_pred
Calculates the number of true positives
Transforms each text in texts to a sequence of integers.
Load a Keras model from the Saved Model format
texts_to_sequences_generator
Transforms each text in texts to a sequence of integers.
Select a Keras implementation and backend
Export to Saved Model format
Replicates a model on different GPUs.
Normalize a matrix or nd-array
plot.keras_training_history
Plot training history
Save a text tokenizer to an external file
Pipe operator
Remove the last layer in a model
Model configuration as YAML
Assign values to names
predict.keras.engine.training.Model
Generate predictions from a Keras model
Save/Load models using HDF5 files
Convert a list of sequences into a matrix.
summary.keras.engine.training.Model
Print a summary of a Keras model
Generates skipgram word pairs.
Save/Load models using SavedModel format
Provide a scope with mappings of names to custom objects
Text tokenization utility
This layer wrapper allows applying a layer to every temporal slice of an input
timeseries_dataset_from_array
Creates a dataset of sliding windows over a timeseries provided as array
Convert a list of texts to a matrix.