Instantiates the EfficientNetB0 architecture
(Deprecated) Create a custom Layer
Fits the state of the preprocessing layer to the data being passed
Metric
Instantiates the DenseNet architecture.
Activation functions
(Deprecated) Base R6 class for Keras layers
(Deprecated) Base R6 class for Keras callbacks
(Deprecated) Base R6 class for Keras constraints
(Deprecated) Base R6 class for Keras wrappers
application_inception_resnet_v2
Inception-ResNet v2 model, with weights trained on ImageNet
Instantiates the ResNet architecture
Inception V3 model, with weights pre-trained on ImageNet.
MobileNetV2 model architecture
Instantiates the Xception architecture
Keras backend tensor engine
MobileNet model architecture.
Instantiates the MobileNetV3Large architecture
VGG16 and VGG19 models for Keras.
Instantiates a NASNet model.
callback_model_checkpoint
Save the model after every epoch.
Bidirectional wrapper for RNNs
callback_backup_and_restore
Callback to back up and restore the training state
callback_reduce_lr_on_plateau
Reduce learning rate when a metric has stopped improving.
Stop training when a monitored quantity has stopped improving.
Create a custom callback
Callback that streams epoch results to a csv file
Callback that prints metrics to stdout.
Callback used to stream events to a server.
callback_learning_rate_scheduler
Learning rate scheduler.
TensorBoard basic visualizations
Weight constraints
callback_terminate_on_nan
Callback that terminates training when a NaN loss is encountered.
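The callbacks listed above are typically combined in a single list and passed to fit(); a minimal sketch, assuming a compiled model `model` and training data `x_train`/`y_train` (all hypothetical):

```r
library(keras)

# Checkpoint best weights, lower the LR on plateaus, stop early,
# and abort on NaN loss -- all callbacks described above.
cbs <- list(
  callback_model_checkpoint("weights.h5", save_best_only = TRUE),
  callback_reduce_lr_on_plateau(monitor = "val_loss", factor = 0.5, patience = 3),
  callback_early_stopping(monitor = "val_loss", patience = 10),
  callback_terminate_on_nan()
)

model %>% fit(
  x_train, y_train,
  epochs = 50,
  validation_split = 0.2,
  callbacks = cbs
)
```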
Count the total number of scalars composing the weights.
compile.keras.engine.training.Model
Configure a Keras model for training
Clone a model instance.
Create a Keras Layer wrapper
Create a Keras Layer
MNIST database of handwritten digits
IMDB Movie reviews sentiment classification
Reuters newswire topics classification
evaluate.keras.engine.training.Model
Evaluate a Keras model
Custom metric function
CIFAR100 small image classification
(Deprecated) Create a Keras Wrapper
Fashion-MNIST database of fashion articles
CIFAR10 small image classification
Boston housing price regression dataset
export_savedmodel.keras.engine.training.Model
Export a Saved Model
(Deprecated) Evaluates the model on a data generator.
Update tokenizer internal vocabulary based on a list of texts or list of sequences.
fit.keras.engine.training.Model
Train a Keras model
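The compile() and fit() entries above form the usual build/configure/train sequence; a minimal sketch, where `x_train`/`y_train` and the layer sizes are illustrative assumptions:

```r
library(keras)

# Define a small sequential model (sizes are illustrative).
model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = 784) %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = "softmax")

# Configure it for training ...
model %>% compile(
  optimizer = optimizer_rmsprop(),
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)

# ... then train (x_train / y_train are assumed to exist).
history <- model %>% fit(x_train, y_train, epochs = 5, batch_size = 128)
```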
Generates batches of augmented/normalized data from image data and labels
Fit image data generator internal statistics to some sample data.
flow_images_from_directory
Generates batches of data from images in a directory (with optional augmented/normalized data)
Retrieve the next item from a generator
flow_images_from_dataframe
Takes the dataframe and the path to a directory and generates batches of augmented/normalized data.
Freeze and unfreeze weights
(Deprecated) Fits the model on data yielded batch-by-batch by a generator.
Layer/Model configuration
Representation of HDF5 dataset to be used instead of an R array
Retrieve tensors for layers with multiple nodes
Downloads a file from a URL if it is not already in the cache.
(Deprecated) Generates batches of image data with real-time data augmentation. The data will be looped over (in batches).
Layer/Model weights as R arrays
Retrieves a layer based on either its name (unique) or index.
Make a python class constructor
Make an Active Binding
Loads an image into PIL format.
image_dataset_from_directory
Create a dataset from a directory
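For image_dataset_from_directory(), the expected layout is one subdirectory per class; a minimal sketch (the "images/" path is hypothetical):

```r
library(keras)

# Directory layout assumed: images/<class_name>/<file>.jpg
train_ds <- image_dataset_from_directory(
  "images/",
  validation_split = 0.2,
  subset = "training",
  seed = 42,
  image_size = c(180, 180),
  batch_size = 32
)
```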
initializer_glorot_uniform
Glorot uniform initializer, also called Xavier uniform initializer.
Initializer that generates tensors initialized to a constant value.
Keras implementation
imagenet_decode_predictions
Decodes the prediction of an ImageNet model.
Initializer that generates the identity matrix.
He uniform variance scaling initializer.
imagenet_preprocess_input
Preprocesses a tensor or array encoding a batch of images.
3D array representation of images
initializer_glorot_normal
Glorot normal initializer, also called Xavier normal initializer.
He normal initializer.
LeCun normal initializer.
initializer_lecun_uniform
LeCun uniform initializer.
initializer_truncated_normal
Initializer that generates a truncated normal distribution.
initializer_variance_scaling
Initializer capable of adapting its scale to the shape of weights.
initializer_random_normal
Initializer that generates tensors with a normal distribution.
Initializer that generates tensors initialized to 0.
Install TensorFlow and Keras, including all Python dependencies
initializer_random_uniform
Initializer that generates tensors with a uniform distribution.
Returns the index of the maximum value along an axis.
Creates a 1D tensor containing a sequence of integers.
Batchwise dot product.
Turn an nD tensor into a 2D tensor with same 1st dimension.
Cast an array to the default Keras float type.
k_categorical_crossentropy
Categorical crossentropy between an output tensor and a target tensor.
Active Keras backend
Adds a bias vector to a tensor.
Sets the values of many tensor variables at once.
Returns the index of the minimum value along an axis.
Applies batch normalization on x given mean, var, beta and gamma.
Casts a tensor to a different dtype and returns it.
Binary crossentropy between an output tensor and a target tensor.
Returns the value of more than one tensor variable.
Bitwise reduction (logical AND).
Bitwise reduction (logical OR).
Element-wise absolute value.
Check if Keras is Available
Element-wise value clipping.
Destroys the current TF graph and creates a new one.
Initializer that generates tensors initialized to 1.
Initializer that generates a random orthogonal matrix.
2D convolution.
1D convolution.
2D deconvolution (i.e. transposed convolution).
Runs CTC loss algorithm on each batch element.
Returns the static number of elements in a Keras variable or tensor.
Concatenates a list of tensors alongside the specified axis.
3D deconvolution (i.e. transposed convolution).
Cumulative sum of the values in a tensor, alongside the specified axis.
Cumulative product of the values in a tensor, alongside the specified axis.
Computes cos of x element-wise.
k_ctc_label_dense_to_sparse
Converts CTC labels from dense to sparse.
Decodes the output of a softmax.
Fuzz factor used in numeric expressions.
Exponential linear unit.
Element-wise exponential.
3D convolution.
Depthwise 2D convolution with separable filters.
Adds a 1-sized dimension at index axis.
Returns the value of a variable.
Get the uid for the default graph.
Instantiates an identity matrix and returns it.
Instantiates a Keras function
Reduce elems using fn to combine them from right to left.
Multiplies 2 tensors (and/or variables) and returns a tensor.
Creates a constant tensor.
Returns whether x is a symbolic tensor.
Returns a tensor with the same content as the input tensor.
Normalizes a tensor w.r.t. the L2 norm alongside the specified axis.
Segment-wise linear approximation of sigmoid.
Flatten a tensor.
Returns whether x is a placeholder.
Returns whether a tensor is a sparse tensor.
Returns the gradients of variables w.r.t. loss.
Returns the shape of a variable.
Element-wise truth value of (x < y).
Returns the learning phase flag.
Returns the dtype of a Keras tensor or variable, as a string.
Default image data format convention ('channels_first' or 'channels_last').
Reduce elems using fn to combine them from left to right.
Sets entries in x to zero at random, while scaling the entire tensor.
TF session to be used by the backend.
Selects x in test phase, and alt otherwise.
Retrieves the elements of indices indices in the tensor reference.
Default float type
Evaluates the value of a variable.
Element-wise equality between two tensors.
Returns the shape of tensor or variable as a list of int or NULL entries.
Element-wise truth value of (x <= y).
Returns whether x is a Keras tensor.
Element-wise truth value of (x > y).
Maximum value in a tensor.
Element-wise maximum of two tensors.
Map the function fn over the elements elems and return the outputs.
Selects x in train phase, and alt otherwise.
3D Pooling.
Element-wise truth value of (x >= y).
Returns whether the targets are in the top k predictions.
Returns the number of axes in a tensor, as an integer.
Compute the moving average of a variable.
Mean of a tensor, alongside the specified axis.
Apply 2D conv with un-shared weights.
Element-wise log.
Instantiates a variable with values drawn from a normal distribution.
Element-wise exponentiation.
Sets the learning phase to a fixed value.
Sets the value of a variable, from an R array.
k_random_uniform_variable
Instantiates a variable with values drawn from a uniform distribution.
Rectified linear unit.
k_manual_variable_initialization
Sets the manual variable initialization flag.
(Deprecated) Computes log(sum(exp(elements across dimensions of a tensor))).
Element-wise inequality between two tensors.
k_normalize_batch_in_training
Computes mean and std for batch then applies batch_normalization on batch.
Returns a tensor with random binomial distribution of values.
Returns the symbolic shape of a tensor or variable.
Returns a tensor with uniform distribution of values.
Instantiates an all-ones variable of the same shape as another tensor.
Instantiates an all-ones tensor variable and returns it.
Prints message and the tensor value when evaluated.
Computes the one-hot representation of an integer tensor.
Removes a 1-dimension from the tensor at index axis.
Multiplies the values in a tensor, alongside the specified axis.
Apply 1D conv with un-shared weights.
Element-wise rounding to the closest integer.
Returns a tensor with normal distribution of values.
Instantiates an all-zeros variable of the same shape as another tensor.
Instantiates an all-zeros variable and returns it.
Permutes axes in a tensor.
Resizes the images contained in a 4D tensor.
layer_activation_parametric_relu
Parametric Rectified Linear Unit.
Resizes the volume contained in a 5D tensor.
Element-wise sigmoid.
Keras Model
Keras array object
Softmax of a tensor.
Softplus of a tensor.
Creates a tensor by tiling x by n.
Converts a sparse tensor into a dense tensor and returns it.
Additive attention layer, a.k.a. Bahdanau-style attention
Applies Alpha Dropout to the input.
2D convolution with separable filters.
Minimum value in a tensor.
Reset graph identifiers.
Keras Model composed of a linear stack of layers
R interface to Keras
Computes sin of x element-wise.
Element-wise sign.
Reshapes a tensor to the specified shape.
Dot-product attention layer, a.k.a. Luong-style attention
Main Keras module
Standard deviation of a tensor, alongside the specified axis.
Stacks a list of rank R tensors into a rank R+1 tensor.
Transposed 1D convolution layer (sometimes called Deconvolution).
Element-wise minimum of two tensors.
2D convolution layer (e.g. spatial convolution over images).
Iterates over the time dimension of a tensor
(Deprecated) Create a Keras custom model
Variance of a tensor, alongside the specified axis.
Element-wise square root.
Sum of the values in a tensor, alongside the specified axis.
3D convolution layer (e.g. spatial convolution over volumes).
Transposed 2D convolution layer (sometimes called Deconvolution).
Element-wise square.
Reverse a tensor along the specified axes.
Pads the 2nd and 3rd dimensions of a 4D tensor.
Pads 5D tensor with zeros along the depth, height, width dimensions.
Instantiates a placeholder tensor and returns it.
layer_activity_regularization
Layer that applies an update to the cost function based on input activity.
2D Pooling.
Switches between two operations depending on a scalar value.
Update the value of x by adding increment.
Transposes a tensor and returns it.
Returns a tensor with truncated random normal distribution of values.
Average pooling for temporal data.
Average pooling operation for 3D data (spatial or spatio-temporal).
Scaled Exponential Linear Unit.
Instantiates a variable and returns it.
Average pooling operation for spatial data.
layer_batch_normalization
Layer that normalizes its inputs
Rectified Linear Unit activation function
Softsign of a tensor.
Repeats a 2D tensor.
Apply additive zero-centered Gaussian noise.
Apply multiplicative 1-centered Gaussian noise.
(Deprecated) Fast GRU implementation backed by CuDNN.
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
Repeats the elements of a tensor along an axis.
Convolutional LSTM.
3D Convolutional LSTM
Returns variables but with zero gradient w.r.t. every other variable.
Depthwise 1D convolution
A preprocessing layer which buckets continuous features by ranges.
Gated Recurrent Unit - Cho et al.
Cell class for the GRU layer
Depthwise separable 2D convolution.
Add a densely-connected NN layer to an output
Constructs a DenseFeatures.
Layer that averages a list of inputs.
Layer that concatenates a list of inputs.
layer_global_max_pooling_1d
Global max pooling operation for temporal data.
1D convolution layer (e.g. temporal convolution).
layer_global_average_pooling_3d
Global Average pooling operation for 3D data.
(Deprecated) Fast LSTM implementation backed by CuDNN.
k_sparse_categorical_crossentropy
Categorical crossentropy with integer targets.
Flattens an input
layer_global_max_pooling_2d
Global max pooling operation for spatial data.
Turns positive integers (indexes) into dense vectors of fixed size
layer_global_max_pooling_3d
Global Max pooling operation for 3D data.
Unstack rank R tensor into a list of rank R-1 tensors.
Update the value of x to new_x.
Element-wise tanh.
Apply an activation function to an output.
Pads the middle dimension of a 3D tensor.
Softmax activation function.
layer_activation_leaky_relu
Leaky version of a Rectified Linear Unit.
Layer that computes a dot product between samples in two tensors.
layer_activation_thresholded_relu
Thresholded Rectified Linear Unit.
Wraps arbitrary expression as a layer
Cell class for the LSTM layer
Masks a sequence by using a mask value to skip timesteps.
Applies Dropout to the input.
1D Convolutional LSTM
A preprocessing layer which maps integer features to contiguous ranges.
Max pooling operation for temporal data.
layer_global_average_pooling_2d
Global average pooling operation for spatial data.
Transposed 3D convolution layer (sometimes called Deconvolution).
layer_global_average_pooling_1d
Global average pooling operation for temporal data.
Max pooling operation for spatial data.
Layer that adds a list of inputs.
Layer that computes the minimum (element-wise) of a list of inputs.
Layer that computes the maximum (element-wise) of a list of inputs.
Max pooling operation for 3D data (spatial or spatio-temporal).
Layer that multiplies (element-wise) a list of inputs.
A preprocessing layer which normalizes continuous features.
layer_layer_normalization
Layer normalization layer (Ba et al., 2016).
layer_locally_connected_1d
Locally-connected layer for 1D inputs.
Permute the dimensions of an input according to a given pattern
A preprocessing layer which randomly adjusts brightness during training
Randomly crop the images to target height and width
Adjust the contrast of an image or images by a random factor
layer_multi_head_attention
MultiHeadAttention layer
Reshapes an output to a certain shape.
Image resizing layer
Randomly vary the height of a batch of images during training
Randomly flip each image horizontally and vertically
Randomly rotate each image
Repeats the input n times.
Randomly translate each image during training
Multiplies inputs by scale and adds offset
Cropping layer for 2D input (e.g. picture).
Update the value of x by subtracting decrement.
A preprocessing layer which encodes integer features.
Exponential Linear Unit.
Cropping layer for 1D input (e.g. temporal sequence).
Crop the central portion of the images to target height and width
A preprocessing layer which randomly zooms images during training.
Randomly vary the width of a batch of images during training
Long Short-Term Memory unit - Hochreiter 1997.
A preprocessing layer which hashes and bins categorical features.
Input layer
layer_locally_connected_2d
Locally-connected layer for 2D inputs.
Spatial 1D version of Dropout.
Cell class for SimpleRNN
Spatial 2D version of Dropout.
A preprocessing layer which maps string features to integer indices.
Fully-connected RNN where the output is to be fed back to input.
Depthwise separable 1D convolution.
Separable 2D convolution.
Wrapper allowing a stack of RNN cells to behave as a single cell
Base class for recurrent layers
Unit normalization layer
Upsampling layer for 1D inputs.
Layer that subtracts two inputs.
Spatial 3D version of Dropout.
learning_rate_schedule_cosine_decay
A LearningRateSchedule that uses a cosine decay schedule
Zero-padding layer for 3D data (spatial or spatio-temporal).
Upsampling layer for 3D inputs.
A preprocessing layer which maps text features to integer sequences.
Upsampling layer for 2D inputs.
Zero-padding layer for 1D input (e.g. temporal sequence).
Zero-padding layer for 2D input (e.g. picture).
learning_rate_schedule_inverse_time_decay
A LearningRateSchedule that uses an inverse time decay schedule
learning_rate_schedule_exponential_decay
A LearningRateSchedule that uses an exponential decay schedule
(Deprecated) loss_cosine_proximity
learning_rate_schedule_piecewise_constant_decay
A LearningRateSchedule that uses a piecewise constant decay schedule
metric-or-Metric
Generates a word rank-based probabilistic sampling table.
learning_rate_schedule_polynomial_decay
A LearningRateSchedule that uses a polynomial decay schedule
Loss functions
learning_rate_schedule_cosine_decay_restarts
A LearningRateSchedule that uses a cosine decay schedule with restarts
Calculates how often predictions equal labels
(Deprecated) metric_cosine_proximity
metric_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Approximates the AUC (Area under the curve) of the ROC or PR curves
Computes the cosine similarity between the labels and predictions
Calculates the number of false negatives
Calculates the number of false positives
Computes the categorical hinge metric between y_true and y_pred
Calculates how often predictions match binary labels
metric_categorical_accuracy
Calculates how often predictions match one-hot labels
metric_binary_crossentropy
Computes the crossentropy metric between the labels and predictions
Computes the hinge metric between y_true and y_pred
metric_kullback_leibler_divergence
Computes Kullback-Leibler divergence
metric_mean_absolute_error
Computes the mean absolute error between the labels and predictions
metric_mean_absolute_percentage_error
Computes the mean absolute percentage error between y_true and y_pred
Computes the mean Intersection-Over-Union metric
Computes the (weighted) mean of the given values
Computes the logarithm of the hyperbolic cosine of the prediction error
metric_mean_relative_error
Computes the mean relative error by normalizing with the given values
metric_mean_squared_error
Computes the mean squared error between labels and predictions
metric_mean_squared_logarithmic_error
Computes the mean squared logarithmic error
metric_sparse_categorical_accuracy
Calculates how often predictions match integer labels
metric_precision_at_recall
Computes best precision where recall is >= specified value
metric_recall_at_precision
Computes best recall where precision is >= specified value
Computes the element-wise (weighted) mean of the given tensors
Computes the recall of the predictions with respect to the labels
metric_sensitivity_at_specificity
Computes best sensitivity where specificity is >= specified value
Computes the Poisson metric between y_true and y_pred
Computes the precision of the predictions with respect to the labels
Wraps a stateless metric function with the Mean metric
metric_root_mean_squared_error
Computes root mean squared error metric between y_true and y_pred
Computes the squared hinge metric
metric_top_k_categorical_accuracy
Computes how often targets are in the top K predictions
Computes the (weighted) sum of the given values
Load a Keras model from the Saved Model format
metric_specificity_at_sensitivity
Computes best specificity where sensitivity is >= specified value
Calculates the number of true positives
metric_sparse_top_k_categorical_accuracy
Computes how often integer targets are in the top K predictions
Calculates the number of true negatives
metric_sparse_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Model configuration as JSON
Assign values to names
new_learning_rate_schedule_class
Create a new learning rate schedule type
Optimizer that implements the Adam algorithm
(Deprecated) Export to Saved Model format
Optimizer that implements the Adagrad algorithm
Define new keras types
Normalize a matrix or nd-array
Optimizer that implements the Adadelta algorithm
Model configuration as YAML
(Deprecated) Replicates a model on different GPUs.
Gradient descent (with momentum) optimizer
Optimizer that implements the Nadam algorithm
Remove the last layer in a model
Optimizer that implements the RMSprop algorithm
Pads sequences to the same length
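pad_sequences() truncates or left-pads (by default) a list of integer vectors into a rectangular matrix; a minimal sketch:

```r
library(keras)

seqs <- list(c(1, 2, 3), c(4, 5), c(6))

# Default: pad with zeros at the start ("pre") up to maxlen.
pad_sequences(seqs, maxlen = 4)
# Pad at the end instead.
pad_sequences(seqs, maxlen = 4, padding = "post")
```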
plot.keras.engine.training.Model
Plot a Keras model
Pipe operator
Optimizer that implements the Adamax algorithm
plot.keras_training_history
Plot training history
Optimizer that implements the FTRL algorithm
Save/Load models using SavedModel format
Save/Load models using HDF5 files
Returns predictions for a single batch of samples.
(Deprecated) Generates probability or class probability predictions for the input samples.
L1 and L2 regularization
Reset the states for a layer
A regularizer that encourages input vectors to be orthogonal to each other
predict.keras.engine.training.Model
Generate predictions from a Keras model
Objects exported from other packages
(Deprecated) Generates predictions for the input samples from a data generator.
Generates skipgram word pairs.
summary.keras.engine.training.Model
Print a summary of a Keras model
Serialize a model to an R object
Save model weights in the SavedModel format
Save/Load model weights using HDF5 files
sequential_model_input_layer
Converts a text to a sequence of indexes in a fixed-size hashing space.
text_dataset_from_directory
Generate a tf.data.Dataset from text files in a directory
Convert a list of sequences into a matrix.
Save a text tokenizer to an external file
Convert text to a sequence of words (or tokens).
One-hot encode a text into a list of word indexes in a vocabulary of size n.
Converts a class vector (integers) to binary class matrix.
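to_categorical() is the usual way to one-hot encode integer class labels before using categorical_crossentropy; a minimal sketch:

```r
library(keras)

labels <- c(0, 1, 2, 1)        # integer classes, zero-based
to_categorical(labels, num_classes = 3)
# Each row is a one-hot vector, e.g. label 2 -> c(0, 0, 1)
```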
texts_to_sequences_generator
Transforms each text in texts into a sequence of integers.
Convert a list of texts to a matrix.
Text tokenization utility
This layer wrapper allows applying a layer to every temporal slice of an input
Transforms each text in texts into a sequence of integers.
Utility function for generating batches of temporal data.
timeseries_dataset_from_array
Creates a dataset of sliding windows over a timeseries provided as array
Provide a scope with mappings of names to custom objects
zip lists
Single gradient update or model evaluation over one batch of samples.
Select a Keras implementation and backend