Instantiates the DenseNet architecture.
Activation functions
(Deprecated) Base R6 class for Keras wrappers
Instantiates the EfficientNetB0 architecture
Fits the state of the preprocessing layer to the data being passed
Metric
(Deprecated) Base R6 class for Keras callbacks
(Deprecated) Base R6 class for Keras constraints
(Deprecated) Create a custom Layer
(Deprecated) Base R6 class for Keras layers
application_inception_resnet_v2
Inception-ResNet v2 model, with weights trained on ImageNet
Inception V3 model, with weights pre-trained on ImageNet.
Instantiates the MobileNetV3Large architecture
Instantiates the ResNet architecture
MobileNet model architecture.
Instantiates the Xception architecture
Keras backend tensor engine
VGG16 and VGG19 models for Keras.
MobileNetV2 model architecture
Instantiates a NASNet model.
Bidirectional wrapper for RNNs
Callback that prints metrics to stdout.
callback_model_checkpoint
Save the model after every epoch.
Stop training when a monitored quantity has stopped improving.
Callback that streams epoch results to a csv file
callback_reduce_lr_on_plateau
Reduce learning rate when a metric has stopped improving.
callback_learning_rate_scheduler
Learning rate scheduler.
callback_backup_and_restore
Callback to back up and restore the training state
Callback used to stream events to a server.
Create a custom callback
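The callback entries above are typically combined and passed to `fit()`. A minimal usage sketch (not part of the index; assumes the keras package is installed and that `model`, `x_train`, and `y_train` already exist):

```r
library(keras)

# Stop early, keep the best weights on disk, and lower the learning
# rate when validation loss plateaus.
cbs <- list(
  callback_early_stopping(monitor = "val_loss", patience = 5),
  callback_model_checkpoint("best_model.h5", save_best_only = TRUE),
  callback_reduce_lr_on_plateau(monitor = "val_loss", factor = 0.5, patience = 3)
)

model %>% fit(
  x_train, y_train,
  epochs = 50,
  validation_split = 0.2,
  callbacks = cbs
)
```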
Create a Keras Layer
Weight constraints
Count the total number of scalars composing the weights.
TensorBoard basic visualizations
(Deprecated) Create a Keras Wrapper
Custom metric function
compile.keras.engine.training.Model
Configure a Keras model for training
callback_terminate_on_nan
Callback that terminates training when a NaN loss is encountered.
Clone a model instance.
Create a Keras Layer wrapper
Reuters newswire topics classification
evaluate.keras.engine.training.Model
Evaluate a Keras model
export_savedmodel.keras.engine.training.Model
Export a Saved Model
Boston housing price regression dataset
CIFAR100 small image classification
(Deprecated) Evaluates the model on a data generator.
IMDB Movie reviews sentiment classification
CIFAR10 small image classification
MNIST database of handwritten digits
fit.keras.engine.training.Model
Train a Keras model
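A rough sketch tying together the compile/fit entries above (illustrative only; assumes keras is installed, `x` is a numeric matrix with 10 columns, and `y` is a numeric vector):

```r
library(keras)

# Define a small sequential model
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = 10) %>%
  layer_dense(units = 1)

# Configure it for training
model %>% compile(
  optimizer = optimizer_rmsprop(),
  loss = "mse",
  metrics = "mean_absolute_error"
)

# Train and plot the training history
history <- model %>% fit(x, y, epochs = 10, batch_size = 32)
plot(history)  # see plot.keras_training_history
```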
flow_images_from_directory
Generates batches of data from images in a directory (with optional augmented/normalized data)
Retrieve the next item from a generator
(Deprecated) Fits the model on data yielded batch-by-batch by a generator.
Fashion-MNIST database of fashion articles
Freeze and unfreeze weights
Generates batches of augmented/normalized data from image data and labels
Layer/Model configuration
Update tokenizer internal vocabulary based on a list of texts or list of sequences.
Fit image data generator internal statistics to some sample data.
flow_images_from_dataframe
Takes the dataframe and the path to a directory and generates batches of augmented/normalized data.
Loads an image into PIL format.
Make an Active Binding
Retrieve tensors for layers with multiple nodes
image_dataset_from_directory
Create a dataset from a directory
Make a python class constructor
Downloads a file from a URL if it is not already in the cache.
Layer/Model weights as R arrays
Representation of HDF5 dataset to be used instead of an R array
Generate batches of image data with real-time data augmentation. The data will be looped over (in batches).
Retrieves a layer based on either its name (unique) or index.
initializer_glorot_uniform
Glorot uniform initializer, also called Xavier uniform initializer.
Initializer that generates the identity matrix.
imagenet_decode_predictions
Decodes the prediction of an ImageNet model.
Initializer that generates tensors initialized to a constant value.
initializer_glorot_normal
Glorot normal initializer, also called Xavier normal initializer.
imagenet_preprocess_input
Preprocesses a tensor or array encoding a batch of images.
He uniform variance scaling initializer.
3D array representation of images
Keras implementation
He normal initializer.
Initializer that generates tensors initialized to 1.
initializer_truncated_normal
Initializer that generates a truncated normal distribution.
initializer_variance_scaling
Initializer capable of adapting its scale to the shape of weights.
initializer_random_normal
Initializer that generates tensors with a normal distribution.
initializer_lecun_uniform
LeCun uniform initializer.
LeCun normal initializer.
Install TensorFlow and Keras, including all Python dependencies
Initializer that generates tensors initialized to 0.
Initializer that generates a random orthogonal matrix.
initializer_random_uniform
Initializer that generates tensors with a uniform distribution.
Batchwise dot product.
Returns the index of the minimum value along an axis.
Active Keras backend
Check if Keras is Available
Returns the index of the maximum value along an axis.
Bitwise reduction (logical OR).
Creates a 1D tensor containing a sequence of integers.
Element-wise absolute value.
Bitwise reduction (logical AND).
Turn a nD tensor into a 2D tensor with same 1st dimension.
Adds a bias vector to a tensor.
Sets the values of many tensor variables at once.
k_categorical_crossentropy
Categorical crossentropy between an output tensor and a target tensor.
Binary crossentropy between an output tensor and a target tensor.
Element-wise value clipping.
Destroys the current TF graph and creates a new one.
Casts a tensor to a different dtype and returns it.
Applies batch normalization on x given mean, var, beta and gamma.
Cast an array to the default Keras float type.
Returns the value of more than one tensor variable.
3D convolution.
2D deconvolution (i.e. transposed convolution).
Concatenates a list of tensors alongside the specified axis.
1D convolution.
3D deconvolution (i.e. transposed convolution).
Computes cos of x element-wise.
Creates a constant tensor.
Depthwise 2D convolution with separable filters.
Sets entries in x to zero at random, while scaling the entire tensor.
Cumulative sum of the values in a tensor, alongside the specified axis.
Cumulative product of the values in a tensor, alongside the specified axis.
Returns the dtype of a Keras tensor or variable, as a string.
Returns the static number of elements in a Keras variable or tensor.
Runs CTC loss algorithm on each batch element.
2D convolution.
Element-wise exponential.
Reduce elems using fn to combine them from left to right.
Element-wise equality between two tensors.
Default float type
Multiplies 2 tensors (and/or variables) and returns a tensor.
Fuzz factor used in numeric expressions.
Instantiates an identity matrix and returns it.
Exponential linear unit.
Returns the value of a variable.
Get the uid for the default graph.
Decodes the output of a softmax.
Evaluates the value of a variable.
Adds a 1-sized dimension at index axis.
Reduce elems using fn to combine them from right to left.
Flatten a tensor.
Instantiates a Keras function
Returns whether x is a symbolic tensor.
TF session to be used by the backend.
Retrieves the elements of indices indices in the tensor reference.
Returns the gradients of variables w.r.t. loss.
(Deprecated) Computes log(sum(exp(elements across dimensions of a tensor))).
Normalizes a tensor w.r.t. the L2 norm alongside the specified axis.
Returns the shape of a variable.
Default image data format convention ('channels_first' or 'channels_last').
Selects x in test phase, and alt otherwise.
Returns the shape of tensor or variable as a list of int or NULL entries.
Returns a tensor with the same content as the input tensor.
Segment-wise linear approximation of sigmoid.
Returns whether a tensor is a sparse tensor.
Returns whether x is a placeholder.
k_manual_variable_initialization
Sets the manual variable initialization flag.
Element-wise minimum of two tensors.
Minimum value in a tensor.
3D Pooling.
k_ctc_label_dense_to_sparse
Converts CTC labels from dense to sparse.
Maximum value in a tensor.
Map the function fn over the elements elems and return the outputs.
Instantiates an all-ones tensor variable and returns it.
Element-wise maximum of two tensors.
Element-wise truth value of (x <= y).
Mean of a tensor, alongside the specified axis.
Apply 1D conv with un-shared weights.
Returns a tensor with uniform distribution of values.
Element-wise exponentiation.
Instantiates a variable with values drawn from a normal distribution.
Element-wise truth value of (x > y).
Element-wise inequality between two tensors.
k_normalize_batch_in_training
Computes mean and std for batch then applies batch_normalization on batch.
2D Pooling.
k_random_uniform_variable
Instantiates a variable with values drawn from a uniform distribution.
Returns a tensor with random binomial distribution of values.
Returns a tensor with normal distribution of values.
Iterates over the time dimension of a tensor
Instantiates a placeholder tensor and returns it.
Element-wise truth value of (x >= y).
Reverse a tensor along the specified axes.
Pads 5D tensor with zeros along the depth, height, width dimensions.
Returns whether the targets are in the top k predictions.
Pads the 2nd and 3rd dimensions of a 4D tensor.
Selects x in train phase, and alt otherwise.
Reset graph identifiers.
Prints message and the tensor value when evaluated.
Reshapes a tensor to the specified shape.
2D convolution with separable filters.
Multiplies the values in a tensor, alongside the specified axis.
Softsign of a tensor.
Computes the one-hot representation of an integer tensor.
Element-wise truth value of (x < y).
Element-wise sign.
Rectified linear unit.
Returns the learning phase flag.
Repeats a 2D tensor.
Sets the learning phase to a fixed value.
Instantiates an all-ones variable of the same shape as another tensor.
Element-wise log.
Apply 2D conv with un-shared weights.
Sets the value of a variable, from an R array.
Compute the moving average of a variable.
Returns whether x is a Keras tensor.
Computes sin of x element-wise.
Returns the number of axes in a tensor, as an integer.
Element-wise rounding to the closest integer.
Returns variables but with zero gradient w.r.t. every other variable.
Sum of the values in a tensor, alongside the specified axis.
Repeats the elements of a tensor along an axis.
Resizes the images contained in a 4D tensor.
Stacks a list of rank R tensors into a rank R+1 tensor.
Permutes axes in a tensor.
Instantiates an all-zeros variable and returns it.
Switches between two operations depending on a scalar value.
Resizes the volume contained in a 5D tensor.
Element-wise square root.
Returns the symbolic shape of a tensor or variable.
k_sparse_categorical_crossentropy
Categorical crossentropy with integer targets.
Scaled Exponential Linear Unit.
Instantiates an all-zeros variable of the same shape as another tensor.
Rectified Linear Unit activation function
Variance of a tensor, alongside the specified axis.
Softplus of a tensor.
Softmax of a tensor.
Element-wise sigmoid.
Standard deviation of a tensor, alongside the specified axis.
Element-wise square.
Returns a tensor with truncated random normal distribution of values.
Transposes a tensor and returns it.
Keras Model
Exponential Linear Unit.
Update the value of x by adding increment.
Apply an activation function to an output.
Update the value of x by subtracting decrement.
Pads the middle dimension of a 3D tensor.
Unstack rank R tensor into a list of rank R-1 tensors.
Removes a 1-dimension from the tensor at index axis.
Element-wise tanh.
Update the value of x to new_x.
Creates a tensor by tiling x by n.
Layer that adds a list of inputs.
layer_activity_regularization
Layer that applies an update to the cost function based on input activity.
Instantiates a variable and returns it.
R interface to Keras
Converts a sparse tensor into a dense tensor and returns it.
Main Keras module
Keras Model composed of a linear stack of layers
Keras array object
layer_activation_leaky_relu
Leaky version of a Rectified Linear Unit.
Average pooling for temporal data.
(Deprecated) Create a Keras custom model
Layer that concatenates a list of inputs.
1D convolution layer (e.g. temporal convolution).
Average pooling operation for spatial data.
layer_batch_normalization
Batch normalization layer (Ioffe and Szegedy, 2015).
Additive attention layer, a.k.a. Bahdanau-style attention
Applies Alpha Dropout to the input.
Dot-product attention layer, a.k.a. Luong-style attention
Layer that averages a list of inputs.
3D convolution layer (e.g. spatial convolution over volumes).
Transposed 2D convolution layer (sometimes called Deconvolution).
Softmax activation function.
layer_activation_thresholded_relu
Thresholded Rectified Linear Unit.
Transposed 1D convolution layer (sometimes called Deconvolution).
layer_activation_parametric_relu
Parametric Rectified Linear Unit.
Average pooling operation for 3D data (spatial or spatio-temporal).
A preprocessing layer which encodes integer features.
Crop the central portion of the images to target height and width
Cropping layer for 1D input (e.g. temporal sequence).
Cropping layer for 2D input (e.g. picture).
Transposed 3D convolution layer (sometimes called Deconvolution).
Flattens an input
Turns positive integers (indexes) into dense vectors of fixed size.
3D Convolutional LSTM
Apply multiplicative 1-centered Gaussian noise.
A preprocessing layer which hashes and bins categorical features.
Convolutional LSTM.
Layer that computes a dot product between samples in two tensors.
Input layer
Apply additive zero-centered Gaussian noise.
layer_global_max_pooling_2d
Global max pooling operation for spatial data.
Applies Dropout to the input.
1D Convolutional LSTM
layer_global_max_pooling_3d
Global Max pooling operation for 3D data.
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
layer_locally_connected_2d
Locally-connected layer for 2D inputs.
(Deprecated) Fast GRU implementation backed by CuDNN.
Max pooling operation for temporal data.
Depthwise separable 2D convolution.
A preprocessing layer which buckets continuous features by ranges.
Cell class for the GRU layer
Gated Recurrent Unit - Cho et al.
Depthwise 1D convolution
Cell class for the LSTM layer
Max pooling operation for spatial data.
layer_global_max_pooling_1d
Global max pooling operation for temporal data.
Constructs a DenseFeatures.
A preprocessing layer which randomly adjusts brightness during training
Permute the dimensions of an input according to a given pattern
2D convolution layer (e.g. spatial convolution over images).
Masks a sequence by using a mask value to skip timesteps.
Max pooling operation for 3D data (spatial or spatio-temporal).
layer_global_average_pooling_3d
Global Average pooling operation for 3D data.
Layer that computes the maximum (element-wise) of a list of inputs.
Randomly rotate each image
Randomly translate each image during training
layer_global_average_pooling_2d
Global average pooling operation for spatial data.
Add a densely-connected NN layer to an output
(Deprecated) Fast LSTM implementation backed by CuDNN.
Long Short-Term Memory unit - Hochreiter 1997.
layer_layer_normalization
Layer normalization layer (Ba et al., 2016).
layer_locally_connected_1d
Locally-connected layer for 1D inputs.
Adjust the contrast of an image or images by a random factor
Randomly crop the images to target height and width
Repeats the input n times.
Wraps arbitrary expression as a layer
Randomly flip each image horizontally and vertically
A preprocessing layer which normalizes continuous features.
Layer that multiplies (element-wise) a list of inputs.
Multiply inputs by scale and add offset
A preprocessing layer which maps string features to integer indices.
Wrapper allowing a stack of RNN cells to behave as a single cell
Spatial 1D version of Dropout.
Cell class for SimpleRNN
layer_global_average_pooling_1d
Global average pooling operation for temporal data.
Upsampling layer for 2D inputs.
Randomly vary the height of a batch of images during training
Base class for recurrent layers
Reshapes an output to a certain shape.
Image resizing layer
Depthwise separable 1D convolution.
Upsampling layer for 3D inputs.
Zero-padding layer for 1D input (e.g. temporal sequence).
Randomly vary the width of a batch of images during training
A preprocessing layer which maps integer features to contiguous ranges.
metric-or-Metric
Upsampling layer for 1D inputs.
Unit normalization layer
(Deprecated) loss_cosine_proximity
Layer that subtracts two inputs.
learning_rate_schedule_inverse_time_decay
A LearningRateSchedule that uses an inverse time decay schedule
Spatial 3D version of Dropout.
Generates a word rank-based probabilistic sampling table.
A preprocessing layer which maps text features to integer sequences.
layer_multi_head_attention
MultiHeadAttention layer
Layer that computes the minimum (element-wise) of a list of inputs.
Spatial 2D version of Dropout.
Separable 2D convolution.
Zero-padding layer for 2D input (e.g. picture).
metric_binary_crossentropy
Computes the crossentropy metric between the labels and predictions
learning_rate_schedule_cosine_decay
A LearningRateSchedule that uses a cosine decay schedule
learning_rate_schedule_piecewise_constant_decay
A LearningRateSchedule that uses a piecewise constant decay schedule
Zero-padding layer for 3D data (spatial or spatio-temporal).
A preprocessing layer which randomly zooms images during training.
Calculates how often predictions equal labels
(Deprecated) metric_cosine_proximity
metric_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Computes the categorical hinge metric between y_true and y_pred
Computes the cosine similarity between the labels and predictions
Fully-connected RNN where the output is to be fed back to input.
metric_categorical_accuracy
Calculates how often predictions match one-hot labels
metric_kullback_leibler_divergence
Computes Kullback-Leibler divergence
Computes the logarithm of the hyperbolic cosine of the prediction error
Computes the (weighted) mean of the given values
Computes the hinge metric between y_true and y_pred
metric_mean_absolute_percentage_error
Computes the mean absolute percentage error between y_true and y_pred
metric_mean_absolute_error
Computes the mean absolute error between the labels and predictions
Loss functions
Calculates the number of false negatives
learning_rate_schedule_cosine_decay_restarts
A LearningRateSchedule that uses a cosine decay schedule with restarts
metric_mean_squared_error
Computes the mean squared error between labels and predictions
learning_rate_schedule_polynomial_decay
A LearningRateSchedule that uses a polynomial decay schedule
Calculates how often predictions match binary labels
Approximates the AUC (Area under the curve) of the ROC or PR curves
metric_mean_squared_logarithmic_error
Computes the mean squared logarithmic error
learning_rate_schedule_exponential_decay
A LearningRateSchedule that uses an exponential decay schedule
Computes the mean Intersection-Over-Union metric
metric_mean_relative_error
Computes the mean relative error by normalizing with the given values
Computes the Poisson metric between y_true and y_pred
Computes the recall of the predictions with respect to the labels
metric_precision_at_recall
Computes best precision where recall is >= specified value
Calculates the number of false positives
metric_recall_at_precision
Computes best recall where precision is >= specified value
Computes the precision of the predictions with respect to the labels
metric_root_mean_squared_error
Computes root mean squared error metric between y_true and y_pred
metric_sensitivity_at_specificity
Computes best sensitivity where specificity is >= specified value
metric_sparse_categorical_accuracy
Calculates how often predictions match integer labels
Load a Keras model from the Saved Model format
Computes the squared hinge metric
metric_specificity_at_sensitivity
Computes best specificity where sensitivity is >= specified value
Computes the (weighted) sum of the given values
metric_top_k_categorical_accuracy
Computes how often targets are in the top K predictions
Calculates the number of true positives
Calculates the number of true negatives
Assign values to names
Define new keras types
new_learning_rate_schedule_class
Create a new learning rate schedule type
Model configuration as JSON
(Deprecated) Replicates a model on different GPUs.
Normalize a matrix or nd-array
Optimizer that implements the Adadelta algorithm
Model configuration as YAML
(Deprecated) Export to Saved Model format
Optimizer that implements the Adagrad algorithm
Wraps a stateless metric function with the Mean metric
metric_sparse_top_k_categorical_accuracy
Computes how often integer targets are in the top K predictions
Computes the element-wise (weighted) mean of the given tensors
metric_sparse_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Optimizer that implements the Adam algorithm
plot.keras_training_history
Plot training history
plot.keras.engine.training.Model
Plot a Keras model
Pipe operator
Remove the last layer in a model
Pads sequences to the same length
(Deprecated) Generates predictions for the input samples from a data generator.
Optimizer that implements the FTRL algorithm
predict.keras.engine.training.Model
Generate predictions from a Keras model
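A usage sketch for the prediction/evaluation entries (illustrative; assumes a trained `model` and test data `x_test`/`y_test` shaped to match it):

```r
library(keras)

# Predicted values (or class probabilities, for a classifier)
preds <- model %>% predict(x_test)

# Loss and any metrics specified at compile() time
scores <- model %>% evaluate(x_test, y_test)
```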
Optimizer that implements the Adamax algorithm
Save/Load models using HDF5 files
Gradient descent (with momentum) optimizer
Reset the states for a layer
Save/Load models using SavedModel format
Optimizer that implements the Nadam algorithm
A regularizer that encourages input vectors to be orthogonal to each other
L1 and L2 regularization
Objects exported from other packages
Serialize a model to an R object
sequential_model_input_layer
sequential_model_input_layer
Save model weights in the SavedModel format
Text tokenization utility
Transform each text in texts into a sequence of integers.
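The tokenizer entries fit together roughly as follows (illustrative sketch; assumes the keras package is installed):

```r
library(keras)

texts <- c("the cat sat on the mat", "the dog ate my homework")

# Build a vocabulary from the texts, then encode them
tokenizer <- text_tokenizer(num_words = 100)
tokenizer %>% fit_text_tokenizer(texts)

texts_to_sequences(tokenizer, texts)              # list of integer sequences
texts_to_matrix(tokenizer, texts, mode = "binary") # document-term matrix
```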
timeseries_dataset_from_array
Creates a dataset of sliding windows over a timeseries provided as array
Save/Load model weights using HDF5 files
This layer wrapper allows applying a layer to every temporal slice of an input
Generates skipgram word pairs.
summary.keras.engine.training.Model
Print a summary of a Keras model
Convert a list of texts to a matrix.
Optimizer that implements the RMSprop algorithm
texts_to_sequences_generator
Transforms each text in texts into a sequence of integers.
Returns predictions for a single batch of samples.
text_dataset_from_directory
Generate a tf.data.Dataset from text files in a directory
Converts a class vector (integers) to binary class matrix.
Save a text tokenizer to an external file
Convert a list of sequences into a matrix.
(Deprecated) Generates probability or class probability predictions for the input samples.
Select a Keras implementation and backend
Convert text to a sequence of words (or tokens).
One-hot encode a text into a list of word indexes in a vocabulary of size n.
Converts a text to a sequence of indexes in a fixed-size hashing space.
Single gradient update or model evaluation over one batch of samples.
Utility function for generating batches of temporal data.
zip lists
Provide a scope with mappings of names to custom objects