(Deprecated) Base R6 class for Keras wrappers
(Deprecated) Base R6 class for Keras layers
Metric
application_inception_resnet_v2
Inception-ResNet v2 model, with weights trained on ImageNet
VGG16 and VGG19 models for Keras.
Instantiates the ResNet architecture
Inception V3 model, with weights pre-trained on ImageNet.
fit.keras.engine.training.Model
Train a Keras model
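As a quick orientation for the `fit()` entry above, a minimal training sketch (assuming the keras R package with a working TensorFlow backend is installed; the toy data here is made up for illustration):

```r
library(keras)

# Made-up toy data: 100 samples with 10 features, binary labels
x <- matrix(runif(1000), nrow = 100, ncol = 10)
y <- sample(0:1, 100, replace = TRUE)

# A small sequential model: one hidden layer, sigmoid output
model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = 10) %>%
  layer_dense(units = 1, activation = "sigmoid")

# Configure the model for training, then fit for a few epochs
model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = "accuracy"
)
history <- model %>% fit(x, y, epochs = 5, batch_size = 32, verbose = 0)
```

The returned `history` object records per-epoch loss and metric values and can be passed to `plot()`.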
TensorBoard basic visualizations
Callback used to stream events to a server.
export_savedmodel.keras.engine.training.Model
Export a Saved Model
Callback that streams epoch results to a csv file
Bidirectional wrapper for RNNs
Custom metric function
Stop training when a monitored quantity has stopped improving.
Create a Keras Layer wrapper
Boston housing price regression dataset
Create a custom callback
flow_images_from_dataframe
Takes the dataframe and the path to a directory and generates batches of augmented/normalized data.
(Deprecated) Create a Keras Wrapper
Downloads a file from a URL if it is not already in the cache.
Make an Active Binding
Layer/Model configuration
Representation of HDF5 dataset to be used instead of an R array
Loads an image into PIL format.
3D array representation of images
flow_images_from_directory
Generates batches of data from images in a directory (with optional augmented/normalized data)
Install TensorFlow and Keras, including all Python dependencies
Check if Keras is Available
Bitwise reduction (logical OR).
Creates a 1D tensor containing a sequence of integers.
CIFAR10 small image classification
CIFAR100 small image classification
Initializer that generates the identity matrix.
Initializer that generates tensors initialized to 1.
LeCun normal initializer.
Update tokenizer internal vocabulary based on a list of texts or list of sequences.
Element-wise absolute value.
initializer_lecun_uniform
LeCun uniform initializer.
Bitwise reduction (logical AND).
Generates batches of augmented/normalized data from image data and labels
Exponential linear unit.
Concatenates a list of tensors alongside the specified axis.
Element-wise value clipping.
Returns the dtype of a Keras tensor or variable, as a string.
Instantiates a NASNet model.
callback_model_checkpoint
Save the model after every epoch.
callback_learning_rate_scheduler
Learning rate scheduler.
Instantiates the MobileNetV3Large architecture
Adds a 1-sized dimension at index axis.
Computes cos of x element-wise.
Instantiates an identity matrix and returns it.
Returns the value of a variable.
Layer/Model weights as R arrays
Create a Keras Layer
Reuters newswire topics classification
MNIST database of handwritten digits
Count the total number of scalars composing the weights.
(Deprecated) Fits the model on data yielded batch-by-batch by a generator.
initializer_glorot_normal
Glorot normal initializer, also called Xavier normal initializer.
Make a python class constructor
initializer_glorot_uniform
Glorot uniform initializer, also called Xavier uniform initializer.
Returns the static number of elements in a Keras variable or tensor.
Initializer that generates tensors initialized to 0.
initializer_variance_scaling
Initializer capable of adapting its scale to the shape of weights.
k_ctc_label_dense_to_sparse
Converts CTC labels from dense to sparse.
Normalizes a tensor wrt the L2 norm alongside the specified axis.
Returns the shape of a variable.
Fit image data generator internal statistics to some sample data.
Batchwise dot product.
Active Keras backend
imagenet_preprocess_input
Preprocesses a tensor or array encoding a batch of images.
Retrieves a layer based on either its name (unique) or index.
Retrieve tensors for layers with multiple nodes
imagenet_decode_predictions
Decodes the prediction of an ImageNet model.
initializer_random_uniform
Initializer that generates tensors with a uniform distribution.
Cumulative product of the values in a tensor, alongside the specified axis.
Returns the learning phase flag.
Element-wise exponential.
Evaluates the value of a variable.
TF session to be used by the backend.
Fits the state of the preprocessing layer to the data being passed
MobileNetV2 model architecture
callback_reduce_lr_on_plateau
Reduce learning rate when a metric has stopped improving.
MobileNet model architecture.
Activation functions
Adds a bias vector to a tensor.
Callback that prints metrics to stdout.
Get the uid for the default graph.
initializer_truncated_normal
Initializer that generates a truncated normal distribution.
Instantiates an all-ones variable of the same shape as another tensor.
Apply 2D conv with un-shared weights.
Apply 1D conv with un-shared weights.
Instantiates an all-ones tensor variable and returns it.
Applies batch normalization on x given mean, var, beta and gamma.
Returns the index of the maximum value along an axis.
Returns the index of the minimum value along an axis.
Computes sin of x element-wise.
k_sparse_categorical_crossentropy
Categorical crossentropy with integer targets.
2D Pooling.
3D Pooling.
Softmax of a tensor.
Sets the values of many tensor variables at once.
Creates a constant tensor.
Selects x in test phase, and alt otherwise.
Binary crossentropy between an output tensor and a target tensor.
Weight constraints
IMDB Movie reviews sentiment classification
Fashion-MNIST database of fashion articles
He normal initializer.
compile.keras.engine.training.Model
Configure a Keras model for training
Fuzz factor used in numeric expressions.
1D convolution.
Returns whether the targets are in the top k predictions.
3D convolution.
Element-wise log.
(Deprecated) Computes log(sum(exp(elements across dimensions of a tensor))).
He uniform variance scaling initializer.
Element-wise equality between two tensors.
Exponential Linear Unit.
Switches between two operations depending on a scalar value.
Element-wise tanh.
Pads the 2nd and 3rd dimensions of a 4D tensor.
Creates attention layer
layer_activation_leaky_relu
Leaky version of a Rectified Linear Unit.
Applies Alpha Dropout to the input.
Reduce elems using fn to combine them from left to right.
Returns the gradients of variables w.r.t. loss.
Reduce elems using fn to combine them from right to left.
3D convolution layer (e.g. spatial convolution over volumes).
Transposed 3D convolution layer (sometimes called Deconvolution).
Compute the moving average of a variable.
Repeats a 2D tensor.
Element-wise sigmoid.
Rectified linear unit.
Element-wise minimum of two tensors.
Initializer that generates a random orthogonal matrix.
Cumulative sum of the values in a tensor, alongside the specified axis.
3D deconvolution (i.e. transposed convolution).
Casts a tensor to a different dtype and returns it.
initializer_random_normal
Initializer that generates tensors with a normal distribution.
Element-wise sign.
Selects x in train phase, and alt otherwise.
k_manual_variable_initialization
Sets the manual variable initialization flag.
Element-wise truth value of (x > y).
Returns the shape of tensor or variable as a list of int or NULL entries.
Depthwise 2D convolution with separable filters.
Returns a tensor with the same content as the input tensor.
Softsign of a tensor.
Softplus of a tensor.
Returns whether a tensor is a sparse tensor.
Default image data format convention ('channels_first' or 'channels_last').
Cast an array to the default Keras float type.
Element-wise inequality between two tensors.
Computes the one-hot representation of an integer tensor.
Map the function fn over the elements elems and return the outputs.
2D deconvolution (i.e. transposed convolution).
2D convolution.
Flatten a tensor.
Instantiates the EfficientNetB0 architecture
Instantiates the DenseNet architecture.
Instantiates the Xception architecture
(Deprecated) Fast GRU implementation backed by CuDNN.
Keras backend tensor engine
Instantiates a placeholder tensor and returns it.
Permutes axes in a tensor.
Returns a tensor with truncated random normal distribution of values.
Add a densely-connected NN layer to an output
Unstack rank R tensor into a list of rank R-1 tensors.
callback_terminate_on_naan
Callback that terminates training when a NaN loss is encountered.
Repeats the elements of a tensor along an axis.
Reset graph identifiers.
1D Convolutional LSTM
Sets the value of a variable, from an R array.
A preprocessing layer which encodes integer features.
Instantiates a variable and returns it.
Clone a model instance.
(Deprecated) Evaluates the model on a data generator.
evaluate.keras.engine.training.Model
Evaluate a Keras model
Freeze and unfreeze weights
Retrieve the next item from a generator
Returns the symbolic shape of a tensor or variable.
Element-wise truth value of (x < y).
Instantiates a variable with values drawn from a normal distribution.
Element-wise truth value of (x <= y).
Returns whether x is a symbolic tensor.
Default float type
Element-wise truth value of (x >= y).
Stacks a list of rank R tensors into a rank R+1 tensor.
Keras Model composed of a linear stack of layers
(Deprecated) Fast LSTM implementation backed by CuDNN.
Segment-wise linear approximation of sigmoid.
Instantiates an all-zeros variable and returns it.
layer_global_average_pooling_1d
Global average pooling operation for temporal data.
Scaled Exponential Linear Unit.
Layer that computes a dot product between samples in two tensors.
layer_global_max_pooling_1d
Global max pooling operation for temporal data.
A preprocessing layer which buckets continuous features by ranges.
Returns variables but with zero gradient w.r.t. every other variable.
Sum of the values in a tensor, alongside the specified axis.
Standard deviation of a tensor, alongside the specified axis.
Mean of a tensor, alongside the specified axis.
k_normalize_batch_in_training
Computes mean and std for batch then apply batch_normalization on batch.
Returns the number of axes in a tensor, as an integer.
Minimum value in a tensor.
image_dataset_from_directory
Create a dataset from a directory
Keras implementation
Generate batches of image data with real-time data augmentation. The data will be looped over (in batches).
Masks a sequence by using a mask value to skip timesteps.
layer_global_max_pooling_2d
Global max pooling operation for spatial data.
Layer that multiplies (element-wise) a list of inputs.
Max pooling operation for temporal data.
layer_multi_head_attention
MultiHeadAttention layer
R interface to Keras
Returns a tensor with normal distribution of values.
Resizes the images contained in a 4D tensor.
Reshapes a tensor to the specified shape.
Update the value of x by subtracting decrement.
Instantiates an all-zeros variable of the same shape as another tensor.
Cell class for the GRU layer
Keras Model
Prints message and the tensor value when evaluated.
Element-wise exponentiation.
Variance of a tensor, alongside the specified axis.
Apply an activation function to an output.
layer_batch_normalization
Batch normalization layer (Ioffe and Szegedy, 2014).
Image resizing layer
Base class for recurrent layers
layer_activation_thresholded_relu
Thresholded Rectified Linear Unit.
Reverse a tensor along the specified axes.
Resizes the volume contained in a 5D tensor.
Initializer that generates tensors initialized to a constant value.
Sets the learning phase to a fixed value.
Removes a 1-dimension from the tensor at index axis.
2D convolution with separable filters.
Element-wise square.
Turn a nD tensor into a 2D tensor with same 1st dimension.
Returns the value of more than one tensor variable.
Spatial 3D version of Dropout.
Cell class for the LSTM layer
Softmax activation function.
layer_activity_regularization
Layer that applies an update to the cost function based on input activity.
Layer that adds a list of inputs.
Additive attention layer, a.k.a. Bahdanau-style attention
(Deprecated) Create a Keras custom model
Convolutional LSTM.
Randomly flip each image horizontally and vertically
Randomly vary the height of a batch of images during training
Destroys the current TF graph and creates a new one.
k_categorical_crossentropy
Categorical crossentropy between an output tensor and a target tensor.
Runs CTC loss algorithm on each batch element.
Pads the middle dimension of a 3D tensor.
Randomly crop the images to target height and width
layer_locally_connected_1d
Locally-connected layer for 1D inputs.
layer_locally_connected_2d
Locally-connected layer for 2D inputs.
Applies Dropout to the input.
Cell class for SimpleRNN
Fully-connected RNN where the output is to be fed back to input.
learning_rate_schedule_cosine_decay
A LearningRateSchedule that uses a cosine decay schedule
Main Keras module
Keras array object
Creates a tensor by tiling x by n.
layer_activation_parametric_relu
Parametric Rectified Linear Unit.
Average pooling operation for spatial data.
Rectified Linear Unit activation function
learning_rate_schedule_cosine_decay_restarts
A LearningRateSchedule that uses a cosine decay schedule with restarts
learning_rate_schedule_inverse_time_decay
A LearningRateSchedule that uses an inverse time decay schedule
learning_rate_schedule_exponential_decay
A LearningRateSchedule that uses an exponential decay schedule
Wrapper allowing a stack of RNN cells to behave as a single cell
Layer that concatenates a list of inputs.
Constructs a DenseFeatures.
Apply additive zero-centered Gaussian noise.
Crop the central portion of the images to target height and width
Average pooling operation for 3D data (spatial or spatio-temporal).
Turns positive integers (indexes) into dense vectors of fixed size.
Gated Recurrent Unit - Cho et al.
layer_global_max_pooling_3d
Global Max pooling operation for 3D data.
Decodes the output of a softmax.
A preprocessing layer which randomly adjusts brightness during training
Computes the hinge metric between y_true and y_pred
Calculates the number of false positives
A preprocessing layer which hashes and bins categorical features.
Wraps arbitrary expression as a layer
A preprocessing layer which maps string features to integer indices.
Unit normalization layer
layer_layer_normalization
Layer normalization layer (Ba et al., 2016).
Layer that subtracts two inputs.
Adjust the contrast of an image or images by a random factor
Multiplies inputs by scale and adds offset
A preprocessing layer which randomly zooms images during training.
Multiplies 2 tensors (and/or variables) and returns a tensor.
Long Short-Term Memory unit - Hochreiter 1997.
Computes the logarithm of the hyperbolic cosine of the prediction error
metric_kullback_leibler_divergence
Computes Kullback-Leibler divergence
Retrieves the elements of indices indices in the tensor reference.
Element-wise maximum of two tensors.
Returns whether x is a placeholder.
Sets entries in x to zero at random, while scaling the entire tensor.
Instantiates a Keras function
Maximum value in a tensor.
Returns whether x is a Keras tensor.
Generates a word rank-based probabilistic sampling table.
2D convolution layer (e.g. spatial convolution over images).
Multiplies the values in a tensor, alongside the specified axis.
Reshapes an output to a certain shape.
A preprocessing layer which maps text features to integer sequences.
metric_sparse_categorical_accuracy
Calculates how often predictions match integer labels
metric_sparse_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Calculates how often predictions match binary labels
Computes the recall of the predictions with respect to the labels
metric-or-Metric
metric_binary_crossentropy
Computes the crossentropy metric between the labels and predictions
Repeats the input n times.
Loss functions
metric_mean_absolute_percentage_error
Computes the mean absolute percentage error between y_true and y_pred
Computes the categorical hinge metric between y_true and y_pred
Randomly rotate each image
Separable 2D convolution.
Computes the (weighted) mean of the given values
Returns a tensor with random binomial distribution of values.
metric_mean_absolute_error
Computes the mean absolute error between the labels and predictions
Calculates the number of true negatives
metric_recall_at_precision
Computes best recall where precision is >= specified value
Computes the squared hinge metric
Transposed 2D convolution layer (sometimes called Deconvolution).
Stochastic gradient descent optimizer
metric_top_k_categorical_accuracy
Computes how often targets are in the top K
predictions
Pads sequences to the same length
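The `pad_sequences()` entry above is one of the few utilities usable on plain R lists; a minimal sketch (assuming the keras R package is installed; the ragged sequences are made up for illustration):

```r
library(keras)

# Three made-up integer sequences of unequal length
seqs <- list(c(1, 2, 3), c(4, 5), c(6))

# Pad to a common length of 3; by default zeros are prepended ("pre" padding)
padded <- pad_sequences(seqs, maxlen = 3)
```

`padded` is a 3 x 3 integer matrix whose shorter rows are left-padded with the fill value 0; `padding = "post"` appends instead.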
Adadelta optimizer.
Computes the (weighted) sum of the given values
Adagrad optimizer.
Depthwise separable 1D convolution.
Approximates the AUC (Area under the curve) of the ROC or PR curves
metric_categorical_accuracy
Calculates how often predictions match one-hot labels
Model configuration as JSON
A regularizer that encourages input vectors to be orthogonal to each other
Reset the states for a layer
Upsampling layer for 1D inputs.
(Deprecated) loss_cosine_proximity
Save/Load model weights using HDF5 files
Save a text tokenizer to an external file
Save model weights in the SavedModel format
Convert a list of sequences into a matrix.
Flattens an input
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
learning_rate_schedule_piecewise_constant_decay
A LearningRateSchedule that uses a piecewise constant decay schedule
Apply multiplicative 1-centered Gaussian noise.
Cropping layer for 2D input (e.g. picture).
layer_global_average_pooling_3d
Global Average pooling operation for 3D data.
Upsampling layer for 2D inputs.
layer_global_average_pooling_2d
Global average pooling operation for spatial data.
Returns a tensor with uniform distribution of values.
k_random_uniform_variable
Instantiates a variable with values drawn from a uniform distribution.
Computes the element-wise (weighted) mean of the given tensors
Calculates the number of true positives
(Deprecated) metric_cosine_proximity
Computes the mean Intersection-Over-Union metric
new_learning_rate_schedule_class
Create a new learning rate schedule type
Normalize a matrix or nd-array
Load a Keras model from the Saved Model format
Max pooling operation for spatial data.
metric_sparse_top_k_categorical_accuracy
Computes how often integer targets are in the top K
predictions
Max pooling operation for 3D data (spatial or spatio-temporal).
Calculates how often predictions equal labels
Iterates over the time dimension of a tensor
metric_mean_relative_error
Computes the mean relative error by normalizing with the given values
metric_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
metric_mean_squared_error
Computes the mean squared error between labels and predictions
predict.keras.engine.training.Model
Generate predictions from a Keras model
text_dataset_from_directory
Generate a tf.data.Dataset from text files in a directory
Converts a text to a sequence of indexes in a fixed-size hashing space.
(Deprecated) Generates predictions for the input samples from a data generator.
metric_root_mean_squared_error
Computes root mean squared error metric between y_true and y_pred
Save/Load models using SavedModel format
A preprocessing layer which normalizes continuous features.
Element-wise rounding to the closest integer.
This layer wrapper applies a layer to every temporal slice of an input
One-hot encode a text into a list of word indexes in a vocabulary of size n.
Utility function for generating batches of temporal data.
timeseries_dataset_from_array
Creates a dataset of sliding windows over a timeseries provided as array
Convert text to a sequence of words (or tokens).
Pads 5D tensor with zeros along the depth, height, width dimensions.
Transposed 1D convolution layer (sometimes called Deconvolution).
Element-wise square root.
Converts a sparse tensor into a dense tensor and returns it.
Converts a class vector (integers) to binary class matrix.
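For the class-vector conversion entry above, a short sketch (assuming the keras R package is installed; the label vector is made up for illustration):

```r
library(keras)

# Made-up integer class labels for 4 samples across 3 classes (0-based)
labels <- c(0, 1, 2, 1)

# One-hot encode: a 4 x 3 binary matrix, row i has a 1 in column labels[i] + 1
onehot <- to_categorical(labels, num_classes = 3)
```

This is the usual preprocessing step for targets fed to a `categorical_crossentropy` loss.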
metric_sensitivity_at_specificity
Computes best sensitivity where specificity is >= specified value
(Deprecated) Export to Saved Model format
Model configuration as YAML
Assign values to names
Zero-padding layer for 3D data (spatial or spatio-temporal).
Transposes a tensor and returns it.
Layer that averages a list of inputs.
Update the value of x to new_x.
Update the value of x by adding increment.
Average pooling for temporal data.
(Deprecated) Replicates a model on different GPUs.
Cropping layer for 1D input (e.g. temporal sequence).
Nesterov Adam optimizer
RMSProp optimizer
learning_rate_schedule_polynomial_decay
A LearningRateSchedule that uses a polynomial decay schedule
Permute the dimensions of an input according to a given pattern
Zero-padding layer for 2D input (e.g. picture).
Save/Load models using HDF5 files
Single gradient update or model evaluation over one batch of samples.
Serialize a model to an R object
sequential_model_input_layer
sequential_model_input_layer
Select a Keras implementation and backend
Computes the Poisson metric between y_true and y_pred
metric_specificity_at_sensitivity
Computes best specificity where sensitivity is >= specified value
Computes the cosine similarity between the labels and predictions
Calculates the number of false negatives
Computes the precision of the predictions with respect to the labels
A preprocessing layer which maps integer features to contiguous ranges.
1D convolution layer (e.g. temporal convolution).
3D Convolutional LSTM
Input layer
Depthwise separable 2D convolution.
Spatial 2D version of Dropout.
Text tokenization utility
Wraps a stateless metric function with the Mean metric
metric_mean_squared_logarithmic_error
Computes the mean squared logarithmic error
Adam optimizer
plot.keras_training_history
Plot training history
Adamax optimizer
Convert a list of texts to a matrix.
Returns predictions for a single batch of samples.
Remove the last layer in a model
(Deprecated) Generates probability or class probability predictions for the input samples.
Depthwise 1D convolution
Layer that computes the maximum (element-wise) a list of inputs.
Layer that computes the minimum (element-wise) a list of inputs.
Randomly translate each image during training
Provide a scope with mappings of names to custom objects
Spatial 1D version of Dropout.
Upsampling layer for 3D inputs.
Zero-padding layer for 1D input (e.g. temporal sequence).
Transform each text in texts into a sequence of integers.
Randomly vary the width of a batch of images during training
summary.keras.engine.training.Model
Print a summary of a Keras model
Pipe operator
Define new keras types
metric_precision_at_recall
Computes best precision where recall is >= specified value
Objects exported from other packages
plot.keras.engine.training.Model
Plot a Keras model
Generates skipgram word pairs.
L1 and L2 regularization
texts_to_sequences_generator
Transforms each text in texts into a sequence of integers.
zip lists
(Deprecated) Base R6 class for Keras constraints
(Deprecated) Base R6 class for Keras callbacks
(Deprecated) Create a custom Layer