Create a custom Layer
Metric
application_inception_resnet_v2
Inception-ResNet v2 model, with weights trained on ImageNet
Fits the state of the preprocessing layer to the data being passed
(Deprecated) Base R6 class for Keras layers
(Deprecated) Base R6 class for Keras wrappers
Activation functions
Instantiates the DenseNet architecture.
(Deprecated) Base R6 class for Keras callbacks
(Deprecated) Base R6 class for Keras constraints
Bidirectional wrapper for RNNs.
VGG16 and VGG19 models for Keras.
ResNet50 model for Keras.
Xception V1 model for Keras.
Keras backend tensor engine
MobileNet model architecture.
Inception V3 model, with weights pre-trained on ImageNet.
Callback that prints metrics to stdout.
callback_reduce_lr_on_plateau
Reduce learning rate when a metric has stopped improving.
TensorBoard basic visualizations
Callback used to stream events to a server.
callback_learning_rate_scheduler
Learning rate scheduler.
callback_model_checkpoint
Save the model after every epoch.
Callback that streams epoch results to a csv file
MobileNetV2 model architecture
Instantiates a NASNet model.
Create a custom callback
Stop training when a monitored quantity has stopped improving.
Custom metric function
Create a Keras Layer wrapper
CIFAR100 small image classification
CIFAR10 small image classification
Boston housing price regression dataset
Reuters newswire topics classification
MNIST database of handwritten digits
Fit image data generator internal statistics to some sample data.
Fits the model on data yielded batch-by-batch by a generator.
Freeze and unfreeze weights
callback_terminate_on_naan
Callback that terminates training when a NaN loss is encountered.
Update tokenizer internal vocabulary based on a list of texts or list of sequences.
Generates batches of augmented/normalized data from image data and labels
export_savedmodel.keras.engine.training.Model
Export a Saved Model
Retrieve tensors for layers with multiple nodes
(Deprecated) Create a Keras Wrapper
fit.keras.engine.training.Model
Train a Keras model
Retrieves a layer based on either its name (unique) or index.
Clone a model instance.
image_dataset_from_directory
Create a dataset from a directory
Retrieve the next item from a generator
compile.keras.engine.training.Model
Configure a Keras model for training
Layer/Model configuration
Initializer that generates tensors initialized to 1.
Bitwise reduction (logical AND).
Bitwise reduction (logical OR).
Initializer that generates a random orthogonal matrix.
Representation of HDF5 dataset to be used instead of an R array
Downloads a file from a URL if it is not already in the cache.
Generate batches of image data with real-time data augmentation. The data will be looped over (in batches).
Loads an image into PIL format.
Count the total number of scalars composing the weights.
IMDB Movie reviews sentiment classification
Fashion-MNIST database of fashion articles
Create a Keras Layer
Initializer that generates the identity matrix.
He uniform variance scaling initializer.
Weight constraints
evaluate.keras.engine.training.Model
Evaluate a Keras model
Adds a bias vector to a tensor.
Sets the values of many tensor variables at once.
Check if Keras is Available
Element-wise absolute value.
Evaluates the model on a data generator.
imagenet_preprocess_input
Preprocesses a tensor or array encoding a batch of images.
Keras implementation
Make a python class constructor
LeCun normal initializer.
Layer/Model weights as R arrays
He normal initializer.
initializer_glorot_uniform
Glorot uniform initializer, also called Xavier uniform initializer.
initializer_random_uniform
Initializer that generates tensors with a uniform distribution.
initializer_random_normal
Initializer that generates tensors with a normal distribution.
1D convolution.
flow_images_from_dataframe
Takes the dataframe and the path to a directory and generates batches of augmented/normalized data.
Binary crossentropy between an output tensor and a target tensor.
Cumulative product of the values in a tensor, alongside the specified axis.
Casts a tensor to a different dtype and returns it.
Initializer that generates tensors initialized to a constant value.
Returns the index of the minimum value along an axis.
initializer_glorot_normal
Glorot normal initializer, also called Xavier normal initializer.
Active Keras backend
Element-wise value clipping.
Computes cos of x element-wise.
Destroys the current TF graph and creates a new one.
2D convolution.
3D deconvolution (i.e. transposed convolution).
initializer_lecun_uniform
LeCun uniform initializer.
Initializer that generates tensors initialized to 0.
3D convolution.
2D deconvolution (i.e. transposed convolution).
Sets entries in x to zero at random, while scaling the entire tensor.
Cumulative sum of the values in a tensor, alongside the specified axis.
Fuzz factor used in numeric expressions.
Exponential linear unit.
Install TensorFlow and Keras, including all Python dependencies
Decodes the output of a softmax.
Applies batch normalization on x given mean, var, beta and gamma.
k_ctc_label_dense_to_sparse
Converts CTC labels from dense to sparse.
Returns the value of more than one tensor variable.
Multiplies 2 tensors (and/or variables) and returns a tensor.
Depthwise 2D convolution with separable filters.
Element-wise exponential.
Returns the dtype of a Keras tensor or variable, as a string.
Batchwise dot product.
Returns whether x is a Keras tensor.
Returns the shape of tensor or variable as a list of int or NULL entries.
Computes log(sum(exp(elements across dimensions of a tensor))).
3D array representation of images
imagenet_decode_predictions
Decodes the prediction of an ImageNet model.
flow_images_from_directory
Generates batches of data from images in a directory (with optional augmented/normalized data)
Instantiates an identity matrix and returns it.
Adds a 1-sized dimension at index axis.
Flatten a tensor.
k_manual_variable_initialization
Sets the manual variable initialization flag.
Reduce elems using fn to combine them from right to left.
Returns a tensor with the same content as the input tensor.
TF session to be used by the backend.
Segment-wise linear approximation of sigmoid.
Retrieves the elements of indices indices in the tensor reference.
initializer_truncated_normal
Initializer that generates a truncated normal distribution.
initializer_variance_scaling
Initializer capable of adapting its scale to the shape of weights.
Turn a nD tensor into a 2D tensor with same 1st dimension.
Instantiates a Keras function
Concatenates a list of tensors alongside the specified axis.
Get the uid for the default graph.
Returns the value of a variable.
Creates a constant tensor.
Default image data format convention ('channels_first' or 'channels_last').
Returns whether the targets are in the top k predictions.
Returns a tensor with random binomial distribution of values.
Returns the gradients of variables w.r.t. loss.
Returns the shape of a variable.
Returns the index of the maximum value along an axis.
Creates a 1D tensor containing a sequence of integers.
Element-wise equality between two tensors.
Returns the static number of elements in a Keras variable or tensor.
Cast an array to the default Keras float type.
Runs CTC loss algorithm on each batch element.
k_categorical_crossentropy
Categorical crossentropy between an output tensor and a target tensor.
Returns the symbolic shape of a tensor or variable.
Returns whether x is a placeholder.
Returns a tensor with normal distribution of values.
Selects x in test phase, and alt otherwise.
Returns whether x is a symbolic tensor.
Element-wise truth value of (x < y).
Returns the learning phase flag.
Update the value of x by subtracting decrement.
Standard deviation of a tensor, alongside the specified axis.
Returns variables but with zero gradient w.r.t. every other variable.
Element-wise sigmoid.
Returns whether a tensor is a sparse tensor.
Normalizes a tensor w.r.t. the L2 norm alongside the specified axis.
Variance of a tensor, alongside the specified axis.
Map the function fn over the elements elems and return the outputs.
Element-wise truth value of (x <= y).
Selects x in train phase, and alt otherwise.
Instantiates a placeholder tensor and returns it.
2D Pooling.
Default float type
Evaluates the value of a variable.
Keras Model composed of a linear stack of layers
Apply an activation function to an output.
Scaled Exponential Linear Unit.
Reduce elems using fn to combine them from left to right.
Maximum value in a tensor.
Apply 1D conv with un-shared weights.
Mean of a tensor, alongside the specified axis.
Element-wise maximum of two tensors.
Softmax activation function.
Crop the central portion of the images to target height and width
Transposed 3D convolution layer (sometimes called Deconvolution).
A preprocessing layer which encodes integer features.
k_random_uniform_variable
Instantiates a variable with values drawn from a uniform distribution.
Convolutional LSTM.
Layer that computes a dot product between samples in two tensors.
A preprocessing layer which buckets continuous features by ranges.
Element-wise truth value of (x > y).
Element-wise truth value of (x >= y).
Instantiates an all-ones variable of the same shape as another tensor.
Permutes axes in a tensor.
k_normalize_batch_in_training
Computes mean and std for a batch, then applies batch_normalization to the batch.
Element-wise inequality between two tensors.
Apply 2D conv with un-shared weights.
Instantiates a variable with values drawn from a normal distribution.
Returns a tensor with uniform distribution of values.
Layer that computes the maximum (element-wise) of a list of inputs.
A preprocessing layer which maps string features to integer indices.
Apply additive zero-centered Gaussian noise.
Repeats the input n times.
Multiplies inputs by scale and adds offset
layer_global_average_pooling_1d
Global average pooling operation for temporal data.
Layer that computes the minimum (element-wise) of a list of inputs.
Minimum value in a tensor.
Rectified linear unit.
Resizes the volume contained in a 5D tensor.
Resizes the images contained in a 4D tensor.
Element-wise rounding to the closest integer.
Prints message and the tensor value when evaluated.
Multiplies the values in a tensor, alongside the specified axis.
Repeats a 2D tensor.
Element-wise minimum of two tensors.
Softsign of a tensor.
Stacks a list of rank R tensors into a rank R+1 tensor.
k_sparse_categorical_crossentropy
Categorical crossentropy with integer targets.
Removes a 1-dimension from the tensor at index axis.
Layer that subtracts two inputs.
Zero-padding layer for 1D input (e.g. temporal sequence).
Computes the one-hot representation of an integer tensor.
Element-wise log.
Instantiates an all-ones tensor variable and returns it.
Zero-padding layer for 2D input (e.g. picture).
3D Pooling.
Element-wise square root.
Repeats the elements of a tensor along an axis.
Element-wise square.
Creates a tensor by tiling x by n.
Computes the hinge metric between y_true and y_pred
Pads the middle dimension of a 3D tensor.
Element-wise tanh.
metric_kullback_leibler_divergence
Computes Kullback-Leibler divergence
Element-wise exponentiation.
Reset graph identifiers.
metric_mean_absolute_error
Computes the mean absolute error between the labels and predictions
metric_sensitivity_at_specificity
Computes best sensitivity where specificity is >= specified value
metric_mean_absolute_percentage_error
Computes the mean absolute percentage error between y_true and y_pred
Converts a sparse tensor into a dense tensor and returns it.
Creates attention layer
Compute the moving average of a variable.
2D convolution with separable filters.
Returns the number of axes in a tensor, as an integer.
metric_sparse_categorical_accuracy
Calculates how often predictions match integer labels
Softmax of a tensor.
Softplus of a tensor.
R interface to Keras
Instantiates an all-zeros variable of the same shape as another tensor.
Layer that averages a list of inputs.
Normalize a matrix or nd-array
Layer that adds a list of inputs.
Pads sequences to the same length
Iterates over the time dimension of a tensor
Reverse a tensor along the specified axes.
Adadelta optimizer.
Applies Alpha Dropout to the input.
Keras array object
Main Keras module
Exponential Linear Unit.
Layer that concatenates a list of inputs.
Sets the value of a variable, from an R array.
Sets the learning phase to a fixed value.
Pads the 2nd and 3rd dimensions of a 4D tensor.
Save/Load models using SavedModel format
Pipe operator
layer_activation_leaky_relu
Leaky version of a Rectified Linear Unit.
layer_batch_normalization
Batch normalization layer (Ioffe and Szegedy, 2014).
Average pooling operation for 3D data (spatial or spatio-temporal).
1D convolution layer (e.g. temporal convolution).
Transposes a tensor and returns it.
Pads 5D tensor with zeros along the depth, height, width dimensions.
Save/Load model weights using HDF5 files
texts_to_sequences_generator
Transforms each text in texts to a sequence of integers.
Transforms each text in texts to a sequence of integers.
Create a Keras custom model
Keras Model
Returns a tensor with truncated random normal distribution of values.
Computes sin of x element-wise.
Element-wise sign.
Reshapes a tensor to the specified shape.
layer_global_max_pooling_3d
Global Max pooling operation for 3D data.
3D convolution layer (e.g. spatial convolution over volumes).
Fast LSTM implementation backed by CuDNN.
Transposed 2D convolution layer (sometimes called Deconvolution).
Gated Recurrent Unit - Cho et al.
Max pooling operation for temporal data.
Masks a sequence by using a mask value to skip timesteps.
A preprocessing layer which randomly zooms images during training.
Upsampling layer for 2D inputs.
Randomly vary the width of a batch of images during training
layer_activation_parametric_relu
Parametric Rectified Linear Unit.
Average pooling for temporal data.
Rectified Linear Unit activation function
Average pooling operation for spatial data.
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
Add a densely-connected NN layer to an output
Sum of the values in a tensor, alongside the specified axis.
Fast GRU implementation backed by CuDNN.
Flattens an input
Update the value of x to new_x.
Update the value of x by adding increment.
Switches between two operations depending on a scalar value.
layer_global_average_pooling_3d
Global Average pooling operation for 3D data.
A preprocessing layer which maps integer features to contiguous ranges.
layer_global_average_pooling_2d
Global average pooling operation for spatial data.
layer_global_max_pooling_1d
Global max pooling operation for temporal data.
Apply multiplicative 1-centered Gaussian noise.
Max pooling operation for spatial data.
layer_global_max_pooling_2d
Global max pooling operation for spatial data.
Transposed 1D convolution layer (sometimes called Deconvolution).
Adjust the contrast of an image or images by a random factor
Max pooling operation for 3D data (spatial or spatio-temporal).
Randomly crop the images to target height and width
Constructs a DenseFeatures.
2D convolution layer (e.g. spatial convolution over images).
A preprocessing layer which maps text features to integer sequences.
Generates a word rank-based probabilistic sampling table.
(Deprecated) loss_cosine_proximity
Upsampling layer for 1D inputs.
Zero-padding layer for 3D data (spatial or spatio-temporal).
Loss functions
Upsampling layer for 3D inputs.
Depthwise separable 2D convolution.
Implements categorical feature hashing, also known as "hashing trick"
Calculates the number of false negatives
Calculates the number of false positives
Wraps a stateless metric function with the Mean metric
Computes the element-wise (weighted) mean of the given tensors
metric_precision_at_recall
Computes best precision where recall is >= specified value
Input layer
layer_layer_normalization
Layer normalization layer (Ba et al., 2016).
Randomly vary the height of a batch of images during training
Randomly flip each image horizontally and vertically
layer_locally_connected_1d
Locally-connected layer for 1D inputs.
Instantiates a variable and returns it.
Wraps arbitrary expression as a layer
Instantiates an all-zeros variable and returns it.
layer_activation_thresholded_relu
Thresholded Rectified Linear Unit.
Computes the mean Intersection-Over-Union metric
Separable 2D convolution.
Spatial 1D version of Dropout.
metric_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Computes the categorical hinge metric between y_true and y_pred
metric_specificity_at_sensitivity
Computes best specificity where sensitivity is >= specified value
Computes the recall of the predictions with respect to the labels
Computes the squared hinge metric
layer_activity_regularization
Layer that applies an update to the cost function based on input activity.
A preprocessing layer which normalizes continuous features.
Permute the dimensions of an input according to a given pattern
Randomly translate each image during training
Randomly rotate each image
Cropping layer for 1D input (e.g. temporal sequence).
Cropping layer for 2D input (e.g. picture).
Turns positive integers (indexes) into dense vectors of fixed size.
Applies Dropout to the input.
metric_mean_relative_error
Computes the mean relative error by normalizing with the given values
layer_locally_connected_2d
Locally-connected layer for 2D inputs.
Returns predictions for a single batch of samples.
layer_multi_head_attention
MultiHeadAttention layer
Layer that multiplies (element-wise) a list of inputs.
Long Short-Term Memory unit - Hochreiter 1997.
Image resizing layer
Spatial 2D version of Dropout.
Reshapes an output to a certain shape.
Spatial 3D version of Dropout.
Approximates the AUC (Area under the curve) of the ROC or PR curves
metric-or-Metric
Serialize a model to an R object
(Deprecated) Generates probability or class probability predictions for the input samples.
Generates skipgram word pairs.
Calculates how often predictions match binary labels
Calculates the number of true negatives
metric_mean_squared_error
Computes the mean squared error between labels and predictions
metric_mean_squared_logarithmic_error
Computes the mean squared logarithmic error
Computes the cosine similarity between the labels and predictions
(Deprecated) metric_cosine_proximity
Computes the logarithm of the hyperbolic cosine of the prediction error
Computes the (weighted) mean of the given values
Calculates the number of true positives
Convert a list of texts to a matrix.
Text tokenization utility
metric_sparse_categorical_crossentropy
Computes the crossentropy metric between the labels and predictions
Calculates how often predictions equal labels
metric_categorical_accuracy
Calculates how often predictions match one-hot labels
metric_binary_crossentropy
Computes the crossentropy metric between the labels and predictions
metric_recall_at_precision
Computes best recall where precision is >= specified value
Adagrad optimizer.
Objects exported from other packages
Adam optimizer
metric_sparse_top_k_categorical_accuracy
Computes how often integer targets are in the top K predictions
Export to Saved Model format
Model configuration as YAML
metric_root_mean_squared_error
Computes root mean squared error metric between y_true and y_pred
L1 and L2 regularization
Fully-connected RNN where the output is to be fed back to input.
Reset the states for a layer
Apply a layer to every temporal slice of an input.
Save/Load models using HDF5 files
Depthwise separable 1D convolution.
Select a Keras implementation and backend
Utility function for generating batches of temporal data.
Provide a scope with mappings of names to custom objects
Computes the Poisson metric between y_true and y_pred
RMSProp optimizer
Computes the precision of the predictions with respect to the labels
Save model weights in the SavedModel format
Stochastic gradient descent optimizer
One-hot encode a text into a list of word indexes in a vocabulary of size n.
Convert text to a sequence of words (or tokens).
Save a text tokenizer to an external file
Replicates a model on different GPUs.
Computes the (weighted) sum of the given values
metric_top_k_categorical_accuracy
Computes how often targets are in the top K predictions
Assign values to names
Adamax optimizer
Model configuration as JSON
Load a Keras model from the Saved Model format
Nesterov Adam optimizer
plot.keras_training_history
Plot training history
Remove the last layer in a model
Generates predictions for the input samples from a data generator.
sequential_model_input_layer
sequential_model_input_layer
Converts a text to a sequence of indexes in a fixed-size hashing space.
Single gradient update or model evaluation over one batch of samples.
Converts a class vector (integers) to binary class matrix.
summary.keras.engine.training.Model
Print a summary of a Keras model
predict.keras.engine.training.Model
Generate predictions from a Keras model
Convert a list of sequences into a matrix.
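Many of the entries in this index come together in a single typical workflow: load a built-in dataset, define a sequential model, compile it, fit it with callbacks, and inspect the training history. The following is a minimal sketch (not a definitive recipe); it assumes the keras R package and a working TensorFlow backend are installed (see install_keras()), and the specific layer sizes and hyperparameters are illustrative only.

```r
library(keras)

# dataset_mnist(): MNIST database of handwritten digits
mnist <- dataset_mnist()
x_train <- array_reshape(mnist$train$x / 255, c(60000, 784))
y_train <- to_categorical(mnist$train$y, 10)  # class vector -> binary class matrix

# keras_model_sequential(): Keras Model composed of a linear stack of layers
model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = 784) %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 10, activation = "softmax")

# compile(): configure the model for training
model %>% compile(
  optimizer = optimizer_rmsprop(),
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)

# fit(): train the model; callback_early_stopping() stops training
# when a monitored quantity has stopped improving
history <- model %>% fit(
  x_train, y_train,
  epochs = 5, validation_split = 0.2,
  callbacks = list(callback_early_stopping(monitor = "val_loss", patience = 2))
)

plot(history)  # plot.keras_training_history(): plot training history
```

The same model object then feeds the remaining verbs in this index: evaluate() for held-out metrics, predict() for new samples, and save_model_hdf5() or the SavedModel helpers for persistence.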