MobileNet model architecture.
application_inception_resnet_v2
Inception-ResNet v2 model, with weights trained on ImageNet
Instantiates a NASNet model.
Activation functions
ResNet50 model for Keras.
Base R6 class for Keras layers
Inception V3 model, with weights pre-trained on ImageNet.
Instantiates the DenseNet architecture.
Base R6 class for Keras callbacks
Base R6 class for Keras constraints
Callback that prints metrics to stdout.
callback_model_checkpoint
Save the model after every epoch.
callback_terminate_on_naan
Callback that terminates training when a NaN loss is encountered.
Boston housing price regression dataset
Bidirectional wrapper for RNNs.
Create a Keras Layer
VGG16 and VGG19 models for Keras.
Create a custom callback
Callback that streams epoch results to a csv file
callback_reduce_lr_on_plateau
Reduce learning rate when a metric has stopped improving.
Fashion-MNIST database of fashion articles
Keras backend tensor engine
Callback used to stream events to a server.
callback_learning_rate_scheduler
Learning rate scheduler.
Weight constraints
Xception V1 model for Keras.
Update tokenizer internal vocabulary based on a list of texts or list of sequences.
Count the total number of scalars composing the weights.
Clone a model instance.
Generates batches of augmented/normalized data from image data and labels
TensorBoard basic visualizations
Retrieves a layer based on either its name (unique) or index.
Configure a Keras model for training
MNIST database of handwritten digits
flow_images_from_directory
Generates batches of data from images in a directory (with optional augmented/normalized data)
Layer/Model weights as R arrays
IMDB Movie reviews sentiment classification
export_savedmodel.keras.engine.training.Model
Export a Saved Model
Freeze and unfreeze weights
Keras implementation
Reuters newswire topics classification
evaluate.keras.engine.training.Model
Evaluate a Keras model
Retrieve the next item from a generator
Evaluates the model on a data generator.
Layer/Model configuration
Initializer that generates tensors initialized to a constant value.
CIFAR10 small image classification
Stop training when a monitored quantity has stopped improving.
Downloads a file from a URL if it is not already in the cache.
Representation of HDF5 dataset to be used instead of an R array
CIFAR100 small image classification
Initializer that generates the identity matrix.
Retrieve tensors for layers with multiple nodes
Generate minibatches of image data with real-time data augmentation.
Train a Keras model
Fits the model on data yielded batch-by-batch by a generator.
He normal initializer.
Fit image data generator internal statistics to some sample data.
imagenet_decode_predictions
Decodes the prediction of an ImageNet model.
Initializer that generates a random orthogonal matrix.
initializer_random_normal
Initializer that generates tensors with a normal distribution.
initializer_random_uniform
Initializer that generates tensors with a uniform distribution.
initializer_glorot_normal
Glorot normal initializer, also called Xavier normal initializer.
imagenet_preprocess_input
Preprocesses a tensor or array encoding a batch of images.
He uniform variance scaling initializer.
initializer_truncated_normal
Initializer that generates a truncated normal distribution.
initializer_lecun_uniform
LeCun uniform initializer.
Loads an image into PIL format.
Turn an nD tensor into a 2D tensor with the same 1st dimension.
Initializer that generates tensors initialized to 1.
Element-wise absolute value.
Bitwise reduction (logical AND).
Returns the value of more than one tensor variable.
3D array representation of images
LeCun normal initializer.
Adds a bias vector to a tensor.
Binary crossentropy between an output tensor and a target tensor.
Install Keras and the TensorFlow backend
initializer_variance_scaling
Initializer capable of adapting its scale to the shape of weights.
Sets the values of many tensor variables at once.
Casts a tensor to a different dtype and returns it.
Element-wise value clipping.
Cast an array to the default Keras float type.
Initializer that generates tensors initialized to 0.
Check if Keras is Available
Concatenates a list of tensors alongside the specified axis.
Bitwise reduction (logical OR).
Returns the index of the maximum value along an axis.
Runs CTC loss algorithm on each batch element.
Applies batch normalization on x given mean, var, beta and gamma.
Creates a 1D tensor containing a sequence of integers.
Decodes the output of a softmax.
Returns the index of the minimum value along an axis.
Creates a constant tensor.
Multiplies 2 tensors (and/or variables) and returns a tensor.
k_categorical_crossentropy
Categorical crossentropy between an output tensor and a target tensor.
Adds a 1-sized dimension at index axis.
1D convolution.
Returns whether a tensor is a sparse tensor.
Sets entries in x to zero at random, while scaling the entire tensor.
Normalizes a tensor w.r.t. the L2 norm alongside the specified axis.
Instantiates an identity matrix and returns it.
Destroys the current TF graph and creates a new one.
Reduce elems using fn to combine them from left to right.
Compute the moving average of a variable.
3D convolution.
TF session to be used by the backend.
Returns the number of axes in a tensor, as an integer.
Reduce elems using fn to combine them from right to left.
3D deconvolution (i.e. transposed convolution).
Get the uid for the default graph.
2D convolution.
Repeats a 2D tensor.
Fuzz factor used in numeric expressions.
Returns a tensor with the same content as the input tensor.
Repeats the elements of a tensor along an axis.
2D deconvolution (i.e. transposed convolution).
Instantiates a Keras function
Element-wise equality between two tensors.
Default image data format convention ('channels_first' or 'channels_last').
Cumulative sum of the values in a tensor, alongside the specified axis.
Retrieves the elements of indices in the tensor reference.
Element-wise truth value of (x <= y).
Returns whether x is a Keras tensor.
Depthwise 2D convolution with separable filters.
Element-wise truth value of (x >= y).
Returns whether x is a placeholder.
Returns the dtype of a Keras tensor or variable, as a string.
Apply 1D conv with un-shared weights.
Segment-wise linear approximation of sigmoid.
Computes log(sum(exp(elements across dimensions of a tensor))).
Map the function fn over the elements elems and return the outputs.
k_manual_variable_initialization
Sets the manual variable initialization flag.
initializer_glorot_uniform
Glorot uniform initializer, also called Xavier uniform initializer.
Maximum value in a tensor.
Exponential linear unit.
k_normalize_batch_in_training
Computes mean and std for batch, then applies batch_normalization on batch.
Selects x in test phase, and alt otherwise.
Active Keras backend
Flatten a tensor.
Element-wise inequality between two tensors.
Batchwise dot product.
Returns whether the targets are in the top k predictions.
Default float type
Apply 2D conv with un-shared weights.
Computes cos of x element-wise.
Returns the static number of elements in a Keras variable or tensor.
Element-wise rounding to the closest integer.
Instantiates a variable with values drawn from a normal distribution.
Returns the learning phase flag.
2D convolution with separable filters.
Returns the value of a variable.
Returns a tensor with uniform distribution of values.
Element-wise truth value of (x < y).
Returns the symbolic shape of a tensor or variable.
Resizes the images contained in a 4D tensor.
Returns the shape of a variable.
Reverse a tensor along the specified axes.
3D Pooling.
Resizes the volume contained in a 5D tensor.
Iterates over the time dimension of a tensor.
Returns the gradients of variables w.r.t. loss.
Pads the 2nd and 3rd dimensions of a 4D tensor.
Element-wise exponentiation.
Softsign of a tensor.
Element-wise sigmoid.
Element-wise truth value of (x > y).
k_ctc_label_dense_to_sparse
Converts CTC labels from dense to sparse.
k_sparse_categorical_crossentropy
Categorical crossentropy with integer targets.
Standard deviation of a tensor, alongside the specified axis.
Stacks a list of rank R tensors into a rank R+1 tensor.
Removes a 1-dimension from the tensor at index axis.
Returns variables but with zero gradient w.r.t. every other variable.
Cumulative product of the values in a tensor, alongside the specified axis.
Element-wise log.
Update the value of x to new_x.
Instantiates an all-zeros variable of the same shape as another tensor.
Evaluates the value of a variable.
Minimum value in a tensor.
Update the value of x by adding increment.
R interface to Keras
Element-wise minimum of two tensors.
Pads 5D tensor with zeros along the depth, height, width dimensions.
Exponential Linear Unit.
Element-wise exponential.
Layer that adds a list of inputs.
Applies Alpha Dropout to the input.
Element-wise tanh.
Selects x in train phase, and alt otherwise.
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
layer_activation_leaky_relu
Leaky version of a Rectified Linear Unit.
Pads the middle dimension of a 3D tensor.
Returns the shape of tensor or variable as a list of int or NULL entries.
Fast GRU implementation backed by CuDNN.
Average pooling operation for spatial data.
Element-wise maximum of two tensors.
Mean of a tensor, alongside the specified axis.
Creates a tensor by tiling x by n.
Prints message and the tensor value when evaluated.
Average pooling operation for 3D data (spatial or spatio-temporal).
Instantiates a placeholder tensor and returns it.
Converts a sparse tensor into a dense tensor and returns it.
Multiplies the values in a tensor, alongside the specified axis.
2D Pooling.
Computes the one-hot representation of an integer tensor.
Sets the learning phase to a fixed value.
Reset graph identifiers.
Cropping layer for 1D input (e.g. temporal sequence).
Depthwise separable 2D convolution.
Returns a tensor with random binomial distribution of values.
Cropping layer for 2D input (e.g. picture).
Reshapes a tensor to the specified shape.
Layer that computes a dot product between samples in two tensors.
Apply additive zero-centered Gaussian noise.
Returns a tensor with normal distribution of values.
Element-wise sign.
layer_global_max_pooling_3d
Global Max pooling operation for 3D data.
layer_global_average_pooling_1d
Global average pooling operation for temporal data.
Element-wise square root.
Returns a tensor with truncated random normal distribution of values.
Computes sin of x element-wise.
Gated Recurrent Unit - Cho et al.
layer_activation_parametric_relu
Parametric Rectified Linear Unit.
Long Short-Term Memory unit - Hochreiter 1997.
Element-wise square.
Layer that averages a list of inputs.
Softmax activation function.
Masks a sequence by using a mask value to skip timesteps.
layer_batch_normalization
Batch normalization layer (Ioffe and Szegedy, 2014).
Average pooling for temporal data.
Transposes a tensor and returns it.
Sets the value of a variable, from an R array.
Sum of the values in a tensor, alongside the specified axis.
1D convolution layer (e.g. temporal convolution).
Softmax of a tensor.
Switches between two operations depending on a scalar value.
Instantiates an all-ones tensor variable and returns it.
Softplus of a tensor.
Instantiates a variable and returns it.
Layer that concatenates a list of inputs.
Instantiates an all-ones variable of the same shape as another tensor.
Flattens an input
Instantiates an all-zeros variable and returns it.
Max pooling operation for 3D data (spatial or spatio-temporal).
Permutes axes in a tensor.
Apply multiplicative 1-centered Gaussian noise.
Keras array object
layer_global_average_pooling_2d
Global average pooling operation for spatial data.
k_random_uniform_variable
Instantiates a variable with values drawn from a uniform distribution.
Layer that computes the maximum (element-wise) of a list of inputs.
Keras Model
Update the value of x by subtracting decrement.
Rectified linear unit.
Spatial 3D version of Dropout.
Variance of a tensor, alongside the specified axis.
Transposed 2D convolution layer (sometimes called Deconvolution).
layer_locally_connected_1d
Locally-connected layer for 1D inputs.
Keras Model composed of a linear stack of layers
Layer that subtracts two inputs.
3D convolution layer (e.g. spatial convolution over volumes).
layer_locally_connected_2d
Locally-connected layer for 2D inputs.
Apply an activation function to an output.
Fully-connected RNN where the output is to be fed back to input.
layer_global_average_pooling_3d
Global Average pooling operation for 3D data.
Layer that computes the minimum (element-wise) of a list of inputs.
Pads sequences to the same length
Zero-padding layer for 2D input (e.g. picture).
Max pooling operation for temporal data.
Layer that multiplies (element-wise) a list of inputs.
Spatial 1D version of Dropout.
Zero-padding layer for 3D data (spatial or spatio-temporal).
Max pooling operation for spatial data.
Upsampling layer for 1D inputs.
Spatial 2D version of Dropout.
Stochastic gradient descent optimizer
Depthwise separable 2D convolution.
Model configuration as YAML
Upsampling layer for 2D inputs.
layer_activation_thresholded_relu
Thresholded Rectified Linear Unit.
Convolutional LSTM.
Model performance metrics
Upsampling layer for 3D inputs.
Fast LSTM implementation backed by CuDNN.
Model configuration as JSON
layer_activity_regularization
Layer that applies an update to the cost function based on input activity.
Zero-padding layer for 1D input (e.g. temporal sequence).
Replicates a model on different GPUs.
Add a densely-connected NN layer to an output
Transposed 3D convolution layer (sometimes called Deconvolution).
Adadelta optimizer.
Input layer
2D convolution layer (e.g. spatial convolution over images).
Convert text to a sequence of words (or tokens).
Wraps arbitrary expression as a layer
Applies Dropout to the input.
Adagrad optimizer.
Permute the dimensions of an input according to a given pattern
Turns positive integers (indexes) into dense vectors of fixed size.
Save a text tokenizer to an external file
Remove the last layer in a model
layer_global_max_pooling_1d
Global max pooling operation for temporal data.
Normalize a matrix or nd-array
Convert a list of sequences into a matrix.
Generates predictions for the input samples from a data generator.
layer_global_max_pooling_2d
Global max pooling operation for spatial data.
predict.keras.engine.training.Model
Generate predictions from a Keras model
Returns predictions for a single batch of samples.
Select a Keras implementation and backend
Model loss functions
One-hot encode a text into a list of word indexes in a vocabulary of size n.
Assign values to names
Generates a word rank-based probabilistic sampling table.
Provide a scope with mappings of names to custom objects
Apply a layer to every temporal slice of an input.
L1 and L2 regularization
Adam optimizer
summary.keras.engine.training.Model
Print a summary of a Keras model
Adamax optimizer
Reset the states for a layer
Utility function for generating batches of temporal data.
Repeats the input n times.
Converts a text to a sequence of indexes in a fixed-size hashing space.
Pipe operator
Nesterov Adam optimizer
plot.keras_training_history
Plot training history
Reshapes an output to a certain shape.
Transform each text in texts into a sequence of integers.
RMSProp optimizer
Serialize a model to an R object
Depthwise separable 1D convolution.
texts_to_sequences_generator
Transforms each text in texts into a sequence of integers.
Save/Load model weights using HDF5 files
Generates probability or class probability predictions for the input samples.
Converts a class vector (integers) to binary class matrix.
Generates skipgram word pairs.
Single gradient update or model evaluation over one batch of samples.
Text tokenization utility
Objects exported from other packages
Convert a list of texts to a matrix.
Save/Load models using HDF5 files