kerasR (version 0.6.1)

Dense: Regular, densely-connected NN layer.

Description

Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True). Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with kernel.
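
To make the formula concrete, the following base-R sketch reproduces this forward pass for a rank-2 input. The matrix X, kernel W, bias b, and the relu() helper here are illustrative stand-ins, not part of the kerasR API.

# Illustrative only: what a fitted Dense layer computes on a rank-2 input
X <- matrix(rnorm(4 * 10), nrow = 4)       # 4 samples, 10 input features
W <- matrix(rnorm(10 * 3), nrow = 10)      # kernel: 10 inputs -> 3 units
b <- rep(0.1, 3)                           # bias vector, one entry per unit
relu <- function(z) pmax(z, 0)             # element-wise activation
output <- relu(sweep(X %*% W, 2, b, "+"))  # activation(dot(input, kernel) + bias)
dim(output)                                # 4 x 3: one row per sample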

Usage

Dense(units, activation = "linear", use_bias = TRUE,
  kernel_initializer = "glorot_uniform", bias_initializer = "zeros",
  kernel_regularizer = NULL, bias_regularizer = NULL,
  activity_regularizer = NULL, kernel_constraint = NULL,
  bias_constraint = NULL, input_shape = NULL)

Arguments

units

Positive integer, dimensionality of the output space.

activation

The activation function to use.

use_bias

Boolean, whether the layer uses a bias vector.

kernel_initializer

Initializer for the kernel weights matrix.

bias_initializer

Initializer for the bias vector.

kernel_regularizer

Regularizer function applied to the kernel weights matrix.

bias_regularizer

Regularizer function applied to the bias vector.

activity_regularizer

Regularizer function applied to the output of the layer (its "activation").

kernel_constraint

Constraint function applied to the kernel weights matrix.

bias_constraint

Constraint function applied to the bias vector.

input_shape

Only needed when this is the first layer of a model; sets the input shape of the data.
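
As an illustration of these arguments, a hidden layer using a ReLU activation and no bias term might be declared as below; the particular values are arbitrary.

# First layer of a model, so input_shape must be supplied
Dense(units = 64, activation = "relu", use_bias = FALSE,
      kernel_initializer = "glorot_uniform", input_shape = 10)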

Author

Taylor B. Arnold, taylor.arnold@acm.org

References

Chollet, Francois. 2015. Keras: Deep Learning library for Theano and TensorFlow.

See Also

Other layers: Activation, ActivityRegularization, AdvancedActivation, BatchNormalization, Conv, Dropout, Embedding, Flatten, GaussianNoise, LayerWrapper, LocallyConnected, Masking, MaxPooling, Permute, RNN, RepeatVector, Reshape, Sequential

Examples

if (keras_available()) {
  # Simulated training data: 100 samples, 10 features, 3 classes
  X_train <- matrix(rnorm(100 * 10), nrow = 100)
  Y_train <- to_categorical(matrix(sample(0:2, 100, TRUE), ncol = 1), 3)

  # Feed-forward network with one hidden dense layer of 50 units
  mod <- Sequential()
  mod$add(Dense(units = 50, input_shape = dim(X_train)[2]))
  mod$add(Dropout(rate = 0.5))
  mod$add(Activation("relu"))
  mod$add(Dense(units = 3))
  mod$add(ActivityRegularization(l1 = 1))
  mod$add(Activation("softmax"))
  keras_compile(mod, loss = 'categorical_crossentropy', optimizer = RMSprop())

  keras_fit(mod, X_train, Y_train, batch_size = 32, epochs = 5,
            verbose = 0, validation_split = 0.2)
}
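
After fitting, predictions can be made from the same model object. Assuming kerasR's keras_predict_classes() helper, for example:

keras_predict_classes(mod, X_train)  # predicted class label for each row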
