
torch

Installation

torch can be installed from CRAN with:

install.packages("torch")

You can also install the development version with:

remotes::install_github("mlverse/torch")

Additional software will be installed the first time the package is loaded.
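
If the automatic download is not possible (for example on a machine without internet access or behind a strict proxy), the backend can also be installed explicitly. A minimal sketch using the package's own install_torch() and torch_is_installed() helpers:

# Check for the backend without loading the package, which would
# otherwise trigger the automatic installation on first load
if (!torch::torch_is_installed()) {
  torch::install_torch()
}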

Installation with Docker

If you would like to install with Docker, please read the following document.

Examples

You can create torch tensors from R objects with the torch_tensor function and convert them back to R objects with as_array.

library(torch)
x <- array(runif(8), dim = c(2, 2, 2))
y <- torch_tensor(x, dtype = torch_float64())
y
#> torch_tensor
#> (1,.,.) = 
#>   0.7658  0.6123
#>   0.3150  0.4639
#> 
#> (2,.,.) = 
#>   0.0604  0.0290
#>   0.9553  0.6541
#> [ CPUDoubleType{2,2,2} ]
identical(x, as_array(y))
#> [1] TRUE
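
The example above requests dtype = torch_float64() explicitly. By default, R doubles are converted to torch's float32 (note the CPUFloatType output in the autograd example below), so a default round trip is generally not exact. A small sketch illustrating this assumption:

z <- torch_tensor(x)       # no dtype given: defaults to float32
identical(x, as_array(z))  # expected FALSE, due to float32 rounding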

Simple Autograd Example

In the following snippet we let torch calculate the derivatives using its autograd feature:

x <- torch_tensor(1, requires_grad = TRUE)
w <- torch_tensor(2, requires_grad = TRUE)
b <- torch_tensor(3, requires_grad = TRUE)
y <- w * x + b
y$backward()
x$grad
#> torch_tensor
#>  2
#> [ CPUFloatType{1} ]
w$grad
#> torch_tensor
#>  1
#> [ CPUFloatType{1} ]
b$grad
#> torch_tensor
#>  1
#> [ CPUFloatType{1} ]
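
These values match the analytic derivatives: since y = w * x + b, we have dy/dx = w = 2, dy/dw = x = 1, and dy/db = 1.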

Contributing

No matter your current skill level, it's possible to contribute to torch development. See the contributing guide for more information.

Monthly Downloads: 11,102
Version: 0.8.0
License: MIT + file LICENSE
Maintainer: Daniel Falbel
Last Published: June 9th, 2022

Functions in torch (0.8.0)

nnf_hardsigmoid

Hardsigmoid
linalg_lstsq

Computes a solution to the least squares problem of a system of linear equations.
enumerate.dataloader

Enumerate an iterator
is_torch_device

Checks if object is a device
autograd_function

Records operation history and defines formulas for differentiating ops.
nn_init_trunc_normal_

Truncated normal initialization
autograd_backward

Computes the sum of gradients of given tensors w.r.t. graph leaves.
linalg_inv_ex

Computes the inverse of a square matrix if it is invertible.
is_torch_dtype

Check if object is a torch data type
linalg_multi_dot

Efficiently multiplies two or more matrices
dataloader

Data loader. Combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset (a usage sketch appears at the end of this reference).
distr_mixture_same_family

Mixture of components in the same family
dataloader_make_iter

Creates an iterator from a DataLoader
nn_batch_norm2d

BatchNorm2D
call_torch_function

Call a (Potentially Unexported) Torch Function
contrib_sort_vertices

Contrib sort vertices
nn_ctc_loss

The Connectionist Temporal Classification loss.
linalg_cholesky

Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix.
jit_tuple

Adds the 'jit_tuple' class to the input
nn_lp_pool1d

Applies a 1D power-average pooling over an input signal composed of several input planes.
backends_cudnn_is_available

CuDNN is available
backends_cudnn_version

CuDNN version
nn_group_norm

Group normalization
enumerate

Enumerate an iterator
nn_fractional_max_pool3d

Applies a 3D fractional max pooling over an input signal composed of several input planes.
nn_prune_head

Prune top layer(s) of a network
torch_set_default_dtype

Gets and sets the default floating point dtype.
nn_conv_transpose1d

ConvTranspose1D
nn_lp_pool2d

Applies a 2D power-average pooling over an input signal composed of several input planes.
nnf_pairwise_distance

Pairwise_distance
Constraint

Abstract base class for constraints.
cuda_get_device_capability

Returns the major and minor CUDA capability of device
jit_save

Saves a script_function to a path
nn_bce_loss

Binary cross entropy loss
is_undefined_tensor

Checks if a tensor is undefined
nn_adaptive_avg_pool1d

Applies a 1D adaptive average pooling over an input signal composed of several input planes.
cuda_is_available

Returns a bool indicating if CUDA is currently available.
dataset_subset

Dataset Subset
torch_conv_transpose1d

Conv_transpose1d
is_torch_qscheme

Checks if an object is a QScheme
linalg_matrix_rank

Computes the numerical rank of a matrix.
linalg_cholesky_ex

Computes the Cholesky decomposition of a complex Hermitian or real symmetric positive-definite matrix.
nn_cross_entropy_loss

CrossEntropyLoss module
nn_max_pool1d

MaxPool1D module
linalg_svd

Computes the singular value decomposition (SVD) of a matrix.
jit_trace_module

Trace a module
nn_dropout

Dropout module
lr_step

Step learning rate decay
nn_elu

ELU module
AutogradContext

Class representing the context.
jit_load

Loads a script_function or script_module previously saved with jit_save
as_array

Converts to array
nn_identity

Identity module
nn_init_orthogonal_

Orthogonal initialization
linalg_solve

Computes the solution of a square system of linear equations with a unique solution.
nn_hinge_embedding_loss

Hinge embedding loss
jit_compile

Compile TorchScript code into a graph
nn_init_sparse_

Sparse initialization
nnf_conv2d

Conv2d
nn_sigmoid

Sigmoid module
nn_relu

ReLU module
nn_init_zeros_

Zeros initialization
nn_init_uniform_

Uniform initialization
linalg_cond

Computes the condition number of a matrix with respect to a matrix norm.
nn_kl_div_loss

Kullback-Leibler divergence loss
nn_conv3d

Conv3D module
nn_embedding_bag

Embedding bag module
nn_multilabel_soft_margin_loss

Multi label soft margin loss
nnf_adaptive_avg_pool3d

Adaptive_avg_pool3d
nn_poisson_nll_loss

Poisson NLL loss
nn_bce_with_logits_loss

BCE with logits loss
nn_dropout2d

Dropout2D module
nn_conv2d

Conv2D module
nn_init_kaiming_uniform_

Kaiming uniform initialization
is_nn_parameter

Checks if an object is a nn_parameter
lr_scheduler

Creates learning rate schedulers
nnf_pdist

Pdist
nnf_conv1d

Conv1d
cuda_device_count

Returns the number of GPUs available.
install_torch

Install Torch
get_install_libs_url

List of files to download
cuda_current_device

Returns the index of a currently selected device.
nnf_multilabel_margin_loss

Multilabel_margin_loss
torch_broadcast_tensors

Broadcast_tensors
nn_embedding

Embedding module
install_torch_from_file

Install Torch from files
nnf_avg_pool3d

Avg_pool3d
nnf_adaptive_avg_pool2d

Adaptive_avg_pool2d
nn_sequential

A sequential container
nn_fractional_max_pool2d

Applies a 2D fractional max pooling over an input signal composed of several input planes.
Distribution

Generic R6 class representing distributions
is_dataloader

Checks if the object is a dataloader
distr_gamma

Creates a Gamma distribution parameterized by shape concentration and rate.
linalg_vector_norm

Computes a vector norm.
distr_chi2

Creates a Chi2 distribution parameterized by shape parameter df. This is exactly equivalent to distr_gamma(alpha=0.5*df, beta=0.5)
nn_batch_norm1d

BatchNorm1D module
dataloader_next

Get the next element of a dataloader iterator
dataset

Helper function to create a function that generates R6 instances of class dataset
nn_init_kaiming_normal_

Kaiming normal initialization
nn_flatten

Flattens a contiguous range of dims into a tensor.
is_optimizer

Checks if the object is a torch optimizer
nn_smooth_l1_loss

Smooth L1 loss
linalg_svdvals

Computes the singular values of a matrix.
torch_cumsum

Cumsum
is_torch_layout

Check if an object is a torch layout.
nn_gru

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
nnf_batch_norm

Batch_norm
jit_trace

Trace a function and return an executable script_function.
nn_prelu

PReLU module
nnf_avg_pool2d

Avg_pool2d
nn_tanhshrink

Tanhshrink module
nnf_hardswish

Hardswish
nnf_layer_norm

Layer_norm
linalg_det

Computes the determinant of a square matrix.
linalg_matrix_norm

Computes a matrix norm.
linalg_tensorinv

Computes the multiplicative inverse of torch_tensordot()
lr_reduce_on_plateau

Reduce learning rate on plateau
nnf_elu

Elu
broadcast_all

Given a list of values (possibly containing numbers), returns a list where each value is broadcast according to torch's broadcasting rules.
nn_hardtanh

Hardtanh module
linalg_eigvals

Computes the eigenvalues of a square matrix.
nn_celu

CELU module
nn_max_pool2d

MaxPool2D module
is_nn_buffer

Checks if the object is a nn_buffer
backends_openmp_is_available

OpenMP is available
nnf_softshrink

Softshrink
nn_nll_loss

Nll loss
nn_max_unpool3d

Computes a partial inverse of MaxPool3d.
linalg_matrix_power

Computes the n-th power of a square matrix for an integer n.
nn_tanh

Tanh module
nn_utils_rnn_pad_sequence

Pad a list of variable length Tensors with padding_value
nn_log_softmax

LogSoftmax module
nn_avg_pool3d

Applies a 3D average pooling over an input signal composed of several input planes.
nn_buffer

Creates a nn_buffer
nn_hardswish

Hardswish module
linalg_slogdet

Computes the sign and natural logarithm of the absolute value of the determinant of a square matrix.
nn_gelu

GELU module
linalg_tensorsolve

Computes the solution X to the system torch_tensordot(A, X) = B.
linalg_eigvalsh

Computes the eigenvalues of a complex Hermitian or real symmetric matrix.
nn_max_unpool2d

Computes a partial inverse of MaxPool2d.
nn_conv1d

Conv1D module
nn_softsign

Softsign module
autograd_grad

Computes and returns the sum of gradients of outputs w.r.t. the inputs.
cuda_memory_stats

Returns a dictionary of CUDA memory allocator statistics for a given device.
nn_adaptive_avg_pool3d

Applies a 3D adaptive average pooling over an input signal composed of several input planes.
nnf_dropout2d

Dropout2d
nn_adaptive_max_pool1d

Applies a 1D adaptive max pooling over an input signal composed of several input planes.
torch_addmv

Addmv
nn_lstm

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
nn_leaky_relu

LeakyReLU module
nnf_adaptive_max_pool3d

Adaptive_max_pool3d
nn_init_dirac_

Dirac initialization
distr_poisson

Creates a Poisson distribution parameterized by rate, the rate parameter.
nn_glu

GLU module
nnf_one_hot

One_hot
nnf_dropout3d

Dropout3d
torch_arctanh

Arctanh
distr_normal

Creates a normal (also called Gaussian) distribution parameterized by loc and scale.
nn_utils_clip_grad_norm_

Clips gradient norm of an iterable of parameters.
nnf_affine_grid

Affine_grid
cuda_runtime_version

Returns the CUDA runtime version
nnf_alpha_dropout

Alpha_dropout
nn_linear

Linear module
optimizer

Creates a custom optimizer
nn_init_eye_

Eye initialization
distr_bernoulli

Creates a Bernoulli distribution parameterized by probs or logits (but not both). Samples are binary (0 or 1). They take the value 1 with probability p and 0 with probability 1 - p.
linalg_eig

Computes the eigenvalue decomposition of a square matrix if it exists.
optim_sgd

SGD optimizer
nnf_leaky_relu

Leaky_relu
nn_margin_ranking_loss

Margin ranking loss
nn_adaptive_avg_pool2d

Applies a 2D adaptive average pooling over an input signal composed of several input planes.
nn_layer_norm

Layer normalization
distr_multivariate_normal

Gaussian distribution
nn_batch_norm3d

BatchNorm3D
distr_categorical

Creates a categorical distribution parameterized by either probs or logits (but not both).
lr_one_cycle

One-cycle learning rate
nnf_adaptive_avg_pool1d

Adaptive_avg_pool1d
autograd_set_grad_mode

Set grad mode
is_nn_module

Checks if the object is an nn_module
jit_scalar

Adds the 'jit_scalar' class to the input
torch_floor

Floor
backends_mkl_is_available

MKL is available
nn_bilinear

Bilinear module
backends_mkldnn_is_available

MKLDNN is available
nn_contrib_sparsemax

Sparsemax activation
nn_rrelu

RReLU module
torch_argmax

Argmax
nn_softmax

Softmax module
nnf_conv_transpose3d

Conv_transpose3d
linalg_eigh

Computes the eigenvalue decomposition of a complex Hermitian or real symmetric matrix.
nnf_bilinear

Bilinear
torch_amax

Amax
nnf_multilabel_soft_margin_loss

Multilabel_soft_margin_loss
torch_argsort

Argsort
nn_adaptive_log_softmax_with_loss

AdaptiveLogSoftmaxWithLoss module
nn_init_ones_

Ones initialization
nn_upsample

Upsample module
nn_soft_margin_loss

Soft margin loss
torch_argmin

Argmin
nnf_embedding

Embedding
nnf_cosine_embedding_loss

Cosine_embedding_loss
torch_bmm

Bmm
torch_addr

Addr
torch_clip

Clip
nn_selu

SELU module
nnf_binary_cross_entropy_with_logits

Binary_cross_entropy_with_logits
nnf_avg_pool1d

Avg_pool1d
linalg_qr

Computes the QR decomposition of a matrix.
torch_chunk

Chunk
torch_bitwise_not

Bitwise_not
torch_diff

Computes the n-th forward difference along the given dimension.
nn_l1_loss

L1 loss
load_state_dict

Load a state dict file
lr_multiplicative

Multiply the learning rate of each parameter group by the factor given in the specified function. When last_epoch=-1, sets initial lr as lr.
linalg_norm

Computes a vector or matrix norm.
nnf_relu

Relu
nn_module

Base class for all neural network modules.
nnf_cosine_similarity

Cosine_similarity
nn_unflatten

Unflattens a tensor dim, expanding it to a desired shape. For use with nn_sequential.
nnf_binary_cross_entropy

Binary_cross_entropy
torch_poisson

Poisson
nn_utils_rnn_pack_sequence

Packs a list of variable length Tensors
nn_softplus

Softplus module
nn_pairwise_distance

Pairwise distance
nn_dropout3d

Dropout3D module
linalg_pinv

Computes the pseudoinverse (Moore-Penrose inverse) of a matrix.
nnf_conv_tbc

Conv_tbc
nn_conv_transpose3d

ConvTranspose3D module
nnf_margin_ranking_loss

Margin_ranking_loss
torch_block_diag

Block_diag
nnf_pixel_shuffle

Pixel_shuffle
nn_init_normal_

Normal initialization
torch_empty

Empty
torch_conv_transpose2d

Conv_transpose2d
torch_cholesky_solve

Cholesky_solve
torch_cumprod

Cumprod
torch_bitwise_and

Bitwise_and
nnf_max_unpool2d

Max_unpool2d
torch_floor_divide

Floor_divide
nn_hardsigmoid

Hardsigmoid module
nnf_gelu

Gelu
nnf_logsigmoid

Logsigmoid
nnf_max_unpool1d

Max_unpool1d
torch_expm1

Expm1
nnf_softplus

Softplus
torch_avg_pool1d

Avg_pool1d
torch_digamma

Digamma
nnf_log_softmax

Log_softmax
nn_avg_pool2d

Applies a 2D average pooling over an input signal composed of several input planes.
torch_adaptive_avg_pool1d

Adaptive_avg_pool1d
nnf_softmin

Softmin
torch_equal

Equal
nnf_max_pool1d

Max_pool1d
torch_index_put_

In-place version of torch_index_put.
torch_mul

Mul
torch_fft_rfft

Rfft
sampler

Creates a new Sampler
torch_asinh

Asinh
is_torch_memory_format

Check if an object is a memory format
lr_lambda

Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.
torch_index_select

Index_select
nnf_cross_entropy

Cross_entropy
nnf_poisson_nll_loss

Poisson_nll_loss
torch_empty_like

Empty_like
linalg_householder_product

Computes the first n columns of a product of Householder matrices.
torch_cholesky

Cholesky
nnf_glu

Glu
nnf_softmax

Softmax
%>%

Pipe operator
nnf_prelu

Prelu
torch_arcsin

Arcsin
torch_bartlett_window

Bartlett_window
torch_symeig

Symeig
jit_save_for_mobile

Saves a script_function or script_module in bytecode form, to be loaded on a mobile device
nn_multihead_attention

MultiHead attention
linalg_inv

Computes the inverse of a square matrix if it exists.
reexports

Re-exporting the as_iterator function.
nn_module_list

Holds submodules in a list.
torch_conv1d

Conv1d
optim_lbfgs

LBFGS optimizer
nn_utils_rnn_pack_padded_sequence

Packs a Tensor containing padded sequences of variable length.
slc

Creates a slice
optim_required

Dummy value indicating a required value.
torch_atleast_3d

Atleast_3d
torch_atan

Atan
nn_avg_pool1d

Applies a 1D average pooling over an input signal composed of several input planes.
torch_allclose

Allclose
torch_cosh

Cosh
torch_clamp

Clamp
torch_eye

Eye
torch_erf

Erf
nnf_l1_loss

L1_loss
nn_adaptive_max_pool3d

Applies a 3D adaptive max pooling over an input signal composed of several input planes.
nnf_pad

Pad
optim_adadelta

Adadelta optimizer
torch_fft_irfft

Irfft
nnf_embedding_bag

Embedding_bag
torch_t

T
nn_conv_transpose2d

ConvTranspose2D module
nnf_contrib_sparsemax

Sparsemax
nnf_hardshrink

Hardshrink
nn_mse_loss

MSE loss
nn_parameter

Creates an nn_parameter
nn_log_sigmoid

LogSigmoid module
nn_adaptive_max_pool2d

Applies a 2D adaptive max pooling over an input signal composed of several input planes.
nnf_lp_pool1d

Lp_pool1d
nn_softmin

Softmin
torch_ceil

Ceil
torch_blackman_window

Blackman_window
torch_log2

Log2
torch_min

Min
nn_max_unpool1d

Computes a partial inverse of MaxPool1d.
nnf_hardtanh

Hardtanh
nn_relu6

ReLU6 module
nn_utils_clip_grad_value_

Clips gradient of an iterable of parameters at specified value.
nnf_mse_loss

Mse_loss
nn_multi_margin_loss

Multi margin loss
nnf_celu

Celu
torch_gt

Gt
nn_init_constant_

Constant initialization
nn_cosine_embedding_loss

Cosine embedding loss
nn_hardshrink

Hardshrink module
torch_logical_and

Logical_and
torch_lerp

Lerp
optim_adagrad

Adagrad optimizer
nnf_adaptive_max_pool1d

Adaptive_max_pool1d
nnf_gumbel_softmax

Gumbel_softmax
nn_triplet_margin_with_distance_loss

Triplet margin with distance loss
nn_utils_rnn_pad_packed_sequence

Pads a packed batch of variable length sequences.
torch_isreal

Isreal
torch_fliplr

Fliplr
torch_lu

LU
torch_logaddexp

Logaddexp
torch_dist

Dist
torch_diagflat

Diagflat
torch_cosine_similarity

Cosine_similarity
torch_complex

Complex
nnf_triplet_margin_with_distance_loss

Triplet margin with distance loss
nn_init_calculate_gain

Calculate gain
nn_softmax2d

Softmax2d module
nn_init_xavier_uniform_

Xavier uniform initialization
torch_relu_

Relu_
torch_atleast_2d

Atleast_2d
nn_init_xavier_normal_

Xavier normal initialization
nnf_fractional_max_pool2d

Fractional_max_pool2d
nnf_max_unpool3d

Max_unpool3d
torch_add

Add
torch_isposinf

Isposinf
torch_isneginf

Isneginf
torch_hamming_window

Hamming_window
torch_div

Div
torch_less

Less
torch_greater

Greater
nnf_conv3d

Conv3d
torch_iinfo

Integer type info
torch_lt

Lt
torch_logcumsumexp

Logcumsumexp
nnf_multi_head_attention_forward

Multi head attention forward
torch_baddbmm

Baddbmm
torch_atleast_1d

Atleast_1d
torch_flipud

Flipud
torch_arccosh

Arccosh
torch_fmod

Fmod
torch_normal

Normal
torch_load

Loads a saved object
nn_max_pool3d

Applies a 3D max pooling over an input signal composed of several input planes.
nn_multilabel_margin_loss

Multilabel margin loss
nnf_dropout

Dropout
torch_frac

Frac
torch_addbmm

Addbmm
nnf_fractional_max_pool3d

Fractional_max_pool3d
torch_is_complex

Is_complex
torch_sigmoid

Sigmoid
nnf_kl_div

Kl_div
torch_greater_equal

Greater_equal
nn_rnn

RNN module
torch_addcdiv

Addcdiv
nn_softshrink

Softshrink module
torch_gather

Gather
torch_index

Index torch tensors
torch_linspace

Linspace
nnf_sigmoid

Sigmoid
torch_chain_matmul

Chain_matmul
torch_imag

Imag
torch_is_floating_point

Is_floating_point
torch_istft

Istft
torch_diagonal

Diagonal
torch_cross

Cross
torch_cdist

Cdist
nnf_group_norm

Group_norm
torch_cos

Cos
torch_logdet

Logdet
torch_reduction

Creates the reduction object
torch_minimum

Minimum
torch_repeat_interleave

Repeat_interleave
torch_geqrf

Geqrf
torch_cholesky_inverse

Cholesky_inverse
torch_norm

Norm
torch_reshape

Reshape
torch_movedim

Movedim
torch_quantize_per_tensor

Quantize_per_tensor
torch_conj

Conj
torch_erfc

Erfc
torch_count_nonzero

Count_nonzero
torch_unsafe_chunk

Unsafe_chunk
torch_log1p

Log1p
torch_tanh

Tanh
torch_quantize_per_channel

Quantize_per_channel
nnf_lp_pool2d

Lp_pool2d
nnf_conv_transpose1d

Conv_transpose1d
torch_prod

Prod
with_enable_grad

Enable grad
torch_reciprocal

Reciprocal
torch_divide

Divide
torch_narrow

Narrow
torch_ne

Ne
torch_ones

Ones
torch_not_equal

Not_equal
torch_pow

Pow
torch_relu

Relu
nnf_linear

Linear
torch_ger

Ger
torch_dot

Dot
torch_index_put

Modify values selected by indices.
torch_erfinv

Erfinv
torch_tan

Tan
torch_svd

Svd
nnf_adaptive_max_pool2d

Adaptive_max_pool2d
nn_threshold

Threshold module
nnf_max_pool2d

Max_pool2d
nn_triplet_margin_loss

Triplet margin loss
torch_conv_tbc

Conv_tbc
nnf_multi_margin_loss

Multi_margin_loss
nnf_softsign

Softsign
torch_unsafe_split

Unsafe_split
nnf_selu

Selu
torch_atanh

Atanh
with_no_grad

Temporarily modify gradient recording.
torch_trunc

Trunc
torch_negative

Negative
nnf_ctc_loss

Ctc_loss
torch_manual_seed

Sets the seed for generating random numbers.
nnf_unfold

Unfold
torch_maximum

Maximum
nnf_local_response_norm

Local_response_norm
torch_var_mean

Var_mean
nnf_relu6

Relu6
torch_gcd

Gcd
torch_var

Var
threads

Number of threads
tensor_dataset

Dataset wrapping tensors.
nnf_tanhshrink

Tanhshrink
torch_rsqrt

Rsqrt
torch_rrelu_

Rrelu_
torch_pdist

Pdist
torch_rand

Rand
torch_isinf

Isinf
torch_logaddexp2

Logaddexp2
torch_isnan

Isnan
torch_arcsinh

Arcsinh
nnf_conv_transpose2d

Conv_transpose2d
torch_sign

Sign
torch_atan2

Atan2
nnf_fold

Fold
torch_arctan

Arctan
nnf_smooth_l1_loss

Smooth_l1_loss
nnf_interpolate

Interpolate
torch_max

Max
torch_kaiser_window

Kaiser_window
torch_kron

Kronecker product
torch_masked_select

Masked_select
torch_logical_not

Logical_not
torch_mvlgamma

Mvlgamma
torch_save

Saves an object to a disk file.
torch_neg

Neg
nnf_grid_sample

Grid_sample
torch_triu

Triu
torch_absolute

Absolute
torch_take

Take
torch_result_type

Result_type
optim_asgd

Averaged Stochastic Gradient Descent optimizer
torch_celu

Celu
torch_angle

Angle
nnf_max_pool3d

Max_pool3d
nnf_rrelu

Rrelu
torch_exp

Exp
torch_cat

Cat
torch_bernoulli

Bernoulli
nnf_hinge_embedding_loss

Hinge_embedding_loss
torch_bitwise_or

Bitwise_or
nnf_instance_norm

Instance_norm
nnf_soft_margin_loss

Soft_margin_loss
torch_acosh

Acosh
nnf_nll_loss

Nll_loss
nnf_normalize

Normalize
nnf_triplet_margin_loss

Triplet_margin_loss
nnf_threshold

Threshold
torch_amin

Amin
torch_rad2deg

Rad2deg
torch_lgamma

Lgamma
torch_addmm

Addmm
torch_pinverse

Pinverse
torch_mv

Mv
torch_sparse_coo_tensor

Sparse_coo_tensor
optim_adam

Implements Adam algorithm.
torch_cummin

Cummin
torch_abs

Abs
torch_asin

Asin
torch_as_strided

As_strided
torch_dequantize

Dequantize
torch_randint_like

Randint_like
torch_combinations

Combinations
torch_arccos

Arccos
torch_logspace

Logspace
torch_addcmul

Addcmul
torch_sqrt

Sqrt
torch_celu_

Celu_
torch_vdot

Vdot
torch_conv2d

Conv2d
torch_bincount

Bincount
torch_cartesian_prod

Cartesian_prod
torch_cummax

Cummax
torch_roll

Roll
torch_square

Square
torch_tril_indices

Tril_indices
torch_scalar_tensor

Scalar tensor
torch_empty_strided

Empty_strided
torch_le

Le
torch_deg2rad

Deg2rad
optim_rprop

Implements the resilient backpropagation algorithm.
torch_triu_indices

Triu_indices
torch_view_as_complex

View_as_complex
torch_randn

Randn
torch_isclose

Isclose
torch_memory_format

Memory format
torch_channel_shuffle

Channel_shuffle
torch_conv3d

Conv3d
torch_can_cast

Can_cast
torch_finfo

Floating point type info
torch_arange

Arange
torch_tril

Tril
torch_bitwise_xor

Bitwise_xor
torch_dstack

Dstack
torch_dtype

Torch data types
torch_lcm

Lcm
torch_ge

Ge
torch_bucketize

Bucketize
torch_lu_solve

Lu_solve
torch_split

Split
torch_det

Det
torch_eig

Eig
torch_acos

Acos
optim_rmsprop

RMSprop optimizer
torch_histc

Histc
torch_diag_embed

Diag_embed
torch_conv_transpose3d

Conv_transpose3d
torch_device

Create a Device object
torch_full

Full
torch_std_mean

Std_mean
torch_threshold_

Threshold_
torch_fix

Fix
torch_less_equal

Less_equal
torch_einsum

Einsum
torch_fft_ifft

Ifft
torch_fft_fft

Fft
torch_clone

Clone
torch_eq

Eq
torch_std

Std
torch_generator

Create a Generator object
torch_topk

Topk
torch_hypot

Hypot
torch_exp2

Exp2
torch_heaviside

Heaviside
torch_is_installed

Verifies if torch is installed
torch_i0

I0
torch_diag

Diag
torch_layout

Creates the corresponding layout
torch_mm

Mm
torch_log

Log
torch_flatten

Flatten
torch_full_like

Full_like
torch_rot90

Rot90
torch_lstsq

Lstsq
torch_logit

Logit
torch_is_nonzero

Is_nonzero
torch_hann_window

Hann_window
torch_flip

Flip
torch_matmul

Matmul
torch_isfinite

Isfinite
torch_kthvalue

Kthvalue
torch_hstack

Hstack
torch_promote_types

Promote_types
torch_logsumexp

Logsumexp
torch_log10

Log10
torch_matrix_exp

Matrix_exp
torch_nansum

Nansum
torch_logical_or

Logical_or
torch_install_path

A simple exported version of install_path. Returns the torch installation path.
torch_matrix_power

Matrix_power
torch_polygamma

Polygamma
torch_matrix_rank

Matrix_rank
torch_pixel_shuffle

Pixel_shuffle
torch_inverse

Inverse
torch_sum

Sum
torch_unsqueeze

Unsqueeze
torch_nextafter

Nextafter
torch_mode

Mode
torch_trace

Trace
torch_mean

Mean
torch_multinomial

Multinomial
torch_lu_unpack

Lu_unpack
torch_median

Median
torch_solve

Solve
torch_sin

Sin
torch_ones_like

Ones_like
torch_logical_xor

Logical_xor
torch_multiply

Multiply
torch_nonzero

Nonzero
torch_polar

Polar
torch_zeros_like

Zeros_like
torch_real

Real
torch_nanquantile

Nanquantile
torch_ormqr

Ormqr
torch_randperm

Randperm
torch_meshgrid

Meshgrid
torch_randn_like

Randn_like
torch_searchsorted

Searchsorted
torch_qscheme

Creates the corresponding Scheme object
torch_quantile

Quantile
torch_range

Range
torch_signbit

Signbit
torch_outer

Outer
torch_squeeze

Squeeze
torch_subtract

Subtract
torch_orgqr

Orgqr
torch_qr

Qr
torch_selu

Selu
torch_selu_

Selu_
torch_round

Round
torch_stack

Stack
torch_transpose

Transpose
torch_zeros

Zeros
torch_sort

Sort
torch_stft

Stft
torch_where

Where
torch_tensor

Converts R objects to a torch tensor
torch_rand_like

Rand_like
torch_renorm

Renorm
torch_randint

Randint
torch_sinh

Sinh
torch_sub

Sub
torch_slogdet

Slogdet
torch_true_divide

True_divide
torch_vander

Vander
torch_unbind

Unbind
torch_remainder

Remainder
torch_sgn

Sgn
torch_tensordot

Tensordot
torch_trapz

Trapz
torch_view_as_real

View_as_real
torch_triangular_solve

Triangular_solve
torch_unique_consecutive

Unique_consecutive
with_detect_anomaly

Context manager that enables anomaly detection for the autograd engine.
torch_vstack

Vstack
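
As a usage sketch for the dataloader() entry above (only functions listed in this reference are used; the tensor shapes and batch size are illustrative):

library(torch)

# Wrap tensors in a dataset, then batch over it with a dataloader
ds <- tensor_dataset(x = torch_randn(100, 3), y = torch_randn(100, 1))
dl <- dataloader(ds, batch_size = 25, shuffle = TRUE)

it <- dataloader_make_iter(dl)  # create an iterator from the dataloader
batch <- dataloader_next(it)    # fetch the first batch of 25 observations
batch$x$shape                   # 25 x 3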