
torch

Installation

Run:

remotes::install_github("mlverse/torch")

The first time the package is loaded, additional software will be downloaded and installed.
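
If you want to check or trigger that extra step yourself, the package exports torch_is_installed() and install_torch() (both listed in the function index below); a minimal sketch:

library(torch)
# downloads and installs the additional software only if it is missing
if (!torch_is_installed()) install_torch()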

Example

Currently this package is only a proof of concept: you can create a torch tensor from an R object and convert it back from a torch tensor to an R object.

library(torch)
x <- array(runif(8), dim = c(2, 2, 2))
y <- torch_tensor(x, dtype = torch_float64())
y
#> torch_tensor 
#> (1,.,.) = 
#>   0.8687  0.0157
#>   0.4237  0.8971
#> 
#> (2,.,.) = 
#>   0.4021  0.5509
#>   0.3374  0.9034
#> [ CPUDoubleType{2,2,2} ]
identical(x, as_array(y))
#> [1] TRUE
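
The dtype argument matters for this exact round trip: without it, R doubles are stored in single precision (in later releases the default is torch_float32()), so the values come back rounded. A minimal sketch under that assumption:

z <- torch_tensor(x)       # no dtype given: stored as float32
identical(x, as_array(z))  # FALSE: precision was lost in the conversion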

Simple Autograd Example

In the following snippet we let torch, using its autograd feature, calculate the derivatives. Since y = w*x + b, we expect dy/dx = w = 2, dy/dw = x = 1, and dy/db = 1:

x <- torch_tensor(1, requires_grad = TRUE)
w <- torch_tensor(2, requires_grad = TRUE)
b <- torch_tensor(3, requires_grad = TRUE)
y <- w * x + b
y$backward()
x$grad
#> torch_tensor 
#>  2
#> [ CPUFloatType{1} ]
w$grad
#> torch_tensor 
#>  1
#> [ CPUFloatType{1} ]
b$grad
#> torch_tensor 
#>  1
#> [ CPUFloatType{1} ]
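
The same derivatives can be computed without mutating the $grad fields by using autograd_grad() (listed in the function index below). A minimal sketch, assuming the autograd_grad(outputs, inputs) interface:

x <- torch_tensor(1, requires_grad = TRUE)
w <- torch_tensor(2, requires_grad = TRUE)
b <- torch_tensor(3, requires_grad = TRUE)
y <- w * x + b
# returns a list holding dy/dx, dy/dw and dy/db
grads <- autograd_grad(y, list(x, w, b))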

Linear Regression

In the following example we fit a linear regression from scratch using torch's autograd.

Note that all methods ending in _ (e.g. sub_) modify their tensors in place. Below, the update step is wrapped in with_no_grad() so that it is not recorded by autograd, and the gradients are zeroed after each step because backward() accumulates them.
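
For example, sub_() subtracts from a tensor in place:

a <- torch_tensor(c(3, 2, 1))
a$sub_(1)  # modifies a itself; a now holds 2, 1, 0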

x <- torch_randn(100, 2)
y <- 0.1 + 0.5*x[,1] - 0.7*x[,2]

w <- torch_randn(2, 1, requires_grad = TRUE)
b <- torch_zeros(1, requires_grad = TRUE)

lr <- 0.5
for (i in 1:100) {
  y_hat <- torch_mm(x, w) + b
  # squeeze() drops the size-1 column so y - y_hat has shape {100}
  loss <- torch_mean((y - y_hat$squeeze())^2)

  loss$backward()

  with_no_grad({
    # gradient-descent update, done in place and outside the graph
    w$sub_(w$grad*lr)
    b$sub_(b$grad*lr)

    # zero the accumulated gradients before the next iteration
    w$grad$zero_()
    b$grad$zero_()
  })
}
print(w)
#> torch_tensor 
#>  0.5000
#> -0.7000
#> [ CPUFloatType{2,1} ]
print(b) 
#> torch_tensor 
#> 0.01 *
#> 10.0000
#> [ CPUFloatType{1} ]
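
The manual update can also be written with the package's built-in SGD optimizer, optim_sgd() (listed in the function index below). A sketch, assuming the optimizer exposes $zero_grad() and $step() as in later releases:

w <- torch_randn(2, 1, requires_grad = TRUE)
b <- torch_zeros(1, requires_grad = TRUE)
opt <- optim_sgd(list(w, b), lr = 0.5)

for (i in 1:100) {
  opt$zero_grad()               # reset accumulated gradients
  y_hat <- torch_mm(x, w) + b
  loss <- torch_mean((y - y_hat$squeeze())^2)
  loss$backward()
  opt$step()                    # apply the SGD update
}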

Contributing

No matter your current skill level, you can contribute to torch development. See the contributing guide for more information.

Package details

Install: install.packages('torch')
Monthly Downloads: 7,962
Version: 0.0.1
License: MIT + file LICENSE
Maintainer: Daniel Falbel
Last Published: August 6th, 2020

Functions in torch (0.0.1)

cuda_current_device

Returns the index of a currently selected device.
autograd_set_grad_mode

Set grad mode
autograd_grad

Computes and returns the sum of gradients of outputs w.r.t. the inputs.
dataset

An abstract class representing a Dataset.
is_torch_qscheme

Checks if an object is a QScheme
dataloader_next

Get the next element of a dataloader iterator
dataloader_make_iter

Creates an iterator from a DataLoader
is_torch_memory_format

Check if an object is a memory format
nn_identity

Identity module
nn_init_calculate_gain

Calculate gain
nn_init_sparse_

Sparse initialization
nn_conv_transpose2d

ConvTranspose2D module
nn_conv1d

Conv1D module
nn_conv2d

Conv2D module
torch_set_default_dtype

Gets and sets the default floating point dtype.
nn_init_trunc_normal_

Truncated normal initialization
autograd_function

Records operation history and defines formulas for differentiating ops.
nn_bce_loss

Binary cross entropy loss
nn_batch_norm2d

BatchNorm2D
autograd_backward

Computes the sum of gradients of given tensors w.r.t. graph leaves.
is_torch_dtype

Check if object is a torch data type
is_torch_layout

Check if an object is a torch layout.
nn_adaptive_log_softmax_with_loss

AdaptiveLogSoftmaxWithLoss module
nn_conv3d

Conv3D module
AutogradContext

Class representing the context.
as_array

Converts to array
install_torch

Install Torch
is_dataloader

Checks if the object is a dataloader
nn_init_constant_

Constant initialization
nn_conv_transpose1d

ConvTranspose1D module
nn_dropout3d

Dropout3D module
nn_batch_norm1d

BatchNorm1D module
nn_dropout2d

Dropout2D module
nn_init_orthogonal_

Orthogonal initialization
nn_init_ones_

Ones initialization
nn_init_kaiming_uniform_

Kaiming uniform initialization
nn_init_dirac_

Dirac initialization
nn_init_normal_

Normal initialization
nn_log_sigmoid

LogSigmoid module
nn_module

Base class for all neural network modules.
nn_log_softmax

LogSoftmax module
nn_sigmoid

Sigmoid module
nn_tanhshrink

Tanhshrink module
nn_module_list

Holds submodules in a list.
nn_threshold

Threshold module
nn_softmax

Softmax module
nn_softsign

Softsign module
nn_max_pool1d

MaxPool1D module
nn_tanh

Tanh module
nn_max_pool2d

MaxPool2D module
nn_cross_entropy_loss

CrossEntropyLoss module
nn_selu

SELU module
nnf_adaptive_avg_pool1d

Adaptive_avg_pool1d
nn_sequential

A sequential container
nnf_adaptive_avg_pool2d

Adaptive_avg_pool2d
nnf_conv2d

Conv2d
nnf_conv3d

Conv3d
nnf_conv1d

Conv1d
nn_dropout

Dropout module
nnf_conv_tbc

Conv_tbc
nn_utils_rnn_pad_packed_sequence

Pads a packed batch of variable length sequences.
nn_hardswish

Hardswish module
nn_rnn

RNN module
nn_leaky_relu

LeakyReLU module
nn_hardtanh

Hardtanh module
nn_linear

Linear module
nn_conv_transpose3d

ConvTranspose3D module
nn_gelu

GELU module
nn_rrelu

RReLU module
nn_utils_rnn_pack_padded_sequence

Packs a Tensor containing padded sequences of variable length.
nn_utils_rnn_pack_sequence

Packs a list of variable length Tensors
nnf_adaptive_max_pool2d

Adaptive_max_pool2d
nn_glu

GLU module
nnf_gelu

Gelu
nnf_cosine_embedding_loss

Cosine_embedding_loss
nnf_conv_transpose3d

Conv_transpose3d
nnf_batch_norm

Batch_norm
nnf_avg_pool3d

Avg_pool3d
nnf_adaptive_max_pool3d

Adaptive_max_pool3d
nnf_elu

Elu
nnf_embedding

Embedding
nnf_hardtanh

Hardtanh
nnf_ctc_loss

Ctc_loss
nnf_grid_sample

Grid_sample
nnf_glu

Glu
nn_init_kaiming_normal_

Kaiming normal initialization
nn_init_eye_

Eye initialization
nn_multihead_attention

MultiHead attention
nnf_dropout

Dropout
nnf_dropout2d

Dropout2d
nnf_group_norm

Group_norm
nnf_linear

Linear
nn_utils_rnn_pad_sequence

Pad a list of variable length Tensors with padding_value
nn_prelu

PReLU module
nnf_dropout3d

Dropout3d
nnf_bilinear

Bilinear
nn_softplus

Softplus module
nnf_adaptive_avg_pool3d

Adaptive_avg_pool3d
nn_softshrink

Softshrink module
nnf_gumbel_softmax

Gumbel_softmax
nnf_kl_div

Kl_div
nnf_hardshrink

Hardshrink
nnf_l1_loss

L1_loss
nnf_conv_transpose2d

Conv_transpose2d
nnf_binary_cross_entropy

Binary_cross_entropy
nnf_conv_transpose1d

Conv_transpose1d
nnf_adaptive_max_pool1d

Adaptive_max_pool1d
nnf_lp_pool1d

Lp_pool1d
nnf_hinge_embedding_loss

Hinge_embedding_loss
nnf_embedding_bag

Embedding_bag
nnf_avg_pool2d

Avg_pool2d
nnf_avg_pool1d

Avg_pool1d
nnf_max_pool3d

Max_pool3d
nnf_local_response_norm

Local_response_norm
nnf_pairwise_distance

Pairwise_distance
nnf_max_pool2d

Max_pool2d
nnf_fractional_max_pool2d

Fractional_max_pool2d
nnf_fractional_max_pool3d

Fractional_max_pool3d
nnf_fold

Fold
nnf_max_unpool1d

Max_unpool1d
nnf_lp_pool2d

Lp_pool2d
nnf_max_unpool2d

Max_unpool2d
cuda_is_available

Returns a bool indicating if CUDA is currently available.
dataloader

Data loader. Combines a dataset and a sampler, and provides single- or multi-process iterators over the dataset.
nnf_layer_norm

Layer_norm
nnf_margin_ranking_loss

Margin_ranking_loss
nnf_leaky_relu

Leaky_relu
nnf_instance_norm

Instance_norm
nnf_interpolate

Interpolate
nnf_log_softmax

Log_softmax
nnf_multilabel_margin_loss

Multilabel_margin_loss
nnf_multi_head_attention_forward

Multi head attention forward
nnf_multi_margin_loss

Multi_margin_loss
nnf_logsigmoid

Logsigmoid
enumerate

Enumerate an iterator
nnf_prelu

Prelu
nnf_nll_loss

Nll_loss
nnf_max_pool1d

Max_pool1d
nnf_normalize

Normalize
enumerate.dataloader

Enumerate an iterator
nn_bilinear

Bilinear module
nnf_pdist

Pdist
nnf_softplus

Softplus
nnf_softmin

Softmin
nnf_tanhshrink

Tanhshrink
nn_elu

ELU module
nn_celu

CELU module
nn_embedding

Embedding module
nn_hardshrink

Hardshrink module
nnf_softmax

Softmax
nnf_pad

Pad
nnf_one_hot

One_hot
nnf_soft_margin_loss

Soft_margin_loss
optim_sgd

SGD optimizer
nn_hardsigmoid

Hardsigmoid module
nnf_threshold

Threshold
tensor_dataset

Dataset wrapping tensors.
torch_addcdiv

Addcdiv
torch_addbmm

Addbmm
nnf_multilabel_soft_margin_loss

Multilabel_soft_margin_loss
torch_addr

Addr
torch_allclose

Allclose
torch_angle

Angle
torch_addmv

Addmv
torch_bernoulli

Bernoulli
torch_bartlett_window

Bartlett_window
torch_can_cast

Can_cast
nnf_relu

Relu
optim_adam

Implements Adam algorithm.
nn_init_uniform_

Uniform initialization
torch_adaptive_avg_pool1d

Adaptive_avg_pool1d
torch_abs

Abs
optim_required

Dummy value indicating a required value.
nnf_selu

Selu
nnf_smooth_l1_loss

Smooth_l1_loss
torch_bitwise_not

Bitwise_not
torch_as_strided

As_strided
nnf_softsign

Softsign
torch_acos

Acos
torch_bincount

Bincount
torch_bitwise_and

Bitwise_and
nn_softmax2d

Softmax2d module
nn_relu

ReLU module
nn_init_xavier_uniform_

Xavier uniform initialization
nn_init_zeros_

Zeros initialization
nn_init_xavier_normal_

Xavier normal initialization
nn_relu6

ReLU6 module
torch_add

Add
nn_softmin

Softmin module
torch_arange

Arange
torch_avg_pool1d

Avg_pool1d
torch_argmax

Argmax
torch_baddbmm

Baddbmm
torch_bitwise_or

Bitwise_or
torch_cummin

Cummin
torch_cartesian_prod

Cartesian_prod
torch_broadcast_tensors

Broadcast_tensors
torch_conv3d

Conv3d
torch_atan

Atan
torch_cdist

Cdist
torch_cat

Cat
torch_atan2

Atan2
torch_chunk

Chunk
torch_clamp

Clamp
torch_cholesky

Cholesky
torch_chain_matmul

Chain_matmul
torch_cumprod

Cumprod
nnf_affine_grid

Affine_grid
torch_cummax

Cummax
torch_cross

Cross
nnf_alpha_dropout

Alpha_dropout
torch_diagonal

Diagonal
torch_conv_tbc

Conv_tbc
torch_conv_transpose3d

Conv_transpose3d
torch_combinations

Combinations
nnf_celu

Celu
nnf_cosine_similarity

Cosine_similarity
torch_conv1d

Conv1d
torch_conv2d

Conv2d
torch_dist

Dist
torch_digamma

Digamma
torch_conj

Conj
torch_eig

Eig
torch_cos

Cos
nnf_cross_entropy

Cross_entropy
torch_cosh

Cosh
nnf_hardsigmoid

Hardsigmoid
torch_conv_transpose1d

Conv_transpose1d
torch_div

Div
torch_cosine_similarity

Cosine_similarity
torch_cumsum

Cumsum
torch_det

Det
torch_device

Create a Device object
torch_empty

Empty
torch_conv_transpose2d

Conv_transpose2d
torch_equal

Equal
torch_asin

Asin
nnf_softshrink

Softshrink
nnf_binary_cross_entropy_with_logits

Binary_cross_entropy_with_logits
torch_bmm

Bmm
nnf_hardswish

Hardswish
torch_diag

Diag
nnf_max_unpool3d

Max_unpool3d
torch_exp

Exp
torch_erf

Erf
torch_expm1

Expm1
torch_empty_like

Empty_like
torch_eye

Eye
torch_diagflat

Diagflat
torch_diag_embed

Diag_embed
torch_dot

Dot
torch_gather

Gather
torch_einsum

Einsum
torch_ger

Ger
torch_hamming_window

Hamming_window
torch_fft

Fft
torch_isinf

Isinf
torch_hann_window

Hann_window
torch_full

Full
torch_gt

Gt
torch_floor

Floor
torch_floor_divide

Floor_divide
nnf_mse_loss

Mse_loss
torch_isnan

Isnan
torch_fmod

Fmod
torch_matrix_rank

Matrix_rank
torch_logical_not

Logical_not
torch_matrix_power

Matrix_power
torch_mv

Mv
torch_logical_and

Logical_and
torch_randn_like

Randn_like
torch_mvlgamma

Mvlgamma
torch_randn

Randn
torch_frac

Frac
torch_dtype

Torch data types
torch_le

Le
torch_real

Real
torch_erfc

Erfc
torch_reciprocal

Reciprocal
torch_is_complex

Is_complex
torch_full_like

Full_like
torch_imag

Imag
torch_ge

Ge
nnf_pixel_shuffle

Pixel_shuffle
torch_log2

Log2
torch_lerp

Lerp
torch_logdet

Logdet
torch_lt

Lt
torch_lstsq

Lstsq
torch_lgamma

Lgamma
torch_index_select

Index_select
torch_median

Median
torch_memory_format

Memory format
torch_sparse_coo_tensor

Sparse_coo_tensor
torch_std

Std
torch_split

Split
torch_is_floating_point

Is_floating_point
torch_std_mean

Std_mean
torch_neg

Neg
nnf_poisson_nll_loss

Poisson_nll_loss
torch_kthvalue

Kthvalue
nnf_rrelu

Rrelu
nnf_relu6

Relu6
torch_nonzero

Nonzero
nnf_triplet_margin_loss

Triplet_margin_loss
torch_layout

Creates the corresponding layout
torch_erfinv

Erfinv
torch_inverse

Inverse
torch_generator

Create a Generator object
torch_geqrf

Geqrf
nnf_unfold

Unfold
torch_addcmul

Addcmul
torch_addmm

Addmm
torch_ormqr

Ormqr
torch_orgqr

Orgqr
torch_argmin

Argmin
torch_rand

Rand
torch_triangular_solve

Triangular_solve
torch_trapz

Trapz
torch_var_mean

Var_mean
torch_log10

Log10
torch_isfinite

Isfinite
torch_irfft

Irfft
torch_is_installed

Verifies if torch is installed
torch_repeat_interleave

Repeat_interleave
torch_rand_like

Rand_like
torch_reshape

Reshape
torch_log1p

Log1p
torch_logical_xor

Logical_xor
torch_logical_or

Logical_or
torch_argsort

Argsort
torch_mul

Mul
torch_normal

Normal
torch_lu_solve

Lu_solve
torch_norm

Norm
torch_lu

LU
torch_bitwise_xor

Bitwise_xor
torch_multinomial

Multinomial
torch_prod

Prod
torch_linspace

Linspace
torch_masked_select

Masked_select
torch_logspace

Logspace
torch_selu_

Selu_
torch_sigmoid

Sigmoid
torch_solve

Solve
torch_max

Max
torch_matmul

Matmul
torch_blackman_window

Blackman_window
torch_celu_

Celu_
torch_ceil

Ceil
torch_cholesky_inverse

Cholesky_inverse
torch_logsumexp

Logsumexp
torch_mode

Mode
torch_mm

Mm
torch_cholesky_solve

Cholesky_solve
torch_where

Where
torch_flatten

Flatten
torch_eq

Eq
torch_empty_strided

Empty_strided
torch_promote_types

Promote_types
torch_pdist

Pdist
torch_ifft

Ifft
torch_flip

Flip
torch_histc

Histc
torch_pinverse

Pinverse
torch_qr

Qr
torch_quantize_per_channel

Quantize_per_channel
torch_log

Log
torch_load

Loads a saved object
torch_mean

Mean
torch_ones

Ones
torch_qscheme

Creates the corresponding Scheme object
torch_reduction

Creates the reduction object
torch_quantize_per_tensor

Quantize_per_tensor
torch_pixel_shuffle

Pixel_shuffle
torch_ones_like

Ones_like
torch_randperm

Randperm
torch_range

Range
torch_min

Min
torch_meshgrid

Meshgrid
torch_narrow

Narrow
torch_roll

Roll
torch_relu_

Relu_
torch_ne

Ne
torch_polygamma

Polygamma
torch_sinh

Sinh
torch_rot90

Rot90
torch_randint

Randint
torch_poisson

Poisson
torch_randint_like

Randint_like
torch_rsqrt

Rsqrt
torch_save

Saves an object to a disk file.
torch_remainder

Remainder
torch_pow

Pow
torch_slogdet

Slogdet
torch_renorm

Renorm
torch_t

T
torch_threshold_

Threshold_
torch_sort

Sort
torch_topk

Topk
torch_take

Take
torch_sum

Sum
torch_stft

Stft
torch_trunc

Trunc
torch_true_divide

True_divide
torch_result_type

Result_type
torch_trace

Trace
torch_triu

Triu
torch_transpose

Transpose
torch_triu_indices

Triu_indices
torch_round

Round
torch_rrelu_

Rrelu_
torch_sqrt

Sqrt
torch_squeeze

Squeeze
torch_rfft

Rfft
torch_zeros

Zeros
torch_unbind

Unbind
torch_zeros_like

Zeros_like
torch_unique_consecutive

Unique_consecutive
torch_square

Square
with_enable_grad

Enable grad
torch_stack

Stack
torch_sin

Sin
torch_sign

Sign
torch_tensor

Converts R objects to a torch tensor
with_no_grad

Temporarily modify gradient recording.
torch_tril

Tril
torch_tensordot

Tensordot
torch_tril_indices

Tril_indices
torch_svd

Svd
torch_tan

Tan
torch_symeig

Symeig
torch_unsqueeze

Unsqueeze
torch_tanh

Tanh
torch_var

Var
cuda_device_count

Returns the number of GPUs available.