SSDL (version 1.1)

Gradient_D_cpp_parallel: Gradient_D_cpp_parallel

Description

Parallel computation of the objective function and of its gradient with respect to the dictionary matrix.

Usage

Gradient_D_cpp_parallel(D, A, W, SK, ComputeGrad = TRUE)

Arguments

D

is an \(s \times K\) dictionary matrix.

A

is a \(K \times n\) code matrix.

W

is an \(m \times s\) frequency matrix whose rows are the frequency vectors.

SK

is a data sketch. It is a \(2m\)-dimensional vector.

ComputeGrad

indicates whether to compute the gradient or only the objective function value.
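For orientation, the argument dimensions above must be mutually consistent. A minimal sketch with hypothetical sizes (\(s = 10\), \(K = 8\), \(n = 50\), \(m = 64\); the values are illustrative, not defaults of the package):

```r
# Hypothetical sizes, chosen only to illustrate the dimension constraints
s <- 10; K <- 8; n <- 50; m <- 64
D  <- matrix(rnorm(s * K), nrow = s, ncol = K)       # dictionary, s x K
A  <- matrix(abs(rnorm(K * n)), nrow = K, ncol = n)  # code matrix, K x n
W  <- matrix(rnorm(m * s), nrow = m, ncol = s)       # frequency matrix, m x s
SK <- rnorm(2 * m)                                   # data sketch, 2m-vector

# The products W %*% D %*% A and the sketch length must line up:
stopifnot(ncol(W) == nrow(D),       # W and D conformable
          ncol(D) == nrow(A),       # D and A conformable
          length(SK) == 2 * m)      # sketch is 2m-dimensional
```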

Value

a list with components:

  • grad is the computed gradient

  • ObjFun is the objective function value

  • diff is the vector of differences between the data sketch and the decomposition sketch

Details

The objective function is given as \(\|SK - SK(D\cdot A)\|^2\), where \(SK\) is the data sketch, \(A = \{\alpha_1, \dots, \alpha_n\}\) is the code matrix and \(SK(D\cdot A)\) denotes the decomposition sketch, defined as

\(SK(D\cdot A) = \frac{1}{n}\left[\sum_{i=1}^n \cos(W\cdot D \cdot \alpha_i), \sum_{i=1}^n \sin(W\cdot D \cdot \alpha_i)\right].\)

The gradient of this objective function with respect to a dictionary element \(d_l \in R^{s}\) is given as

\(- 2 \left( \nabla_{d_l} SK(D\cdot A) \right)^{\top} \cdot r,\)

where \(r = SK - SK(D \cdot A)\),

\(\frac{\partial}{\partial d_l} SK^j(D\cdot A) = i \cdot \left( \frac{1}{n} \sum_{i = 1}^n A_{li}\cdot \prod_{k=1}^K SK^j(A_{ki}\cdot d_k) \right)\cdot w_j^{\top},\)

\(i\) denotes the imaginary unit, and \(SK^j(\cdot)\) is the \(j^{th}\) coordinate of the (complex-valued) sketch vector.
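The objective function above can be spelled out in a few lines of plain R. The helper names below are hypothetical (the package computes this in parallel C++ via `Gradient_D_cpp_parallel`); this is only a readable reference for what `ObjFun` and `diff` measure:

```r
# Illustrative pure-R computation of the decomposition sketch SK(D A)
# and the objective ||SK - SK(D A)||^2. Names are hypothetical, not
# part of the SSDL API.
decomposition_sketch <- function(D, A, W) {
  P <- W %*% D %*% A                      # m x n matrix of projections W D alpha_i
  # Average the cos and sin parts over the n columns: a 2m-vector
  c(rowMeans(cos(P)), rowMeans(sin(P)))
}

objective_value <- function(D, A, W, SK) {
  diff <- SK - decomposition_sketch(D, A, W)
  sum(diff^2)                             # squared Euclidean norm
}
```

By construction, feeding the decomposition sketch of \((D, A)\) back in as the data sketch yields an objective of zero, which is a convenient sanity check.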

Examples

# NOT RUN {
RcppParallel::setThreadOptions(numThreads = 2)
X = matrix(abs(rnorm(n = 1000)), ncol = 100, nrow = 10)
X_fbm = bigstatsr::as_FBM(X)$save()
W = chickn::GenerateFrequencies(Data = X_fbm, m = 64, N0 = ncol(X_fbm),
                                ncores = 1, niter= 3, nblocks = 2, sigma_start = 0.001)$W
SK = chickn::Sketch(X_fbm, W)
D = X_fbm[, sample(ncol(X_fbm), 10)]
A = sapply(sample(ncol(X_fbm), 5), function(i){
    as.vector(glmnet::glmnet(x = D, y = X_fbm[,i],
              lambda = 0, intercept = FALSE, lower.limits = 0)$beta)})
G = Gradient_D_cpp_parallel(D, A, W, SK)$grad
# }