Weights Based on the Jensen-Shannon Divergence
Usage

weights_jsd(design, ...)

# S4 method for OneStageBasket
weights_jsd(
  design,
  n,
  lambda,
  epsilon = 1.25,
  tau = 0.5,
  logbase = 2,
  prune = FALSE,
  globalweight_fun = NULL,
  globalweight_params = list(),
  ...
)

# S4 method for TwoStageBasket
weights_jsd(design, n, n1, epsilon = 1.25, tau = 0, logbase = 2, ...)
Value

A matrix including the weights of all possible pairwise outcomes.
Arguments

design: An object of class Basket created by setupOneStageBasket or
  setupTwoStageBasket.

...: Further arguments.

n: The sample size per basket.

lambda: The posterior probability threshold. See details for more information.

epsilon: A tuning parameter that determines the amount of borrowing. See
  details for more information.

tau: A tuning parameter that determines how similar the baskets have to be
  for borrowing to occur. See details for more information.

logbase: A tuning parameter that determines which logarithm base is used to
  compute the Jensen-Shannon divergence. See details for more information.

prune: Whether baskets with a number of responses below the critical pooled
  value should be pruned before the final analysis. If this is TRUE, then
  lambda is also required; if globalweight_fun is not NULL, then
  globalweight_fun and globalweight_params are also used.

globalweight_fun: Which function should be used to calculate the global
  weights.

globalweight_params: A list of tuning parameters specific to
  globalweight_fun.

n1: The sample size per basket for the interim analysis in case of a
  two-stage design.
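The documented signatures above suggest that weights_jsd can also be called directly to inspect the resulting weight matrix. A hedged example follows; it assumes that setupTwoStageBasket accepts the same k and p0 arguments as setupOneStageBasket, which is not stated on this page.

design1 <- setupOneStageBasket(k = 3, p0 = 0.2)
# Weight matrix over all possible pairwise outcomes for n = 15 per basket
w1 <- weights_jsd(design1, n = 15, lambda = 0.99)

# Two-stage design with interim sample size n1 = 7 and final sample size n = 15
# (setupTwoStageBasket arguments are assumed to mirror setupOneStageBasket)
design2 <- setupTwoStageBasket(k = 3, p0 = 0.2)
w2 <- weights_jsd(design2, n = 15, n1 = 7)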
Functions

weights_jsd(OneStageBasket): Jensen-Shannon Divergence weights for a
  single-stage basket design.

weights_jsd(TwoStageBasket): Jensen-Shannon Divergence weights for a
  two-stage basket design.
Details

weights_jsd calculates the weights used for sharing information between
baskets based on the Jensen-Shannon divergence (JSD). The weight for two
baskets i and j is found as \((1 - JSD(i, j))^\varepsilon\), where
\(JSD(i, j)\) is the Jensen-Shannon divergence between the individual
posterior distributions of the response probabilities of baskets i and j.
This is identical to how the weights are calculated in weights_fujikawa;
however, when Fujikawa's weights are used, the prior information is also
shared.

A small value of epsilon results in stronger borrowing, even across baskets
with heterogeneous results. If epsilon is large, information is only borrowed
between baskets with similar results. If a weight is smaller than tau, it is
set to 0, which results in no borrowing.
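As an illustration of the formula above, the following is a minimal numerical sketch, not the package's implementation. It assumes Beta(1, 1) priors and a simple grid approximation of the posterior densities; the function name jsd_weight and its arguments are made up for this example.

jsd_weight <- function(r_i, r_j, n, epsilon = 1.25, tau = 0.5, logbase = 2,
                       shape1 = 1, shape2 = 1) {
  # Grid approximation of the posteriors Beta(shape1 + r, shape2 + n - r)
  x <- seq(0.001, 0.999, length.out = 1000)
  p <- dbeta(x, shape1 + r_i, shape2 + n - r_i)
  q <- dbeta(x, shape1 + r_j, shape2 + n - r_j)
  p <- p / sum(p)
  q <- q / sum(q)
  m <- (p + q) / 2
  kl <- function(a, b) sum(a * log(a / b, base = logbase))
  jsd <- 0.5 * kl(p, m) + 0.5 * kl(q, m)   # JSD lies in [0, 1] for logbase = 2
  w <- (1 - jsd)^epsilon
  if (w < tau) 0 else w                    # weights below tau are set to 0
}

jsd_weight(r_i = 3, r_j = 4, n = 15)   # similar results -> large weight
jsd_weight(r_i = 1, r_j = 9, n = 15)   # heterogeneous results -> small weight or 0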
If prune = TRUE, baskets with an observed number of responses smaller than
the critical pooled value are not borrowed from. The critical pooled value is
the smallest integer c for which all null hypotheses can be rejected if the
number of responses is exactly c for all baskets.
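The critical pooled value is computed by the package itself; as a rough illustration of the definition only, here is one possible reading, assuming Beta(1, 1) priors and full borrowing when all baskets show identical results, so that each basket's posterior is effectively based on the pooled data. The name crit_pool_sketch is made up and the rule shown is not the package's code.

crit_pool_sketch <- function(k, n, p0, lambda) {
  for (c in 0:n) {
    # Posterior probability of exceeding p0 when all k baskets are pooled
    # and each basket has exactly c responses out of n
    post_prob <- 1 - pbeta(p0, 1 + k * c, 1 + k * (n - c))
    if (post_prob >= lambda) return(c)
  }
  NA_integer_
}

crit_pool_sketch(k = 3, n = 15, p0 = 0.2, lambda = 0.99)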
The function is generally not called by the user but passed to another
function such as toer or pow to specify how the weights are calculated.
Examples

design <- setupOneStageBasket(k = 3, p0 = 0.2)
toer(design, n = 15, lambda = 0.99, weight_fun = weights_jsd)
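pow is mentioned above as another function that accepts weights_jsd. The following hedged sketch assumes that pow takes a vector of true response probabilities p1 and that tuning parameters can be passed through a weight_params list; neither argument is documented on this page.

# Hedged: p1 and weight_params are assumptions, not taken from this help page
pow(design, p1 = c(0.5, 0.2, 0.2), n = 15, lambda = 0.99,
    weight_fun = weights_jsd, weight_params = list(epsilon = 2, tau = 0.5))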