EntropyProg(p, A = NULL, b = NULL, Aeq, beq, verbose = FALSE)
$$\tilde{p} \equiv \operatorname{argmin}_{Fx \leq f,\; Hx = h} \Big\{ \sum_{j=1}^{J} x_{j} \big( \ln(x_{j}) - \ln(p_{j}) \big) \Big\}$$
$$\ell(x, \lambda, \nu) \equiv x' \big( \ln(x) - \ln(p) \big) + \lambda'(Fx - f) + \nu'(Hx - h)$$
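For intuition, here is a minimal R sketch (an illustrative reimplementation, not the package's own solver; the helper name entropy_dual is hypothetical) for the equality-constraint case Hx = h. Setting the derivative of the Lagrangian with respect to x to zero gives x(\nu) = exp(ln(p) - 1 - H'\nu), so the problem can be solved numerically in the dual variable \nu alone:

entropy_dual <- function(p, H, h) {
  # Dual objective: substituting x(nu) into the Lagrangian shows that
  # maximizing the dual is equivalent to minimizing sum(x) + nu'h
  f <- function(nu) {
    x <- exp(log(p) - 1 - as.vector(crossprod(H, nu)))
    sum(x) + sum(nu * h)
  }
  # Gradient of the dual objective: h - H x(nu), zero when the views hold
  g <- function(nu) {
    x <- exp(log(p) - 1 - as.vector(crossprod(H, nu)))
    h - as.vector(H %*% x)
  }
  nu <- optim(rep(0, nrow(H)), f, g, method = "BFGS")$par
  exp(log(p) - 1 - as.vector(crossprod(H, nu)))   # revised probabilities
}

For example, entropy_dual(rep(0.01, 100), matrix(1, 1, 100), 1) imposes only the sum-to-one constraint and recovers the uniform prior.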
Returns a list with:
p_ : the revised probabilities from entropy pooling
optimizationPerformance : diagnostics from the optimizer (status, objective value, number of iterations, and the sum of the revised probabilities)
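A minimal sketch of a call (argument names follow the Usage line above; here the only equality view is that the probabilities sum to one, so the posterior equals the prior):

J   <- 100
p   <- rep(1/J, J)                    # uniform prior over J scenarios
Aeq <- matrix(1, nrow = 1, ncol = J)  # equality constraint: probabilities sum to 1
beq <- 1
res <- EntropyProg(p, Aeq = Aeq, beq = beq)
res$p_                                # revised probabilities (here equal to p)
res$optimizationPerformance           # solver diagnostics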
We use the Kullback-Leibler divergence, or relative entropy, dist(p, q) = sum over all scenarios t of p_t * ln(p_t / q_t). The solution is therefore defined as p* = argmin over p of dist(p, q), such that p satisfies the views. The views thus modify the prior in a coherent manner, minimizing distortion. We formulate the stress tests of the baseline scenarios as linear constraints on the yet-to-be-defined probabilities. Note that the numerical optimization acts on a very limited number of variables, equal to the number of views; it does not act directly on the very large number of variables of interest, namely the probabilities of the Monte Carlo scenarios. This feature guarantees the numerical feasibility of entropy optimization.
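As a sketch of how a view becomes a linear constraint, suppose x holds J Monte Carlo scenarios of a single risk factor (the variable names below are illustrative). Imposing a posterior mean of zero adds one row to Aeq, and the optimizer then works with a single dual variable for that view, regardless of J:

set.seed(1)
J <- 10000
x <- rnorm(J, mean = 0.1)        # scenarios simulated under the prior
p <- rep(1/J, J)                 # uniform prior probabilities
Aeq <- rbind(rep(1, J), x)       # rows: sum-to-one and E[x] constraints
beq <- c(1, 0)                   # view: posterior mean of x equals 0
res <- EntropyProg(p, Aeq = Aeq, beq = beq)
sum(res$p_ * x)                  # approximately 0: the view is satisfied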
Note that the new probabilities are generated in much the same way that the state-price density modifies the objective probabilities of pay-offs into risk-neutral probabilities in contingent-claims asset pricing.
Compute the posterior (i.e., a change of measure) with Entropy Pooling, as described in A. Meucci, "Fully Flexible Views: Theory and Practice" (Risk, 2008).