According to equation (27) in \insertCite{bryzgalova2023bayesian;textual}{BayesianFactorZoo}, we learn that
$$\frac{E_{\pi} [ SR^2_f \mid \gamma, \sigma^2 ] }{E_{\pi} [ SR^2_{\alpha} \mid \sigma^2] } = \frac{\psi \sum^K_{k=1} r(\gamma_k) \tilde{\rho}^\top_k \tilde{\rho}_k }{N}, $$
where \(SR^2_f\) and \(SR^2_{\alpha}\) denote the squared Sharpe ratios attainable from all factors (\(f_t\))
and from the pricing errors (\(\alpha\)), respectively, and \(E_{\pi}\) denotes the expectation under the prior.
The prior \(\pi (\omega)\) encodes the belief about the sparsity of the true model through
\(\pi (\gamma_j = 1 \mid \omega_j) = \omega_j\), with \(\omega_j \sim \mathrm{Beta}(a_\omega, b_\omega)\).
Integrating \(\gamma_j\) out of \(E_{\pi} [ SR^2_f \mid \gamma, \sigma^2 ]\) yields
$$\frac{E_{\pi} [ SR^2_f \mid \sigma^2 ] }{E_{\pi} [ SR^2_{\alpha} \mid \sigma^2 ] } \approx \frac{a_\omega}{a_\omega+b_\omega} \psi \frac{ \sum^K_{k=1} \tilde{\rho}^\top_k \tilde{\rho}_k }{N}, \quad \text{as } r \to 0 .$$
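The factor \(\frac{a_\omega}{a_\omega+b_\omega}\) in this approximation is simply the prior inclusion probability of each factor: since \(\gamma_j \mid \omega_j \sim \mathrm{Bernoulli}(\omega_j)\) and \(\omega_j \sim \mathrm{Beta}(a_\omega, b_\omega)\),
$$E_{\pi} [ \gamma_j ] = E [ \omega_j ] = \frac{a_\omega}{a_\omega+b_\omega} .$$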
Since the squared Sharpe ratio of all test assets, \(SR^2_R\), decomposes into the factor and pricing-error
components (i.e., \(SR^2_R = SR^2_f + SR^2_{\alpha}\)), solving for \(SR^2_f\) gives
$$ E_{\pi} [ SR^2_f \mid \sigma^2 ] \approx \frac{\frac{a_\omega}{a_\omega+b_\omega} \psi \frac{ \sum^K_{k=1} \tilde{\rho}^\top_k \tilde{\rho}_k }{N}}{1 + \frac{a_\omega}{a_\omega+b_\omega} \psi \frac{ \sum^K_{k=1} \tilde{\rho}^\top_k \tilde{\rho}_k }{N}} SR^2_R.$$
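The last display follows by combining the decomposition with the approximated ratio. Writing \(x = \frac{a_\omega}{a_\omega+b_\omega} \psi \frac{ \sum^K_{k=1} \tilde{\rho}^\top_k \tilde{\rho}_k }{N}\) for that ratio,
$$E_{\pi} [ SR^2_f \mid \sigma^2 ] \approx x \, E_{\pi} [ SR^2_{\alpha} \mid \sigma^2 ] = x \left( SR^2_R - E_{\pi} [ SR^2_f \mid \sigma^2 ] \right) \quad \Longrightarrow \quad E_{\pi} [ SR^2_f \mid \sigma^2 ] \approx \frac{x}{1+x} \, SR^2_R .$$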
We define the prior Sharpe ratio implied by the factor models as \(\sqrt{E_{\pi} [ SR^2_f \mid \sigma^2 ]}\).
Given \(a_\omega\), \(b_\omega\), \(\frac{ \sum^K_{k=1} \tilde{\rho}^\top_k \tilde{\rho}_k }{N}\), and the observed
Sharpe ratio of the test assets, there is a one-to-one mapping between \(\psi\) and \(\sqrt{E_{\pi} [ SR^2_f \mid \sigma^2 ]}\).
To convert \(\psi\) into the implied prior Sharpe ratio, the user should supply only psi0;
conversely, to convert a prior Sharpe ratio into \(\psi\), only priorSR should be supplied.
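The mapping in both directions can be sketched as follows. This is a minimal illustration, not the package's implementation: the function names (`psi_to_prior_sr`, `prior_sr_to_psi`) and the argument `rho_term` (standing in for \(\frac{ \sum^K_{k=1} \tilde{\rho}^\top_k \tilde{\rho}_k }{N}\)) are hypothetical, and the numeric inputs are made up.

```python
import math

def psi_to_prior_sr(psi, sr_r, rho_term, aw=1.0, bw=1.0):
    """Map psi to the implied prior Sharpe ratio sqrt(E[SR_f^2 | sigma^2]).

    sr_r     : observed Sharpe ratio of the test assets (sqrt of SR_R^2)
    rho_term : stands in for sum_k rho_k' rho_k / N (hypothetical name)
    aw, bw   : Beta(a_w, b_w) hyperparameters of the sparsity prior
    """
    x = aw / (aw + bw) * psi * rho_term      # ratio E[SR_f^2] / E[SR_alpha^2]
    return math.sqrt(x / (1.0 + x) * sr_r**2)

def prior_sr_to_psi(prior_sr, sr_r, rho_term, aw=1.0, bw=1.0):
    """Inverse map: recover psi from a target prior Sharpe ratio."""
    sr2_f = prior_sr**2
    x = sr2_f / (sr_r**2 - sr2_f)            # invert SR_f^2 = x/(1+x) * SR_R^2
    return x * (aw + bw) / aw / rho_term

# Round trip with made-up inputs: the map is monotone in psi, hence one-to-one.
sr = psi_to_prior_sr(5.0, sr_r=0.8, rho_term=0.02)
psi = prior_sr_to_psi(sr, sr_r=0.8, rho_term=0.02)
```

Because \(x\) is strictly increasing in \(\psi\) and \(x/(1+x)\) is strictly increasing in \(x\), the round trip recovers the original \(\psi\), which is the one-to-one property the text describes.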