Logistic regression is a classification method. Given binary
data $\{y_i\}$ and the $p$-dimensional predictor variables
$\{x_i\}$, one wants to predict whether a future data point $y^*$
observed at the predictor $x^*$ will be zero or one. Logistic
regression stipulates that the statistical model for observing a
success ($y^* = 1$) or failure ($y^* = 0$) is governed by $$P(y^* = 1 | x^*, \beta) = (1 + \exp(-x^* \beta))^{-1}.$$
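The model probability above can be sketched as a short function; this is an illustrative implementation, not code from the source, and the function name `success_prob` is an assumption:

```python
import math

def success_prob(x_star, beta):
    """Logistic model: P(y* = 1 | x*, beta) = 1 / (1 + exp(-x* . beta)).

    x_star and beta are sequences of length p; their inner product
    gives the linear predictor fed through the logistic function.
    """
    z = sum(xj * bj for xj, bj in zip(x_star, beta))
    return 1.0 / (1.0 + math.exp(-z))
```

With $\beta = 0$ the model is maximally uncertain, so `success_prob([1.0, 2.0], [0.0, 0.0])` returns $0.5$.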
Instead of representing the data as a collection of binary outcomes, one
may record the average response $y_i$ at each unique $x_i$,
together with the total number $n_i$ of observations at $x_i$. We
follow this encoding of the data.
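This aggregated encoding can be sketched as follows; the helper `aggregate` and its return layout are illustrative assumptions, not the source's implementation:

```python
from collections import defaultdict

def aggregate(xs, ys):
    """Group binary outcomes by unique predictor value.

    Returns a dict mapping each unique x to (average response, n),
    i.e. the mean of the 0/1 outcomes observed at x and the count
    n of observations at x.
    """
    totals = defaultdict(lambda: [0.0, 0])  # x -> [sum of y, n]
    for x, y in zip(xs, ys):
        totals[x][0] += y
        totals[x][1] += 1
    return {x: (s / n, n) for x, (s, n) in totals.items()}
```

For example, three binary observations at two distinct predictor values collapse to two records, each carrying an average response and a count.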
Polson and Scott suggest placing a Jeffreys beta prior
Be(1/2, 1/2) on
$$m(\beta) := P(y_0 = 1 | x_0, \beta) = (1 + \exp(-x_0 \beta))^{-1},$$
which induces a Z-distribution prior on $\beta$,
$$p(\beta) \propto \exp(x_0 \beta / 2) / (1 + \exp(x_0 \beta)).$$
One may interpret this as "prior" data for which the average response at
$x_0$ is $1/2$, based upon a "single" observation. The
default value is $x_0 = \bar{x}$, the mean of the predictors $\{x_i\}$.
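The induced prior can be verified numerically by a change of variables, assuming a scalar predictor $x_0$ for illustration: transforming the Be(1/2, 1/2) density on $m$ by the Jacobian $|dm/d\beta| = x_0\, m(1-m)$ recovers the Z-distribution form. Both function names below are assumptions made for this sketch:

```python
import math

def z_prior(beta, x0=1.0):
    """Closed-form induced prior for scalar x0:
    p(beta) = (x0 / pi) * exp(x0*beta/2) / (1 + exp(x0*beta))."""
    return (x0 / math.pi) * math.exp(0.5 * x0 * beta) / (1.0 + math.exp(x0 * beta))

def z_prior_by_change_of_vars(beta, x0=1.0):
    """Same density obtained by transforming the Jeffreys Be(1/2,1/2)
    prior on m = P(y0 = 1 | x0, beta)."""
    m = 1.0 / (1.0 + math.exp(-x0 * beta))
    m_density = 1.0 / (math.pi * math.sqrt(m * (1.0 - m)))  # Be(1/2,1/2) pdf
    jacobian = x0 * m * (1.0 - m)                            # |dm/dbeta|
    return m_density * jacobian
```

The two routes agree pointwise, confirming that the Jeffreys prior on the success probability at $x_0$ corresponds to the Z-distribution density on $\beta$.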