Gaussian differential privacy (Dong et al., 2022) arises as the trade-off function corresponding to distinguishing between two Normal distributions with unit variance and means differing by \(\mu\).
Without loss of generality, the trade-off function is therefore,
$$G_\mu := T\left(N(0, 1), N(\mu, 1)\right) \quad\text{for}\quad \mu \ge 0.$$
This leads to,
$$G_\mu(\alpha) = \Phi\left(\Phi^{-1}(1-\alpha)-\mu\right)$$
where \(\Phi\) is the standard Normal cumulative distribution function.
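This trade-off function is straightforward to evaluate numerically. A minimal sketch in Python using SciPy (the helper name `gdp_tradeoff` is ours, for illustration only):

```python
from scipy.stats import norm

def gdp_tradeoff(alpha, mu):
    """Evaluate G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return norm.cdf(norm.ppf(1 - alpha) - mu)

# mu = 0 gives the identity trade-off alpha -> 1 - alpha: the two
# distributions coincide, so no test does better than random guessing.
print(gdp_tradeoff(0.05, 0.0))  # 0.95
# Larger mu makes the distributions easier to distinguish, so the
# attainable Type II error at a fixed level drops.
print(gdp_tradeoff(0.05, 1.0))  # ≈ 0.74
```

Smaller \(\mu\) therefore means a higher trade-off curve and stronger privacy.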
The most natural way to satisfy \(\mu\)-GDP is to construct the randomised algorithm by adding Gaussian noise to the statistic being released.
Theorem 1 in Dong et al. (2022) identifies the correct variance of that noise for a given sensitivity of the statistic to be released.
Let \(\theta(S)\) be the statistic of the data \(S\) which is to be released. Then the Gaussian mechanism is defined to be
$$M(S) := \theta(S) + \eta$$
where \(\eta \sim N(0, \Delta(\theta)^2 / \mu^2)\) and,
$$\Delta(\theta) := \sup_{S, S'} |\theta(S) - \theta(S')|$$
the supremum being taken over neighbouring data sets.
The randomised algorithm \(M(\cdot)\) is then a \(\mu\)-GDP release of \(\theta(S)\).
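As a concrete illustration, here is a minimal sketch of the Gaussian mechanism in Python (NumPy). It assumes the statistic is the mean of \(n\) records bounded in \([0, 1]\) under the replace-one-record neighbouring relation, so that \(\Delta(\theta) = 1/n\); the function name `gaussian_mechanism` is ours:

```python
import numpy as np

def gaussian_mechanism(theta, sensitivity, mu, rng=None):
    """Release theta(S) + eta with eta ~ N(0, sensitivity^2 / mu^2),
    which is mu-GDP by Theorem 1 of Dong et al. (2022)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = sensitivity / mu
    return theta + rng.normal(0.0, sigma)

# Example: privately release the mean of n records in [0, 1].
# Replacing one record moves the mean by at most 1/n, so Delta(theta) = 1/n.
rng = np.random.default_rng(0)
S = rng.uniform(size=1000)
release = gaussian_mechanism(S.mean(), sensitivity=1.0 / len(S), mu=1.0, rng=rng)
```

Note that the noise scale \(\Delta(\theta)/\mu\) shrinks as the data set grows, since a single record then has less influence on the mean.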
More generally, any mechanism \(M(\cdot)\) satisfies \(\mu\)-GDP if,
$$T\left(M(S), M(S')\right) \ge G_\mu$$
for all neighbouring data sets \(S, S'\).
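To see why the Gaussian mechanism meets this bar: its outputs on neighbouring data sets are \(N(\theta(S), \sigma^2)\) and \(N(\theta(S'), \sigma^2)\) with \(\sigma = \Delta(\theta)/\mu\), whose trade-off function is \(G_{|\theta(S)-\theta(S')|/\sigma}\); since \(|\theta(S)-\theta(S')| \le \Delta(\theta)\) and \(G_\mu\) decreases pointwise in \(\mu\), this dominates \(G_\mu\). A numerical sketch of that comparison via the Neyman–Pearson optimal test (assuming, for simplicity, \(\theta(S') > \theta(S)\)):

```python
from scipy.stats import norm

def type2_error(alpha, theta, theta_prime, sigma):
    """Type II error of the optimal level-alpha test between
    N(theta, sigma^2) and N(theta_prime, sigma^2), theta_prime > theta:
    reject when the output exceeds theta + sigma * Phi^{-1}(1 - alpha)."""
    threshold = theta + sigma * norm.ppf(1 - alpha)
    return norm.cdf(threshold, loc=theta_prime, scale=sigma)

def G(alpha, mu):
    """G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return norm.cdf(norm.ppf(1 - alpha) - mu)

# With Delta = 1 and sigma = 1 (so mu = 1), a pair of data sets whose
# statistics differ by only 0.5 is strictly harder to distinguish:
# the attainable Type II error sits above G_1.
alpha = 0.1
print(type2_error(alpha, theta=0.0, theta_prime=0.5, sigma=1.0) >= G(alpha, mu=1.0))  # True
```

The worst case, \(|\theta(S) - \theta(S')| = \Delta(\theta)\), attains \(G_\mu\) exactly, so the guarantee is tight.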
In particular, one can seek the smallest \(\mu\) such that a given collection of trade-off functions all dominate \(G_\mu\), using est_gdp().
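The underlying calculation admits a closed form: \(f \ge G_\mu\) holds if and only if \(\Phi^{-1}(1-\alpha) - \Phi^{-1}(f(\alpha)) \le \mu\) for all \(\alpha\), so the minimal \(\mu\) is the supremum of the left-hand side. A sketch of that computation (an illustration of the principle, not the est_gdp() API):

```python
import numpy as np
from scipy.stats import norm

def minimal_mu(tradeoff, grid=None):
    """Smallest mu with tradeoff(alpha) >= G_mu(alpha) for all alpha.
    Rearranging G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu) gives
    mu = sup_alpha [Phi^{-1}(1 - alpha) - Phi^{-1}(tradeoff(alpha))],
    approximated here on a finite grid of alpha values."""
    grid = np.linspace(1e-6, 1 - 1e-6, 9999) if grid is None else grid
    return float(np.max(norm.ppf(1 - grid) - norm.ppf(tradeoff(grid))))

# Sanity check: feeding in the trade-off function G_2 itself recovers mu = 2,
# since the supremand is identically 2 for every alpha.
f = lambda a: norm.cdf(norm.ppf(1 - a) - 2.0)
print(round(minimal_mu(f), 6))  # 2.0
```

For trade-off functions that are not of the form \(G_\mu\), the grid maximum picks out the \(\alpha\) at which the curve comes closest to violating the Gaussian bound.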