fdp (version 1.0.0)

gdp: Gaussian differential privacy trade-off function

Description

Constructs the trade-off function corresponding to \(\mu\)-Gaussian differential privacy (GDP). This framework, introduced by Dong et al. (2022), provides a natural privacy guarantee for mechanisms based on Gaussian noise, typically offering tighter composition properties and a better privacy-utility trade-off than classical \((\varepsilon, \delta)\)-differential privacy.

Usage

gdp(mu = 1)

Value

A function of class c("fdp_gdp_tradeoff", "function") that computes the \(\mu\)-GDP trade-off function.

When called:

  • Without arguments: Returns a data frame with columns alpha and beta giving points of the trade-off function on a canonical grid (alpha = seq(0, 1, by = 0.01)).

  • With an alpha argument: Returns a data frame with columns alpha and beta containing the Type-II error values corresponding to the specified Type-I error rates.

Arguments

mu

Numeric scalar specifying the \(\mu\) privacy parameter. Must be non-negative.

Formal definition

Gaussian differential privacy (Dong et al., 2022) arises as the trade-off function corresponding to distinguishing between two Normal distributions with unit variance and means differing by \(\mu\). Without loss of generality, the trade-off function is therefore, $$G_\mu := T\left(N(0, 1), N(\mu, 1)\right) \quad\text{for}\quad \mu \ge 0.$$ This leads to, $$G_\mu(\alpha) = \Phi\left(\Phi^{-1}(1-\alpha)-\mu\right)$$ where \(\Phi\) is the standard Normal cumulative distribution function.
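The closed form above can be evaluated directly in base R with pnorm() and qnorm(). The sketch below is illustrative only; gdp_tradeoff is a hypothetical stand-in, not the package's actual gdp() implementation.

```r
# Minimal sketch of G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu),
# assuming only base R; not the package's own gdp() code
gdp_tradeoff <- function(mu = 1) {
  stopifnot(is.numeric(mu), length(mu) == 1, mu >= 0)
  function(alpha) pnorm(qnorm(1 - alpha) - mu)
}

G1 <- gdp_tradeoff(1)
G1(0.05)  # Type-II error lower bound at Type-I error 0.05
```

Note that G_0 is the identity-complement f(alpha) = 1 - alpha (no privacy constraint beyond random guessing), and larger mu gives a uniformly lower curve, i.e. weaker privacy.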

The most natural way to satisfy \(\mu\)-GDP is by adding Gaussian noise to construct the randomised algorithm. Theorem 1 in Dong et al. (2022) identifies the correct variance of that noise for a given sensitivity of the statistic to be released. Let \(\theta(S)\) be the statistic of the data \(S\) which is to be released. Then the Gaussian mechanism is defined to be $$M(S) := \theta(S) + \eta$$ where \(\eta \sim N(0, \Delta(\theta)^2 / \mu^2)\) and, $$\Delta(\theta) := \sup_{S, S'} |\theta(S) - \theta(S')|$$ the supremum being taken over neighbouring data sets. The randomised algorithm \(M(\cdot)\) is then a \(\mu\)-GDP release of \(\theta(S)\).
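As a concrete instance of the mechanism above, consider releasing the mean of values bounded in [0, 1] from a data set of fixed size n, so the sensitivity is \(\Delta(\theta) = 1/n\). The function name below is illustrative, not part of the package's API.

```r
# Sketch of the Gaussian mechanism for a bounded mean (values in [0, 1]);
# with n records, sup |mean(S) - mean(S')| over neighbouring data sets is 1/n,
# so adding eta ~ N(0, (1/n)^2 / mu^2) yields a mu-GDP release of the mean
gaussian_mechanism <- function(x, mu) {
  n <- length(x)
  sensitivity <- 1 / n
  mean(x) + rnorm(1, mean = 0, sd = sensitivity / mu)
}

set.seed(1)
gaussian_mechanism(runif(100), mu = 1)  # noisy, privatised mean
```

Smaller mu forces a larger noise standard deviation (sensitivity / mu), making the release more private but less accurate.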

More generally, any mechanism \(M(\cdot)\) satisfies \(\mu\)-GDP if, $$T\left(M(S), M(S')\right) \ge G_\mu$$ for all neighbouring data sets \(S, S'\). In particular, one can seek the minimal \(\mu\) for a collection of trade-off functions using est_gdp().

Details

Creates a \(\mu\)-Gaussian differential privacy trade-off function for use in f-DP analysis and visualisation. For the formal definition of \(\mu\)-GDP, see the "Formal definition" section below.

The function returns a closure that stores the \(\mu\) parameter in its environment. The returned function can be called with or without an argument: without one, it returns points on a canonical grid; with a vector of Type-I error rates, it returns the corresponding Type-II errors.
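This dual calling convention can be sketched with a closure over a default alpha grid; make_gdp below is a hypothetical illustration of the pattern, not the package's gdp() source.

```r
# Illustrative closure: mu is captured in the enclosing environment, and
# alpha defaults to the canonical grid when no argument is supplied
make_gdp <- function(mu = 1) {
  function(alpha = seq(0, 1, by = 0.01)) {
    data.frame(alpha = alpha, beta = pnorm(qnorm(1 - alpha) - mu))
  }
}

head(make_gdp(1)())           # canonical grid of (alpha, beta) points
make_gdp(1)(c(0.05, 0.1))     # specific Type-I error rates
```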

References

Dong, J., Roth, A. and Su, W.J. (2022). “Gaussian Differential Privacy”. Journal of the Royal Statistical Society Series B, 84(1), 3–37. doi:10.1111/rssb.12454.

See Also

fdp() for plotting trade-off functions, est_gdp() for finding the choice of \(\mu\) that lower bounds a collection of trade-off functions.

Additional trade-off functions can be found in epsdelta() for classical \((\varepsilon, \delta)\)-differential privacy, and lap() for Laplace differential privacy.

Examples

# Gaussian DP with mu = 1
gdp_1 <- gdp(1.0)
gdp_1
gdp_1()  # View points on the canonical grid

# Stronger privacy with mu = 0.5
gdp_strong <- gdp(0.5)
gdp_strong

# Evaluate at specific Type-I error rates
gdp_1(c(0.05, 0.1, 0.25, 0.5))

# Plot and compare different mu values
fdp(gdp(0.5),
    gdp(1.0),
    gdp(2.0))

# Compare Gaussian DP with classical (epsilon, delta)-DP
fdp(gdp(1.0),
    epsdelta(1.0),
    epsdelta(1.0, 0.01),
    .legend = "Privacy Mechanism")
