
June 5, 2020

Differentially private federated GANs in the wild

This post is a gentle survey of differentially private generative adversarial models in the federated context, inspired by the work of (Augenstein et al 2019), accepted at ICLR 2020.

Billions of mobile phones & edge devices generate data each day: the global Internet traffic will reach more than 150 TB per second by 2022.

Source: UNCTAD Digital Economy Report 2019

This decentralized data presents an opportunity to improve user experience: make more informed product decisions and notify users about anomalies in their usage patterns. But how can we enhance utility for the end user without compromising their privacy and security?

Traditionally, systems of advanced analytics are implemented using a centralized architecture: both machine learning models and data are colocated in one place. Yet latency, patchy network coverage, bandwidth limitations, poor battery life, and large volumes or high sensitivity of the data all make the implementation of a centralized modus operandi problematic.

One solution is to train models on-device, with each client living in its own autonomous bubble. Nothing leaves the device, which addresses all of the concerns above at once. This tactic, however, falls short of our goal of enhancing user experience: a single device simply does not hold enough data to train a good model.

Another option is to pre-train an initial model on the server using proxy data, push the weights and fine-tune the model on-device. Unfortunately, good proxy samples are hard to find, and once the model is deployed, clients can rarely learn new patterns and find their way out of local minima. For example, consider a smart keyboard app autocompleting sentences. Whenever a new word or abbreviation becomes popular, the signal is too weak for each client to pick up on their own.

In an ideal situation we would like to have a system where clients contribute to the common good and gain benefits in return. That's the essence of federated learning: a central server coordinates training of a shared global model, receiving updates from independent clients and disseminating improved parameters back to the devices. Instead of contributing data, clients contribute locally-trained models. But what are the privacy properties of such a scheme?
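To make the coordination loop concrete, here is a minimal sketch of federated averaging for a toy linear model in plain NumPy. The model, loss, learning rate and number of clients are illustrative assumptions, not the setup used later in this post.

```python
import numpy as np

def compute_gradient(w, x, y):
    """Gradient of squared error for a toy linear model y_hat = x @ w (illustrative stand-in)."""
    return 2.0 * x * (x @ w - y)

def local_update(global_w, client_data, lr=0.01, epochs=1):
    """Client side: start from the global weights, train locally, return only the weight delta."""
    w = global_w.copy()
    for _ in range(epochs):
        for x, y in client_data:
            w -= lr * compute_gradient(w, x, y)
    return w - global_w

def federated_round(global_w, sampled_clients):
    """Server side: collect client deltas, average them, and update the global model."""
    deltas = [local_update(global_w, data) for data in sampled_clients]
    return global_w + np.mean(deltas, axis=0)

# Toy usage: three clients, each with a handful of (features, target) pairs.
rng = np.random.default_rng(0)
clients = [[(rng.normal(size=3), rng.normal()) for _ in range(5)] for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, clients)
```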

Differential privacy levels the risk

To understand the crux of differential privacy, suppose that there is a database of medical records, and one of them is yours. We want to use the symptoms, test results and other records in this database to build a predictive model of COVID-19, but without compromising the privacy of the participants. Let's ask a counterfactual question and compare two alternatives: in one we build the model on the original database that includes your records, and in the other we repeat the exercise with your medical records removed.

Differential privacy says that any harm that might come to you from the analysis is nearly identical in both cases. So differential privacy does not preclude all the negative consequences that might come from data analysis: these bad outcomes would have happened anyway, even if your data had been excluded.

For example, if you often skip physical activity during the day and a study comes out associating sedentary lifestyle with cardiovascular disease, then no matter whether you have participated in the study, you might experience negative consequences like rising premiums from insurance companies.

Differential privacy provides a well-tested formalization of how the information leaks from the private data, allowing a public release of model parameters with a strong guarantee: adversaries are severely limited in what they can learn about the original training data based on analyzing the parameters, even when they have access to arbitrary side information.

Of all the defenses available to a modeler, including API hardening (e.g. limiting the number of requests to thwart query-based attacks), data sanitization, model choice (Bayesian models vs decision trees), fit control and regulation, only differential privacy provides provable guarantees. It is typically achieved by perturbing one of the following: the input data before training, the gradients or parameter updates during training, or the model's outputs after training.
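As a rough illustration of the second option, here is a sketch of how a single gradient or model update could be clipped and noised in the style of DP-SGD. The clipping norm and noise multiplier are arbitrary example values, and a real deployment would also need careful privacy accounting.

```python
import numpy as np

def dp_perturb(update, l2_norm_clip=1.0, noise_multiplier=1.1, rng=np.random.default_rng(0)):
    """Clip the update to a fixed L2 norm, then add Gaussian noise calibrated to that norm."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, l2_norm_clip / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip, size=update.shape)
    return clipped + noise
```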

How generative networks aid differentially private federated learning

To develop a machine learning model, engineers, analysts and researchers have to build intuition about the data, which is often achieved by manual inspection of raw representative samples, outliers and failing examples. In the federated context, however, raw data never leaves the device, and even if manual inspection were an option, it would open broad leeway for violating user privacy.

Instead of direct examination, modelers can rely on synthetic data, an approach first demonstrated by (Augenstein et al 2019): they showed how developer needs can be met by generating synthetic examples from a privacy-preserving federated generative adversarial network.

GANs have received much attention in recent years thanks to numerous successes, especially in generating realistic images and audio.

They work by alternately training two networks. One is a generator which maps a random input vector in a low-dimensional latent space into a rich, high-dimensional generative output like an image. The other is a discriminator, which judges whether an input image is ‘real’ (originating from a dataset of actual images) or ‘fake’ (created by the generator). Each network tries to defeat the other; the generator’s training objective is to create content that the discriminator is unable to discern from real data, and the discriminator’s training objective is to improve its ability to differentiate between real and generated content.
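The alternating game can be summarized in a few lines of TensorFlow. The sketch below assumes generator and discriminator are Keras models (with the discriminator emitting raw logits) and uses the standard cross-entropy objectives; it is a generic, non-federated, non-private training step, not the one used in the experiments later.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(generator, discriminator, g_opt, d_opt, real_images, latent_dim=64):
    """One alternating update: first the discriminator, then the generator."""
    batch = tf.shape(real_images)[0]
    noise = tf.random.normal([batch, latent_dim])

    # Discriminator step: real images should be classified as real, generated ones as fake.
    with tf.GradientTape() as tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))

    # Generator step: try to make the discriminator label generated images as real.
    with tf.GradientTape() as tape:
        fake_logits = discriminator(generator(noise, training=True), training=True)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    g_opt.apply_gradients(zip(tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```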

In contrast to discriminative models, which learn the conditional distribution \(p(y \mid x)\), generative models learn the joint distribution \(p(x, y)\) and can therefore be applied to data synthesis. Such neural networks either approximate a likelihood function explicitly or, like GANs, serve purely as a mechanism for drawing samples.

Adversarial curses

Unfortunately, GANs are notorious for being hard to train and suffering from two major issues: mode collapse and vanishing gradients.

Mode collapse

Most often GANs are designed to produce a wide variety of outputs: a different face, for example, for every random input to your face generator.

Sometimes the generator stumbles upon a very plausible output and learns to produce only that output. This is a natural temptation, since the generator's objective is to produce whatever seems most plausible to the discriminator. In this case the discriminator's best strategy is to learn to always reject that output. However, if the discriminator then gets stuck in a local minimum, the generator can easily find, in the next iteration, a new output that is most plausible to the current discriminator.

As a result the generator rotates through a small set of possible outputs, while the discriminator jumps from one local minimum to the next.

To alleviate mode collapse, (Augenstein et al 2019) use the Wasserstein loss: provided the discriminator does not get stuck in local minima, it learns to reject the outputs the generator clings to and forces the generator to try something new.
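For reference, the Wasserstein objectives replace the cross-entropy losses with raw critic scores. The sketch below shows only the loss functions and omits the Lipschitz constraint (weight clipping or a gradient penalty) that a full WGAN setup also requires; it is not necessarily the exact formulation used in the paper's code.

```python
import tensorflow as tf

def wasserstein_d_loss(real_scores, fake_scores):
    """Critic loss: push scores for real samples up and scores for generated samples down."""
    return tf.reduce_mean(fake_scores) - tf.reduce_mean(real_scores)

def wasserstein_g_loss(fake_scores):
    """Generator loss: make the critic score generated samples as highly as possible."""
    return -tf.reduce_mean(fake_scores)
```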

Vanishing gradients

As the generator improves with training, the discriminator's job gets harder: the difference between real and fake content becomes subtler and subtler. If the generator's performance is perfect, the discriminator has nothing left to do but flip a coin to make a prediction.

Thus, the discriminator feedback gets less meaningful over time. If the GAN continues to train past the point when the discriminator's output is random, then the generator trains on junk feedback, and its own quality may collapse.

Differential privacy is a powerful tool for keeping the model out of this trap, thanks to its gradient clipping mechanism.

Tentative application of differentially private federated GANs: triplet loss

Let's consider GANs applied to training differentially private federated embeddings with the triplet loss, popularized by FaceNet for learning face embeddings.

Triplet loss is a loss function where an input (an anchor) is compared to a positive input and a negative input. For example, a sentence embedding might be an anchor, the embedding of its paraphrased version a positive input, and the embedding of an unrelated sentence a negative. The distance from the anchor to the positive input is minimized, and the distance to the negative input is maximized. Triplet loss is often used for measuring semantic similarity between samples using embeddings in cases when the number of classes is huge and more traditional approaches of classification are not applicable.

Computing the triplet loss on a user’s device requires access to negative examples of data from other users, which is not directly possible in the federated context. Training differentially private federated generative models offers a solution to this problem, since such a model can serve as an engine providing synthetic negative examples for the triplet loss calculation.
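Here is a minimal sketch of how a client could compute a triplet loss with generator-produced negatives. The squared-distance, margin-based formulation below is the standard one; the encoder model, margin value and the idea of passing synthetic negatives through the same encoder are assumptions for illustration rather than the authors' recipe.

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss over batches of embeddings of shape [batch, dim]."""
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))

def local_triplet_loss(encoder, generator, anchors, positives, latent_dim=64):
    """On-device: embed the user's own anchor/positive pairs together with synthetic
    negatives sampled from the (differentially private) federated generator."""
    batch = tf.shape(anchors)[0]
    synthetic_negatives = generator(tf.random.normal([batch, latent_dim]), training=False)
    return triplet_loss(encoder(anchors), encoder(positives), encoder(synthetic_negatives))
```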

For federated generative models to be useful and broadly applicable, they should require minimal tuning: excessive experimentation needed to achieve convergence erodes the privacy guarantees. They must also be capable of synthesizing relevant content in scenarios where the signal-to-noise ratio is low: user data is highly heterogeneous, and a small amount of junk data should not affect the results too much.

Let's test how robust the differentially private federated GAN is to the choice of parameters.

Experiments: ablation for differentially private federated GAN

We conduct experiments using the TensorFlow implementation of the federated GAN model by (Augenstein et al 2019).

Two sets of model parameters are updated by alternating the minimization of two loss functions.

The key insight noticed by the authors is that only the discriminator’s training step involves the use of real user data (private and restricted to the user’s device); the generator training step does not require real user data, and thus can be computed at the coordinating server via a traditional (non-federated) gradient update.

The second insight is that the generator's loss is a function of the discriminator. As observed in earlier work on differentially private GANs, if the discriminator is trained under differential privacy and the generator is trained only through the discriminator, then the generator inherits the same level of privacy as the discriminator. No additional steps such as clipping or noising are necessary when applying the generator gradient update.

The DP-FedAvg-GAN algorithm for the generative setting describes how to train a federated GAN with differential privacy guarantees. The discriminator update closely resembles the update in standard DP-FedAvg, with each round concluding in a generator update at the server. The discriminator is explicitly trained under differential privacy, and since the generator is only exposed to the output of the discriminator, it shares the same differential privacy guarantees. Source: (Augenstein et al 2019)
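The round structure can be sketched in a few lines. In the sketch below the discriminator and generator weights are flattened into single NumPy vectors for brevity, and the local discriminator training and server-side generator training are passed in as stand-in callbacks; only the clip, sum, noise and average aggregation follows the paper's description directly, while everything else is a simplification of my own rather than the authors' TensorFlow Federated code.

```python
import numpy as np

def dp_fedavg_gan_round(disc_w, gen_w, client_datasets, local_disc_update, server_gen_update,
                        clip=0.1, noise_multiplier=1.0, rng=np.random.default_rng(0)):
    """One round in the spirit of DP-FedAvg-GAN: a federated, differentially private
    discriminator update followed by a generator update at the server."""
    # 1. Each sampled client trains the discriminator locally on its own real data
    #    and reports only the resulting weight delta.
    deltas = [local_disc_update(disc_w, gen_w, data) for data in client_datasets]

    # 2. Clip each client delta to a bounded L2 norm.
    clipped = [d * min(1.0, clip / (np.linalg.norm(d) + 1e-12)) for d in deltas]

    # 3. Sum the clipped deltas, add Gaussian noise with std noise_multiplier * clip,
    #    then divide by the number of participating clients.
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_multiplier * clip, size=disc_w.shape)) / len(clipped)
    new_disc_w = disc_w + noisy_mean

    # 4. The generator is updated at the server against the new discriminator,
    #    using only synthetic samples, so it never touches real user data.
    new_gen_w = server_gen_update(gen_w, new_disc_w)
    return new_disc_w, new_gen_w

# Toy usage with stand-in callbacks (no real training happens here).
toy_local = lambda d_w, g_w, data: np.full_like(d_w, 0.05)
toy_server = lambda g_w, d_w: g_w  # no-op generator step, purely for illustration
disc_w, gen_w = np.zeros(4), np.zeros(4)
disc_w, gen_w = dp_fedavg_gan_round(disc_w, gen_w, [None] * 10, toy_local, toy_server)
```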

The experiment is a random search on two parameters:

dp_l2_norm_clip: the L2 norm to which the discriminator weight-delta vectors from the clients are clipped.

The lower the clipping, the more private the model.

dp_noise_multiplier: Gaussian noise is added to the sum of the (clipped) discriminator weight deltas from the clients. The standard deviation of this noise is the product of dp_noise_multiplier and the clipping norm (dp_l2_norm_clip). Note that the noise is added before dividing by the number of clients per round; see the sketch below.

The greater the noise, the more private the model.
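Put together, the two knobs interact roughly like this. The snippet simply restates the description above using the experiment's parameter names and treats the summed, already clipped client deltas as a single NumPy vector.

```python
import numpy as np

def aggregate_deltas(clipped_delta_sum, num_clients, dp_l2_norm_clip, dp_noise_multiplier,
                     rng=np.random.default_rng(0)):
    """Noise with std dp_noise_multiplier * dp_l2_norm_clip is added to the summed
    (clipped) deltas before dividing by the number of clients in the round."""
    noise = rng.normal(0.0, dp_noise_multiplier * dp_l2_norm_clip, size=clipped_delta_sum.shape)
    return (clipped_delta_sum + noise) / num_clients
```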

Ablation results: each square is a 6x6 grid of images generated from normal noise after training on Google Colab for around 30 minutes per 50 rounds. The bottom-left corner represents the parameter range yielding the most stable results.

The source code is available via https://github.com/sdll/federated-triplet-loss.

Examples of successful convergence after 1000 rounds

Three sample grids with their corresponding training logs.

Examples of unsuccessful convergence after 1000 rounds

Two sample grids with their training logs; in the second run the model diverged early in the process.

Conclusions

Even though the authors conducted extensive experiments, they did not address the challenge of securely and privately sharing data between clients for the purpose of local fine-tuning, and they left the question of model stability with respect to the noise parameters for further research.

The experiments above show that the differentially private federated GAN is hypersensitive to the choice of the gradient clipping and noise scale parameters. Moreover, as (Augenstein et al 2019) show, the model provides sensible privacy guarantees only if the modeler has a user base of more than 1M users. Even if a modeler succeeds in training a conditional version of the model to convergence on a smaller dataset, they would most likely have to conduct an expensive search for the optimal parameters of the model in production.

From the business point of view, building solutions with differentially private federated generative adversarial models under the hood is only relevant for big tech, not startups.