Client-private secure aggregation for privacy-preserving federated learning
Privacy-preserving federated learning (PPFL) is a paradigm of distributed privacy-preserving machine learning training in which a set of clients, each holding siloed training data, jointly compute a shared global model under the orchestration of an aggregation server. The system has the property that no party learns any information about any client’s training data beyond what could be inferred from the global model. The core cryptographic component of a PPFL scheme is the secure aggregation protocol, a secure multi-party computation protocol in which the server securely aggregates the clients’ locally trained models into an aggregated global model, which it distributes to the clients. However, in many applications the global model represents a trade secret of the consortium of clients, which they may not wish to reveal in the clear to the server. In this work, we propose a novel model of secure aggregation, called client-private secure aggregation (CPSA), in which the server computes an encrypted global model that only the clients can decrypt. We provide three explicit constructions of CPSA that exhibit varying trade-offs. We also present experimental results demonstrating the practicality of our constructions in the cross-silo setting when scaled to 250 clients.
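To make the secure aggregation primitive concrete, the following is a minimal toy sketch of classic pairwise-masking secure aggregation (in the spirit of additive-mask protocols, not the CPSA constructions of this work): each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the server's sum and the server learns only the aggregate. All function names and the use of floating-point vectors are illustrative assumptions; note that in this classic setting the server still sees the plaintext aggregate, which is precisely what CPSA is designed to avoid.

```python
import random

def pairwise_masks(n_clients, dim, seed=0):
    # Illustrative only: each unordered pair (i, j), i < j, shares a random
    # mask vector. In a real protocol these would come from key agreement.
    rng = random.Random(seed)
    return {
        (i, j): [rng.uniform(-1, 1) for _ in range(dim)]
        for i in range(n_clients)
        for j in range(i + 1, n_clients)
    }

def mask_update(i, update, masks, n_clients):
    # Client i adds the shared mask for every j > i and subtracts it for
    # every j < i; across all clients the masks cancel in the sum.
    masked = list(update)
    for j in range(n_clients):
        if j == i:
            continue
        m = masks[(min(i, j), max(i, j))]
        sign = 1 if i < j else -1
        masked = [x + sign * y for x, y in zip(masked, m)]
    return masked

def server_aggregate(masked_updates):
    # The server only ever sees masked vectors; summing cancels the masks,
    # revealing the average model (in plaintext, unlike CPSA).
    n, dim = len(masked_updates), len(masked_updates[0])
    return [sum(u[k] for u in masked_updates) / n for k in range(dim)]
```

In a CPSA protocol, by contrast, the server's output would itself be an encryption of the aggregate, decryptable only by the clients.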