PPNNP: A privacy-preserving neural network prediction with separated data providers using multi-client inner-product encryption |
| |
Affiliation: | 1. Institute of Medical Information Security, Xuzhou Medical University, Xuzhou 221000, China; 2. School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China |
| |
Abstract: | A neural network in deep learning relies on a series of algorithms that endeavor to recognize underlying relationships in a set of data. To protect the privacy of users' datasets, traditional schemes perform the prediction task with only a single data provider in the system. In the real world, however, the data may come from multiple separate data providers rather than a single data source, since each provider may hold only partial features of a complete prediction sample. Multiple data providers must therefore cooperate to perform neural network prediction by sending their local data to a well-trained prediction model deployed on a remote cloud server to obtain a predictive label. However, the data owned by these providers usually contain a large amount of private information, which can lead to serious security problems once leaked. To resolve the security and privacy issues of data owned by multiple providers, in this paper we propose a Privacy-Preserving Neural Network Prediction model (PPNNP) that applies multi-client inner-product functional encryption to the first layer of the prediction model. Multiple data providers encrypt their data and upload it to a well-trained model deployed on the cloud server, and the server makes predictions by computing the corresponding inner products. The scheme provides sufficient privacy and security for the data while supporting different neural network architectures, including those with non-linear activation functions, on the remote server. We evaluate our scheme on real datasets and compare it with related schemes. Experimental results demonstrate that our scheme reduces the computational cost of the whole process while significantly reducing encryption time, and it achieves over 90% accuracy across different network architectures, even with non-linear activation functions. Meanwhile, our solution also reduces the communication overhead of the whole protocol. |
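The abstract's key structural observation is that the first (dense) layer of a neural network is a set of inner products, and an inner product over a feature vector split among several providers decomposes into a sum of per-provider partial inner products. This additivity is exactly what multi-client inner-product functional encryption exploits. The following toy sketch (plain NumPy, no cryptography, with hypothetical provider/feature sizes) illustrates only that decomposition, not the paper's actual encryption scheme:

```python
# Sketch: first-layer inner products decompose across feature-wise
# partitioned providers, i.e. <x, w> = sum_i <x_i, w_i>. This is the
# algebraic property that lets MCFE evaluate the first layer on data
# encrypted independently by each provider. Sizes below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# A complete prediction sample split feature-wise across 3 data providers.
parts = [rng.normal(size=4), rng.normal(size=3), rng.normal(size=5)]
x = np.concatenate(parts)            # full sample (never assembled in the real scheme)
W = rng.normal(size=(x.size, 8))     # first-layer weights held by the server

# Result the server would get if it saw the plaintext sample:
full = x @ W

# What the server can reconstruct from per-provider contributions,
# combined without any provider revealing its raw features to the others:
offsets = np.cumsum([0] + [p.size for p in parts])
partial = sum(parts[i] @ W[offsets[i]:offsets[i + 1]] for i in range(len(parts)))

assert np.allclose(full, partial)
```

Because the decomposition is exact, the server can apply any subsequent layers and activation functions (even non-linear ones) to the recovered first-layer outputs, which is why the scheme is not restricted to linear models.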
| |
Keywords: | Neural network prediction; Functional encryption; Model prediction; Inner product encryption |
This article is indexed in ScienceDirect and other databases.