Abstract
Federated learning provides a privacy-preserving mechanism for multiple participants to collaboratively train machine learning models without exchanging private data. Existing federated learning algorithms can aggregate CNN models when the dataset is horizontally partitioned, but they cannot be applied to vertically partitioned datasets. In this work, we address the image classification task in the vertical federated learning setting, where each participant holds an incomplete piece of every image sample. We propose an approach called VFedConv to solve this problem and train CNN models without revealing raw data. Unlike traditional federated learning algorithms, which share model parameters at each training iteration, VFedConv shares hidden feature maps. Each client builds a local feature extractor and transmits the extracted feature maps to the server, where a classifier model is trained with the feature maps as input and the labels as output. Furthermore, we put forward a model transfer method to improve final performance. Extensive experiments demonstrate that the accuracy of VFedConv is close to that of a centrally trained model.
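The data flow the abstract describes can be illustrated with a minimal NumPy sketch. This is not the paper's architecture: the single linear-plus-ReLU map below is a hypothetical stand-in for each client's local CNN feature extractor, and the server head is a plain linear classifier. The sketch only shows the vertical-partition protocol: each client computes feature maps from its own image piece and sends those, never the raw pixels; the server concatenates them and classifies.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_feature_extractor(piece, weights):
    """Hypothetical local extractor: one linear map + ReLU standing in
    for the small CNN each client would train on its image piece."""
    return np.maximum(piece @ weights, 0.0)

# Two clients, each holding one half of every (flattened) 28x28 image.
n_samples, half_dim, feat_dim = 8, 392, 16
pieces = [rng.normal(size=(n_samples, half_dim)) for _ in range(2)]
w_clients = [rng.normal(scale=0.1, size=(half_dim, feat_dim)) for _ in range(2)]

# Each client transmits only its extracted feature maps to the server.
feature_maps = [client_feature_extractor(p, w)
                for p, w in zip(pieces, w_clients)]

# Server side: concatenate the clients' feature maps and apply a
# classifier head (here, a random linear layer producing 10 logits).
server_input = np.concatenate(feature_maps, axis=1)   # shape (8, 32)
w_server = rng.normal(scale=0.1, size=(2 * feat_dim, 10))
logits = server_input @ w_server
pred = logits.argmax(axis=1)                          # one label per sample
print(server_input.shape, pred.shape)
```

In an actual training loop the server would backpropagate the classification loss through the concatenated feature maps so each client can update its local extractor, which is where the privacy benefit lies: gradients and feature maps cross the boundary, raw image pieces do not.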
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (No. 61876019).
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Sha, T., Yu, X., Shi, Z., Xue, Y., Wang, S., Hu, S. (2021). Feature Map Transfer: Vertical Federated Learning for CNN Models. In: Tan, Y., Shi, Y., Zomaya, A., Yan, H., Cai, J. (eds) Data Mining and Big Data. DMBD 2021. Communications in Computer and Information Science, vol 1454. Springer, Singapore. https://doi.org/10.1007/978-981-16-7502-7_4
DOI: https://doi.org/10.1007/978-981-16-7502-7_4
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-7501-0
Online ISBN: 978-981-16-7502-7
eBook Packages: Computer Science (R0)