Defense Against Free-Rider Attack from the Weight Evolving Frequency

Attacks, Defenses and Testing for Deep Learning

Abstract

Federated learning (FL), in which multiple clients collaborate to train a shared model without exchanging their individual data, is a form of distributed machine learning. Although FL has achieved unprecedented success in preserving data privacy, its vulnerability to "free-rider" attacks has attracted increasing attention. A number of defenses against free-rider attacks have been proposed for FL. Nevertheless, these methods may fail against highly camouflaged free-riders, and when more than 20% of the clients are free-riders, their effectiveness can drop dramatically. To tackle these challenges, we reconsider the defense problem from a new perspective: the frequency of model weight evolution. We observe that this frequency differs significantly between free-riders and benign clients during FL training. Motivated by this insight, we propose a novel defense based on the weight evolving frequency. Specifically, each client first collects the frequency of weight changes during local training into a WEF-Matrix; in each iteration, it uploads the WEF-Matrix of its local model together with the model weights to the server. The server then separates free-riders from benign clients based on the differences in their WEF-Matrices. Finally, the server uses a personalized approach to deliver different global models to the corresponding clients, preventing free-riders from obtaining a high-value model. Comprehensive experiments on five datasets and five models show that our method outperforms state-of-the-art baselines and identifies free-riders at an early stage of training. Furthermore, we verify the effectiveness of our method against adaptive attacks and visualize the WEF-Matrix during training to explain its effectiveness.
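The core mechanism described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation: the function names, the change threshold `eps`, and the outlier rule in `flag_free_riders` are all assumptions chosen for clarity. It counts, per weight, how often the weight changed noticeably across local iterations (a stand-in for the WEF-Matrix), then flags clients whose frequency profile is far from the population's typical profile.

```python
import numpy as np

def wef_matrix(weight_snapshots, eps=1e-6):
    """Illustrative WEF computation: given weight snapshots taken after each
    local iteration, count per weight how often it changed by more than eps."""
    snaps = np.stack(weight_snapshots)       # (iterations+1, n_weights)
    deltas = np.abs(np.diff(snaps, axis=0))  # per-iteration change of each weight
    return (deltas > eps).sum(axis=0)        # evolution frequency per weight

def flag_free_riders(client_wefs, threshold=0.5):
    """Illustrative server-side separation: flag clients whose WEF vector
    lies far from the per-weight median profile of all clients. A free-rider
    that fabricates or replays updates shows an atypical frequency profile."""
    wefs = np.stack(client_wefs).astype(float)
    center = np.median(wefs, axis=0)                 # typical frequency profile
    dists = np.linalg.norm(wefs - center, axis=1)    # deviation of each client
    cutoff = np.median(dists) + threshold * dists.std()
    return [i for i, d in enumerate(dists) if d > cutoff]

# Toy usage: four clients whose weights drift each iteration (benign-like)
# and one whose weights never change (free-rider-like).
rng = np.random.default_rng(0)
benign = [np.cumsum(rng.normal(0, 0.1, (11, 5)), axis=0) for _ in range(4)]
rider = np.zeros((11, 5))
wefs = [wef_matrix(list(s)) for s in benign + [rider]]
print(flag_free_riders(wefs))  # the static client stands out
```

The personalized-model step of the defense (serving a low-value model to flagged clients) would then simply branch on this flag list during aggregation; it is omitted here.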




Author information

Correspondence to Jinyin Chen.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Chen, J., Zhang, X., Zheng, H. (2024). Defense Against Free-Rider Attack from the Weight Evolving Frequency. In: Attacks, Defenses and Testing for Deep Learning. Springer, Singapore. https://doi.org/10.1007/978-981-97-0425-5_13


  • DOI: https://doi.org/10.1007/978-981-97-0425-5_13

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0424-8

  • Online ISBN: 978-981-97-0425-5

  • eBook Packages: Computer Science (R0)
