
Adaptive Differential Privacy in Federated Learning Using Gradient Sensitivity Analysis


Varshith Ulisi & Uma Maheswara Rao Ulisi

Journal of Interdisciplinary Inquiry (2025) 1 (1).

DOI: https://doi.org/10.5281/zenodo.19711109

Open Access


Issue Section: Research

Keywords: federated learning, differential privacy, adaptive privacy mechanisms, gradient sensitivity, privacy-preserving machine learning, decentralized AI, gradient inversion attacks, client-side noise injection, edge computing, information leakage mitigation




Abstract

Federated Learning (FL) is emerging as a robust approach to distributed machine learning where data privacy and decentralized computation are vital. By training models directly on end-user devices and sharing only model updates with a central server, FL offers inherent privacy benefits over traditional centralized training paradigms. However, even without transferring raw data, model updates such as gradients can inadvertently leak sensitive information, compromising user privacy. This paper investigates the potential for privacy leakage in FL systems and introduces a novel adaptive differential privacy mechanism that adjusts noise levels based on the gradient sensitivity of each participating client. We define gradient sensitivity using the L2 norm of each client's model updates and propose a scalable, lightweight, and effective framework that injects privacy-preserving Gaussian noise proportionally. Our contributions include a formal definition of gradient norm sensitivity, a robust adaptive noise injection technique, and an in-depth empirical evaluation on the MNIST dataset. Comparative analysis with FedAvg and fixed-noise DP baselines shows that our method, GS-ADP (Gradient-Sensitive Adaptive Differential Privacy), achieves strong privacy guarantees with minimal accuracy loss. This research contributes a deployable and efficient solution to improve the privacy-utility tradeoff in federated learning architectures, especially for edge-AI and privacy-critical applications. (McMahan et al., 2017; Bonawitz et al., 2019) (Dwork & Roth, 2014; Abadi et al., 2016) (Nasr et al., 2019; Melis et al., 2019)
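The abstract's core mechanism — measuring each client's gradient sensitivity as the L2 norm of its update and injecting Gaussian noise in proportion to it — can be sketched as follows. This is only an illustrative reconstruction of the idea, not the paper's implementation: the function name `gs_adp_noise` and the parameters `clip_norm` and `base_sigma` are assumptions, and the exact scaling rule GS-ADP uses may differ.

```python
import numpy as np

def gs_adp_noise(update, clip_norm=1.0, base_sigma=0.5, rng=None):
    """Illustrative sketch of gradient-sensitive adaptive DP noise.

    Sensitivity is taken as the L2 norm of the client's update; the
    update is clipped to `clip_norm`, and Gaussian noise is scaled in
    proportion to the (clipped) sensitivity before being added.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)                    # gradient sensitivity (L2 norm)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # standard DP clipping
    sigma = base_sigma * min(norm, clip_norm)        # noise proportional to sensitivity
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Client-side use: perturb a local model update before sending it to the server.
noisy_update = gs_adp_noise(np.ones(4))
```

Under this scaling, clients whose updates carry little signal (small norm) receive correspondingly little noise, which is one plausible way to realize the privacy-utility tradeoff the abstract describes.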

Authors:

Varshith Ulisi & Uma Maheswara Rao Ulisi


Published 2025-04-23


How to Cite:

Ulisi, V. S., & Ulisi, U. M. R. (2026). Adaptive Differential Privacy in Federated Learning Using Gradient Sensitivity Analysis. The Journal of Interdisciplinary Inquiry, 1(1). https://doi.org/10.5281/zenodo.19710866

License:

© Copyright 2026 JII.



This work is licensed under a Creative Commons Attribution 4.0 International License.

Authors retain copyright and agree to license their articles with a Creative Commons Attribution (CC BY 4.0) International License.



References

    • Abadi, M., Chu, A., Goodfellow, I., et al. (2016). Deep learning with differential privacy. In Proceedings of the ACM Conference on Computer and Communications Security (pp. 308–318).

    • Bonawitz, K., Eichner, H., Grieskamp, W., et al. (2019). Towards federated learning at scale: System design. In Proceedings of the SysML Conference.

    • Brakerski, Z., & Vaikuntanathan, V. (2014). Efficient fully homomorphic encryption from (standard) LWE. SIAM Journal on Computing, 43(2), 831–871.

    • Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4), 211–407.

    • Geyer, R. C., Klein, T., & Nabi, M. (2017). Differentially private federated learning: A client level perspective. arXiv. https://arxiv.org/abs/1712.07557

    • Hitaj, B., Ateniese, G., & Perez-Cruz, F. (2017). Deep models under the GAN: Information leakage from collaborative deep learning. In Proceedings of the ACM Conference on Computer and Communications Security (pp. 603–618).

    • Jacobs, I. S., & Bean, C. P. (1963). Fine particles, thin films, and exchange anisotropy. In G. T. Rado & H. Suhl (Eds.), Magnetism (Vol. 3, pp. 271–350). Academic Press.

    • McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Proceedings of the International Conference on Artificial Intelligence and Statistics (pp. 1273–1282).

    • Melis, L., Song, C., De Cristofaro, E., & Shmatikov, V. (2019). Exploiting unintended feature leakage in collaborative learning. In Proceedings of the IEEE Symposium on Security and Privacy.

    • Nasr, M., Shokri, R., & Houmansadr, A. (2019). Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of the IEEE Symposium on Security and Privacy.

    • Shokri, R., & Shmatikov, V. (2015). Privacy-preserving deep learning. In Proceedings of the ACM Conference on Computer and Communications Security (pp. 1310–1321).

    • Tramèr, F., Zhang, F., Juels, A., Reiter, M. K., & Ristenpart, T. (2016). Stealing machine learning models via prediction APIs. In Proceedings of the USENIX Security Symposium.

    • Truex, S., Liu, L., Gursoy, M. E., et al. (2019). A hybrid approach to privacy-preserving federated learning. In Proceedings of the ACM Workshop on Artificial Intelligence and Security (pp. 1–6).

    • Ulisi, U. M. R. (2025). Decentralized intelligence: Democratizing AI development through blockchain and federated learning. In Proceedings of the 3rd International Conference on Artificial Intelligence and Machine Learning Applications (AIMLA) (pp. 1–14). https://doi.org/10.1109/AIMLA63829.2025.11041233

    • Wei, K., Li, J., Ding, M., et al. (2020). Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, 15, 3454–3469.

    • Zhu, L., Liu, Z., & Han, S. (2019). Deep leakage from gradients. In Advances in Neural Information Processing Systems, 32.


The Journal of Interdisciplinary Inquiry promotes the examination of complex questions through diverse methodological and theoretical lenses, fostering connections across disciplines.
