Software Engineering of Adversarially Robust AI Systems

Authors

  • Harsha Reddy, Walmart, USA

DOI:

https://doi.org/10.15680/IJCTECE.2025.0804004

Keywords:

Adversarial Robustness, AI Security, Defensive Approaches, Software Engineering, Cyber Attacks, Model Performance, Adversarial Training, Gradient Masking, AI Defense, Scalable Systems

Abstract

This paper examines the software engineering practices required to build adversarially robust AI systems. Its main goal is to identify development practices that improve the resilience of AI applications against adversarial cyberattacks. The study follows a mixed-methods research strategy that combines case studies, experiments, and assessments of several defensive approaches. The core findings show that balancing model robustness against model accuracy is challenging, and that the most effective defensive techniques, such as adversarial training and gradient masking, can mitigate attacks. However, trade-offs between system efficiency and capability remain, and solutions must be both optimized and scalable. The study also shows that software engineering principles, namely layered active defenses, modularity, and rigorous testing, play a central role. In conclusion, it offers insights that can help developers and researchers build AI models that are highly resilient to adversarial threats without sacrificing performance.
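The abstract names adversarial training as a key defense. At its core, adversarial training augments the training set with inputs perturbed to maximize the loss; a standard way to generate such inputs is the fast gradient sign method (FGSM), which is not described in this paper and is used here only as an illustration. The sketch below, with all function names and parameters hypothetical, applies one FGSM step to a simple logistic-regression model to flip its prediction:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One FGSM step against a logistic-regression model.

    x, w: feature and weight vectors (lists of floats);
    y: true label in {-1, +1}; eps: per-feature attack budget.
    Loss is L = -log(sigmoid(y * w.x)), so dL/dx_i = -y * (1 - p) * w_i
    with p = sigmoid(y * w.x). FGSM moves each feature by eps in the
    direction of the sign of that gradient.
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    p = sigmoid(margin)
    grad = [-y * (1.0 - p) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

def predict(x, w):
    """Linear classifier: sign of the dot product."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
```

In adversarial training, the perturbed examples produced this way would be mixed back into the training batches, which is one source of the robustness-versus-accuracy trade-off the abstract describes.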

Published

2025-07-01

How to Cite

Software Engineering of Adversarially Robust AI Systems. (2025). International Journal of Computer Technology and Electronics Communication, 8(4), 11026-11034. https://doi.org/10.15680/IJCTECE.2025.0804004