Explainable AI for Software Security Auditing
DOI: https://doi.org/10.15680/IJCTECE.2025.0804005

Keywords: Transparent AI, software security, automated review, compliance checking, explainable AI, trust building, decision making, security auditing, vulnerability detection

Abstract
The growing adoption of Artificial Intelligence (AI) in software security auditing has raised questions about the transparency and interpretability of AI-based decisions. This paper investigates the lack of explainability in existing AI-based auditing systems and proposes ways to close this gap with Explainable AI (XAI). The research aims to enhance trust and reliability in automated code reviews and compliance checks by integrating transparent AI models. The paper examines XAI techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) and compares their usefulness for improving transparency and vulnerability identification. A case study and real-world data show measurable gains in trust, accuracy, and auditing efficiency, such as [insert data, e.g., percentage improvements in detection rates, decrease in false positives]. The findings can inform the development of more accountable and reliable AI tools for software security auditing and provide insight into the future of transparent AI in automated security systems.
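To make the comparison concrete, the following minimal Python sketch shows how LIME and SHAP can be applied to a vulnerability-prediction classifier. It is an illustration only, not the paper's actual experimental setup: the feature names, synthetic data, and random-forest model are hypothetical, and the shap and lime packages are assumed to be installed.

# Illustrative sketch: explain a hypothetical vulnerability classifier with SHAP and LIME.
# Features, data, and model are made up for demonstration purposes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["cyclomatic_complexity", "loc", "num_unsafe_calls", "input_validation_score"]

# Synthetic stand-in for a labelled dataset of code metrics (1 = vulnerable).
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP: attribute one test sample's prediction to each input feature.
shap_values = shap.TreeExplainer(model).shap_values(X_test[:1])
print("SHAP attributions:", np.array(shap_values).squeeze())

# LIME: fit a local surrogate model around the same sample and report feature weights.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["benign", "vulnerable"], discretize_continuous=True)
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print("LIME attributions:", lime_exp.as_list())

In this kind of setup, SHAP assigns each feature an additive contribution to the model's output, while LIME weights features according to a local surrogate fitted around the individual prediction; both outputs can be attached to an audit report so a reviewer can see why a component was flagged as vulnerable.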

