AI-Augmented Network Security Using Large Language Models for Policy Intelligence and Misconfiguration Detection
DOI: https://doi.org/10.15680/IJCTECE.2025.0804008
Keywords: Network Security, Large Language Models, Policy Misconfiguration, Zero Trust, SASE
Abstract
Modern enterprises rely on firewalls, SASE, cloud IAM, and micro-segmentation to control access, producing large and complex policy sets. Most security incidents still arise from misconfigurations, which traditional static tools struggle to detect when the errors are semantic, span multiple layers, or result from policy drift. This paper proposes an AI-augmented framework that combines Large Language Models with a unified policy representation and a knowledge graph to detect misconfigurations, conflicts, and drift, and to explain remediations. Experiments on enterprise-style datasets show substantially higher detection accuracy and clearer explanations than conventional tools provide.
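To make the abstract's pipeline concrete, the sketch below illustrates one plausible reading of it: policies from different layers normalized into a unified rule representation, a simple cross-layer conflict (shadowing) check, and a prompt-building step that hands each finding to an LLM for explanation and remediation. The paper does not publish code, so every name here (UnifiedRule, find_shadowed_rules, build_remediation_prompt) is a hypothetical illustration, not the authors' implementation.

```python
# Illustrative sketch only; all identifiers are hypothetical and the
# conflict check is deliberately minimal (exact src/dst match).
from dataclasses import dataclass

@dataclass(frozen=True)
class UnifiedRule:
    """One normalized policy rule, regardless of source layer
    (firewall, SASE, cloud IAM, micro-segmentation)."""
    layer: str      # e.g. "firewall", "cloud-iam"
    priority: int   # lower number = evaluated first
    src: str        # source identity or CIDR
    dst: str        # destination identity or CIDR
    action: str     # "allow" or "deny"

def find_shadowed_rules(rules):
    """Flag rules that can never take effect because an earlier rule
    with the opposite action matches the same src/dst pair, i.e. the
    kind of semantic, cross-layer conflict the framework targets."""
    ordered = sorted(rules, key=lambda r: r.priority)
    findings = []
    for i, later in enumerate(ordered):
        for earlier in ordered[:i]:
            if (earlier.src == later.src and earlier.dst == later.dst
                    and earlier.action != later.action):
                findings.append((earlier, later))
    return findings

def build_remediation_prompt(finding):
    """Pack a detected conflict into a prompt so an LLM can generate a
    human-readable explanation and a suggested remediation."""
    earlier, later = finding
    return (
        "Two access-control rules conflict across layers.\n"
        f"Rule A ({earlier.layer}, priority {earlier.priority}): "
        f"{earlier.action} {earlier.src} -> {earlier.dst}\n"
        f"Rule B ({later.layer}, priority {later.priority}): "
        f"{later.action} {later.src} -> {later.dst}\n"
        "Explain why Rule B is shadowed and propose a remediation."
    )

if __name__ == "__main__":
    rules = [
        UnifiedRule("firewall", 10, "10.0.0.0/8", "db-subnet", "allow"),
        UnifiedRule("cloud-iam", 20, "10.0.0.0/8", "db-subnet", "deny"),
    ]
    for finding in find_shadowed_rules(rules):
        print(build_remediation_prompt(finding))
```

In the full framework as described, the knowledge graph would supply richer context (asset roles, trust zones, policy provenance) to both the detector and the prompt; this sketch keeps only the rule-level core to show the shape of the pipeline.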

