Human–AI Collaboration in Security Operations: Measuring Alert Trust, Automation Bias, and Analyst Upskilling in AI-Augmented SOC Environments

Authors

  • Dr. Alex Mathew, Dept. of Cybersecurity, Bethany College, USA

DOI:

https://doi.org/10.15680/IJCTECE.2025.0805011

Keywords:

Human–AI collaboration, automation bias, SOC operations, cybersecurity, AI-assisted decision-making

Abstract

The rapid integration of artificial intelligence (AI) into Security Operations Centers (SOCs) has created both opportunities and challenges for cybersecurity teams. This paper examines how the level of AI automation affects analyst trust, decision-making quality, and skill development, focusing on three areas: alert trust, automation bias, and skill adaptation. In an experimental SOC simulation, participants worked with AI detection tools that generated alerts of varying prediction accuracy, allowing the effects on trust and performance to be measured. Findings suggest that moderate automation promotes well-calibrated trust and improved decision-making accuracy, whereas high automation can induce over-reliance and reduced vigilance. Performance feedback also strengthened analysts' independent detection skills and confidence. These results underscore the need for trust-sensitive, adaptive training systems that support effective analyst–AI collaboration in cybersecurity.
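To make the experimental setup concrete, the sketch below illustrates the kind of AI-assisted triage loop the abstract describes: a detector emits alert verdicts at a configurable accuracy, a simulated analyst accepts or overrides each verdict, and two scores capture decision quality and automation bias (how often the analyst rubber-stamps AI errors). This is an illustrative reconstruction, not the paper's actual simulator; the function names, the 30% threat base rate, and the toy analyst model are all assumptions.

```python
import random

def simulate_alerts(n_alerts: int, ai_accuracy: float, seed: int = 0):
    """Generate (ground_truth, ai_verdict) pairs.

    ground_truth: True if the event is genuinely malicious
                  (assumed 30% base rate of real threats).
    ai_verdict:   the AI's classification, correct with
                  probability `ai_accuracy` (an assumed knob
                  standing in for the automation level).
    """
    rng = random.Random(seed)
    events = []
    for _ in range(n_alerts):
        truth = rng.random() < 0.3
        correct = rng.random() < ai_accuracy
        verdict = truth if correct else (not truth)
        events.append((truth, verdict))
    return events

def decision_accuracy(events, analyst_decisions):
    """Fraction of analyst decisions matching ground truth."""
    return sum(1 for (t, _), d in zip(events, analyst_decisions)
               if d == t) / len(events)

def automation_bias_rate(events, analyst_decisions):
    """Fraction of AI errors the analyst rubber-stamped.

    A high value suggests over-reliance: the analyst agreed
    with the AI verdict even when the AI was wrong.
    """
    ai_errors = [(v, d) for (t, v), d in zip(events, analyst_decisions)
                 if v != t]
    if not ai_errors:
        return 0.0
    followed = sum(1 for v, d in ai_errors if d == v)
    return followed / len(ai_errors)

if __name__ == "__main__":
    for acc in (0.6, 0.8, 0.95):  # low / moderate / high automation accuracy
        events = simulate_alerts(500, ai_accuracy=acc)
        # Toy analyst: defers to the AI 90% of the time, otherwise guesses.
        rng = random.Random(1)
        decisions = [v if rng.random() < 0.9 else rng.random() < 0.5
                     for _, v in events]
        print(f"accuracy={acc:.2f}  "
              f"analyst_acc={decision_accuracy(events, decisions):.2f}  "
              f"bias_rate={automation_bias_rate(events, decisions):.2f}")
```

Sweeping `ai_accuracy` from low to high mirrors the manipulation of automation level described in the abstract; in an actual study, the toy analyst model would be replaced by participants' recorded accept/override decisions.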


Published

2025-10-10

How to Cite

Mathew, A. (2025). Human–AI Collaboration in Security Operations: Measuring Alert Trust, Automation Bias, and Analyst Upskilling in AI-Augmented SOC Environments. International Journal of Computer Technology and Electronics Communication, 8(5), 11375–11380. https://doi.org/10.15680/IJCTECE.2025.0805011