Trust and Accountability in Agentic AI Systems

Authors

  • Arnav Chabbra

DOI:

https://doi.org/10.15680/IJCTECE.2025.0802008

Keywords:

Agentic AI, Trust level, System accuracy, Fairness score, Reliability score, Transparency score

Abstract

This article examines how explainability, fairness, and reliability contribute to user trust in agentic AI systems. The study shows how these guiding principles can be integrated into AI technology to promote transparency and accountability, leading to greater acceptance and dependability. Drawing on real-world case studies, including autonomous vehicles and AI in healthcare, the work identifies the challenges and opportunities in incorporating these qualities into AI systems. The study applies a mixed-method approach, combining qualitative analysis with data-driven evaluation criteria, to measure trust-building processes. Key findings are that a lack of transparency and fairness can seriously undermine user trust, and that systems designed with explainability and accountability in mind achieve higher adoption and user satisfaction. The paper concludes with recommendations for AI developers to emphasize these areas, so that AI systems become more trustworthy, adaptive, and user-friendly.
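The keywords name several component metrics (system accuracy, fairness score, reliability score, transparency score) that feed into an overall trust level. The paper does not publish its aggregation formula, so the sketch below is a purely illustrative assumption: a weighted average of the four component scores, with the function name, weights, and value ranges invented for this example.

```python
# Hypothetical composite trust score: a weighted average of the
# component metrics named in the paper's keywords. The weights and
# the formula itself are illustrative assumptions, not the paper's method.

def trust_level(accuracy, fairness, reliability, transparency,
                weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine component scores (each in [0, 1]) into one trust level."""
    scores = (accuracy, fairness, reliability, transparency)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("component scores must lie in [0, 1]")
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * s for w, s in zip(weights, scores))

# Example: a system strong on accuracy but weak on transparency
# ends up with a middling overall trust level.
print(round(trust_level(0.95, 0.80, 0.85, 0.40), 3))  # 0.75
```

Unequal weights could model the paper's finding that transparency and fairness weigh especially heavily on user trust, e.g. `weights=(0.2, 0.3, 0.2, 0.3)`.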

Published

2025-03-08

How to Cite

Trust and Accountability in Agentic AI Systems. (2025). International Journal of Computer Technology and Electronics Communication, 8(2), 10380-10387. https://doi.org/10.15680/IJCTECE.2025.0802008