Explainable AI (XAI) with Python: Transparent, Responsible, and Sustainable Solutions
DOI: https://doi.org/10.15680/IJCTECE.2023.0601002
Keywords:
Explainable AI (XAI), Python, SHAP, LIME, Shapash, Model Interpretability, Ethical AI, Sustainable AI Practices
Abstract
Explainable Artificial Intelligence (XAI) aims to make machine learning models transparent and interpretable, fostering trust and accountability. This paper explores the integration of XAI techniques in Python, focusing on their application in building responsible and sustainable AI solutions. Using Python libraries such as SHAP, LIME, and Shapash, we demonstrate how to enhance model interpretability and support ethical AI practices.
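To make the workflow concrete, the sketch below shows one common pattern with SHAP: fitting a scikit-learn model and computing per-feature attributions with TreeExplainer. The dataset, model, and plot choice here are illustrative assumptions for the sketch, not requirements of the library.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted tree ensemble could stand in here.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global summary: features ranked by mean absolute SHAP value,
# showing the direction and spread of each feature's effect.
shap.summary_plot(shap_values, X_test)

A complementary local view, for example with LIME's LimeTabularExplainer, explains a single prediction rather than the model's overall behavior; the two perspectives together support the transparency goals discussed above.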
References
1. Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (Vol. 30).
2. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
3. Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
4. Prajapati, S. Explainable AI (XAI) — A guide to 5 Packages in Python to Explain Your Models. Medium.
5. Wanjantuk, P. Python Tools for Explainable AI (XAI). Medium.
6. Yang, W., Le, H., Laud, T., Savarese, S., & Hoi, S. C. H. (2022). OmniXAI: A Library for Explainable AI. arXiv preprint.