AI-Augmented Quality Engineering for MLOps: Intelligent Test Orchestration and Model Reliability on AWS

Authors

  • Lingaraj Kothokatta, Test Lead, Texas, USA

DOI:

https://doi.org/10.15680/IJCTECE.2023.0604011

Keywords:

AI-Augmented Quality Engineering, Model Reliability, MLOps, AWS SageMaker

Abstract

This paper introduces an AI-augmented quality engineering framework for MLOps, centered on intelligent test orchestration and model reliability on AWS. The framework automates validation of model artifacts, hyperparameter configurations, and inference endpoints using AWS SageMaker and EC2. Adaptive regression methods reduce redundant tests while preserving statistical confidence. Latency, resource utilization, and consistency under stress are monitored through AWS CloudWatch. Benchmarking pipelines verify pipeline stability across retraining cycles and evolving datasets. The findings demonstrate improved regression efficiency, inference reliability, and monitoring accuracy. The framework offers a generalized, scalable, and automated approach to assuring AI systems, enabling robust governance of machine learning deployments in dynamic cloud environments.
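The adaptive regression idea described above — trimming redundant tests while keeping a statistical confidence target — can be sketched in a few lines. The following is an illustrative example only, not the paper's implementation: the test names, historical failure rates, and the greedy selection rule are all assumptions introduced for demonstration.

```python
# Hypothetical sketch of adaptive regression-test selection: tests are ranked
# by historical failure (fault-detection) rate, and tests are added greedily
# until the estimated probability of detecting at least one regression
# reaches the confidence target. Data and names are illustrative.

def select_tests(failure_rates, confidence=0.95):
    """Return a subset of test names whose combined estimated detection
    probability meets the confidence target. failure_rates maps each test
    name to its historical per-run detection rate in [0, 1)."""
    ranked = sorted(failure_rates.items(), key=lambda kv: kv[1], reverse=True)
    selected = []
    p_miss = 1.0  # probability that no selected test detects a fault
    for name, rate in ranked:
        selected.append(name)
        p_miss *= (1.0 - rate)
        if 1.0 - p_miss >= confidence:
            break
    return selected

# Illustrative suite: 5 tests reduced to 3 at a 0.6 confidence target.
suite = {"t_artifact": 0.40, "t_endpoint": 0.30, "t_hparams": 0.20,
         "t_latency": 0.10, "t_smoke": 0.05}
print(select_tests(suite, confidence=0.6))
# → ['t_artifact', 't_endpoint', 't_hparams']
```

Under independence assumptions, the three selected tests give a combined detection probability of 1 − (0.6 × 0.7 × 0.8) ≈ 0.66, so the two weakest tests can be skipped for this run without dropping below the target.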

Published

2023-07-13

How to Cite

AI-Augmented Quality Engineering for MLOps: Intelligent Test Orchestration and Model Reliability on AWS. (2023). International Journal of Computer Technology and Electronics Communication, 6(4), 7324-7330. https://doi.org/10.15680/IJCTECE.2023.0604011
