Leveraging cloud-native microservices, containers, and serverless architectures for artificial intelligence pipelines

Authors

Phanish Lakkarasu
Senior Site Reliability Engineer, Qualys, Foster City, CA 94404 USA

Synopsis

AI pipeline automation can be described as orchestrating the machine learning or deep learning life cycle with a software suite. These suites range from popular open-source tools such as Kubeflow and MLflow, to commercial services from platform providers such as AWS SageMaker and Azure ML, to something in between: a solution built on top of general-purpose orchestration tools such as Apache Airflow or Google Cloud Workflows. AI pipelines provide benefits such as organized code, flexible workflows, and easy reproducibility. In machine learning, being able to reproduce your work (and have someone else do the same) is crucial for both debugging and research.
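
As a quick illustration of what such orchestration looks like in practice, the sketch below defines a three-step training pipeline as an Apache Airflow DAG using the TaskFlow API (Airflow 2.4 or later is assumed; the task names, storage paths, and train/evaluate logic are hypothetical placeholders, not taken from any specific project):

```python
# A sketch of a three-step training pipeline as an Airflow DAG (TaskFlow API).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def ml_training_pipeline():
    @task
    def extract_features() -> str:
        # Pull raw data and write engineered features to shared storage.
        return "s3://example-bucket/features/latest.parquet"  # hypothetical path

    @task
    def train_model(features_path: str) -> str:
        # Fit a model on the feature set and persist the artifact.
        print(f"training on {features_path}")
        return "s3://example-bucket/models/candidate"  # hypothetical path

    @task
    def evaluate(model_path: str) -> None:
        # Score the candidate against a held-out set before any promotion.
        print(f"evaluating {model_path}")

    evaluate(train_model(extract_features()))


ml_training_pipeline()
```

Passing each task's return value to the next lets Airflow infer the dependency graph, which is precisely the "organized code, flexible workflows" benefit described above: every run is scheduled, logged, and repeatable.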

AI pipelines can also refer to the APIs of specific cloud-based AI services such as Google Cloud Vision or Azure's various Cognitive Services. The more granular the services on offer, the easier life becomes for most data scientists and engineers. Tasks such as image classification, object detection, image segmentation, optical character recognition, and natural language processing can be accomplished with a single API call in seconds. Often, the practical solution is to call the relevant service and use a small amount of data to build specialized models, which can then be combined with the data strategies of traditional machine learning to achieve higher-quality predictions (Lakshman & Malik, 2010; Bernstein, 2014; Breck et al., 2017).
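
For example, a single-call image classification request against the Google Cloud Vision API might look like the sketch below (it assumes the google-cloud-vision client library is installed and credentials are configured, e.g. via GOOGLE_APPLICATION_CREDENTIALS; the image file name is hypothetical):

```python
# A sketch of single-call image classification with Google Cloud Vision.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # picks up GOOGLE_APPLICATION_CREDENTIALS

with open("photo.jpg", "rb") as f:  # hypothetical local image
    image = vision.Image(content=f.read())

# One API call returns ranked labels for the whole image.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 3))
```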

All models referred to in this chapter are stored in git repositories. Machine learning projects require team members to collaborate and to refer back to the models that generated high-quality predictions. Teams need the project-level controls that come with treating models as code, without the bottlenecks of hand-rolled tooling; model management systems provide this clear project structure and organization. MLOps and its associated processes are the answer to shortening the timeline between idea and implementation, creating annotated models, and ensuring reproducibility.

In recent years there have been revolutionary advances in cloud technology, allowing teams to create portable, dynamic, lightweight, and optimized applications without the headaches of traditional deployment and operations. APaaS leverages cloud-native microservices, containers, and serverless architectures to enable agile, industrial-strength, enterprise-grade AI/ML pipelines for intelligent applications. Business units can supplement the core AI/ML competencies of central teams through these self-service enterprise pipelines, eliminating most enterprise pain points while allowing central teams to keep their focus on innovation and expert models (Zaharia et al., 2010; Sato & Takahashi, 2020).
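
As a minimal sketch of the serverless side of such a pipeline, the handler below serves predictions from a pre-trained model in the style of an AWS Lambda function behind an HTTP endpoint (the model path and request payload shape are hypothetical, not taken from the chapter):

```python
# A sketch of a serverless inference endpoint in the style of an AWS Lambda
# handler. Model location and request shape are hypothetical.
import json
import pickle

_model = None  # cached across warm invocations of the same container


def _load_model():
    global _model
    if _model is None:
        # The model file is assumed to be bundled with the deployment package.
        with open("/opt/ml/model.pkl", "rb") as f:  # hypothetical path
            _model = pickle.load(f)
    return _model


def handler(event, context):
    # Expect an HTTP-style event: {"body": "{\"features\": [...]}"}
    features = json.loads(event["body"])["features"]
    prediction = _load_model().predict([features]).tolist()[0]
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```

Caching the model in a module-level variable is the usual way to amortize load time across warm invocations of the same container, which is what makes this pattern viable for low-latency, pay-per-request inference.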

Published

6 June 2025

How to Cite

Lakkarasu, P. (2025). Leveraging cloud-native microservices, containers, and serverless architectures for artificial intelligence pipelines. In Designing Scalable and Intelligent Cloud Architectures: An End-to-End Guide to AI Driven Platforms, MLOps Pipelines, and Data Engineering for Digital Transformation (pp. 137-146). Deep Science Publishing. https://doi.org/10.70593/978-93-49910-08-9_11