Artificial Intelligence (AI) is one of the fastest-growing and most popular technologies in the world today. To give you an idea, the number of businesses using AI grew 270% over the past six years, and the global AI market is expected to reach $267 billion by 2027. Many of the applications we use in our daily life are powered by AI, such as Alexa, Spotify, Netflix, and Siri. Each AI application is developed to perform specific tasks, such as facial recognition, autonomous driving, recommendation, health monitoring, and speech modeling.
An AI application is normally composed of several components of intelligence, namely (i) learning, (ii) reasoning, (iii) problem-solving, (iv) perception, and (v) language understanding. To make these components easier to implement, the separation-of-concerns design principle should be applied, making the application more modular. If this principle is not applied, the application ends up being built as a monolith.
In a monolithic approach, a change to any single component requires rebuilding, retesting, and redeploying the entire application. Repeating this for every component change increases the overall cost of development and deployment. To minimize this cost, an AI application can instead be split into a group of smaller services, named microservices. And that is what this article is about: how can AI applications benefit from microservices?
So, as I was saying, maintaining an AI application can be very costly depending on how it was built. That is why I believe the best architecture for an AI application is the microservices architecture. Microservices overcome the challenges monolithic applications face as they grow in size and complexity. The basic idea is to develop the application as a set of small, independent services that can be written in different programming languages (giving the team more flexibility) and that can be managed and updated independently. Additionally, microservices enable quick and reliable deployment of large and complex applications.
So, let's take an AI application for healthcare as an example. Normally, it will need data processing, data aggregation, and data transformation as initial steps. If the application is built with microservices, each of those tasks can be a service that is developed, deployed, and maintained individually.
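A minimal sketch of that pipeline, with each step written as an independent function. In a real microservices deployment, each function would live in its own service (for example, behind its own HTTP endpoint) so it can be developed, deployed, and scaled separately. All names and thresholds here are illustrative, not from any real framework.

```python
def process(records):
    """Data-processing service: drop records with missing readings."""
    return [r for r in records if r.get("heart_rate") is not None]

def aggregate(records):
    """Data-aggregation service: average heart rate per patient."""
    totals = {}
    for r in records:
        totals.setdefault(r["patient_id"], []).append(r["heart_rate"])
    return {pid: sum(v) / len(v) for pid, v in totals.items()}

def transform(averages):
    """Data-transformation service: flag patients above a threshold."""
    return {pid: ("high" if avg > 100 else "normal")
            for pid, avg in averages.items()}

records = [
    {"patient_id": "a", "heart_rate": 110},
    {"patient_id": "a", "heart_rate": 104},
    {"patient_id": "b", "heart_rate": 72},
    {"patient_id": "b", "heart_rate": None},
]
flags = transform(aggregate(process(records)))
print(flags)  # {'a': 'high', 'b': 'normal'}
```

Because each step only depends on the previous step's output, the team that owns `aggregate` can rewrite or redeploy it without touching the other two services.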
As suggested by Naveen Joshi, there are three main advantages to deploying AI applications as microservices.
Improving scalability: It is very challenging to create an AI model that scales. Additionally, AI applications are normally built with a mix of programming languages and data science tools. Microservices can help address both problems! Each service can be built in the language best suited to it, and new services can easily be added, which improves scalability.
Increasing deployment speed: If a company needs to make a small change to the model, with microservices it doesn't need to update and redeploy the entire application; only the part that changed has to be redeployed. Besides increasing deployment speed, the entire model doesn't need to be retested, only the service that changed :-)
Improving fault isolation: If the application is developed as a monolith and the model fails, the entire application fails. Microservices improve fault isolation: the failure of one service doesn't affect the rest of the application.
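The fault-isolation point can be shown with a toy example (the service names are made up for illustration): each call to a downstream service is wrapped so that one failing service degrades gracefully instead of taking down the whole application.

```python
def recommendation_service(user_id):
    # Simulate a crashed AI model service.
    raise RuntimeError("model service is down")

def catalog_service():
    return ["item-1", "item-2", "item-3"]

def render_home_page(user_id):
    try:
        recs = recommendation_service(user_id)
    except Exception:
        recs = []  # fall back: the page still renders, just without recommendations
    return {"catalog": catalog_service(), "recommendations": recs}

page = render_home_page("user-42")
print(page["catalog"])          # ['item-1', 'item-2', 'item-3']
print(page["recommendations"])  # []
```

In a monolith, that same model failure would typically crash the whole process; here it only empties one section of the page.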
Sounds good to use microservices to deploy AI applications, eh? So, now you might be wondering where to start developing your AI application with microservices. The good news is that there are a lot of frameworks that can help with that. For example, Kousiouris et al. have created a microservice-based framework for integrating IoT management platforms and AI services for supply chain management. Additionally, libraries such as NLTK, scikit-learn, and pandas can help you build your own microservices.
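To make this concrete, here is a minimal prediction microservice built with only the Python standard library (`http.server`). The hard-coded scoring rule is a stand-in for a real model; in practice you might train one with scikit-learn and serve it with a framework such as Flask or FastAPI. Everything below (the endpoint, the `features` field, the threshold) is an assumption for the sketch.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Placeholder "model": flag risk when the average feature value exceeds 0.5.
    score = sum(features) / len(features)
    return {"risk": "high" if score > 0.5 else "low"}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.dumps(predict(json.loads(body)["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for any free port; a daemon thread runs the server.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"features": [0.9, 0.8]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
print(result)  # {'risk': 'high'}
server.shutdown()
```

Because the model sits behind its own HTTP interface, retraining or swapping it out only means redeploying this one service.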
As we saw, microservices can help develop and deploy AI applications. What about the other way around? Can AI help microservices? The answer is yes! With the increasing number of services and their growing complexity, it can be hard for humans to optimize and manage microservices to their full potential, and AI can help with that. To give you an idea of the problem, even a simple application with a few services can have a huge number of permutations of resources and parameters, making it difficult for humans to choose the best configuration. That's where AI comes in! AI-driven platforms can analyze millions of configuration combinations to identify the best set of parameters and resources in a timely fashion.
One idea to address this problem in practice is to incorporate AI into microservices frameworks. For example, the TARS microservices framework could be powered by AI so that, if machine resources run low, TARS could smoothly acquire more machines and redistribute the workload. If you have other ideas on how to incorporate AI into microservices and would like to discuss them or contribute to this type of innovation, feel free to star and fork the TARS GitHub repository.
TL;DR: AI applications can be developed and deployed as microservices to minimize maintenance costs and scalability problems. In the other direction, AI can help microservices by finding the best combination of parameters and resources.
About the author:
Isabella Ferreira is an Ambassador at TARS Foundation, a cloud-native open-source microservice foundation under the Linux Foundation.
Kousiouris, George, et al. "A microservice-based framework for integrating IoT management platforms, semantic and AI services for supply chain management." ICT Express 5.2 (2019): 141-145.