Seamlessly Build, Ship, and Scale AI/ML Applications

Published in Intel Tech · Nov 16, 2023

Build, deploy, and scale AI/ML applications.


Author: Neethu Elizabeth Simon, AI/ML Senior Software Engineer, Intel

Widespread adoption of machine learning (ML) requires a systematic and efficient approach to building AI/ML pipelines. New tools are needed to streamline the process. Two new open source projects — AI Connect for Scientific Data (AiCSD)*, developed by Intel, and BentoML* — make it easy to build, deploy, and scale AI/ML applications virtually anywhere, helping scientific researchers unlock groundbreaking use cases in chemistry, physics, biology, environmental science, and beyond.

In this post, we’ll provide an overview of AiCSD, explain how BentoML makes it possible to integrate any AI/ML pipeline into AiCSD by simplifying model packaging, and share a brief tutorial for developing and deploying your own pipeline.

The full AI/ML pipeline

Before the tutorial, let’s briefly look at the development phases of an AI/ML pipeline. A complete pipeline must encompass all stages of the cyclic solution-development process: data collection, annotation, training, and inference.

A complete AI/ML solution development cycle

The inference stage is where trained models are put to practical use, producing meaningful predictions in real time from new data using models trained on previously collected data. Building these real-time AI/ML pipelines is complicated: data workloads, especially computer vision data, are large and complex and undergo multiple stages of processing, so the system must be architected, built, and deployed with several considerations in mind.

Computer vision inference undergoes multiple stages of processing.

Streamline scientific data analysis with AiCSD

In cell analytics and cell therapy, the process of growing, analyzing, and imaging cells involves toxic reagents and requires an intricate process for obtaining basic insights into cell quality and health. Intel recently released AiCSD, an open source solution that takes the manual work out of this process. AiCSD ingests data from scientific instruments like microscopes, automatically connects the data to AI/ML models, and runs edge inferencing, allowing laboratory users to easily and safely analyze data. With BentoML as its foundation, AiCSD can integrate virtually any new AI/ML pipeline.

BentoML provides an easy way to containerize AI/ML pipelines

At the heart of AiCSD is BentoML, an open source tool that simplifies AI/ML pipeline deployment and management for models running at the edge. Bentos are Japanese lunch boxes with separate compartments for appetizers, entrées, and desserts. Using this model as inspiration, BentoML provides development boxes for AI/ML pipelines, letting users package source code, models, Docker* images, model artifacts, and dependencies in a standardized format. By enabling developers to package model components as Docker services, BentoML makes it easy to build, deploy, and scale different types of models at the edge.

By providing standardized model packaging, BentoML allows users to easily package and deploy ML models virtually anywhere. (Source: Ahmed Besbes in Towards Data Science.)

AiCSD uses OpenVINO™ Model Server to support model versioning, monitoring, and logging at the edge. Leveraging the portability of Docker images, AiCSD lets you deploy and manage AI/ML pipelines through any cloud provider or on Kubernetes* clusters via Yatai*, a deployment operator in BentoML that takes its name from the Japanese word for mobile carts that sell food.

How to create BentoML services and integrate them with the AiCSD project

The basic workflow involves four key steps:

· Build your ML pipeline

· Create a BentoML service

· Deploy your BentoML container

· Test the APIs through Swagger UI*

Step one: Build your ML pipeline

To avoid package incompatibility issues, we highly recommend using a conda* environment. Create and activate one by running:

conda create -n bentoml_env
conda activate bentoml_env

Define your models and save them locally or in the BentoML local Model Store (for example, bentoml.diffusers.import_model()). This Model Store serves as a management hub for all model tracking and versioning. The AiCSD project uses OpenVINO Model Server to manage and access the models through HTTP or gRPC inference calls.
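
As a minimal sketch of saving to the local Model Store (the scikit-learn framework and the model name here are illustrative, not part of AiCSD):

import bentoml
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Illustrative only: train a small model so there is something to save
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

# Save the model to the local BentoML Model Store; BentoML assigns a
# versioned tag (e.g., iris_classifier:<hash>) for tracking
saved_model = bentoml.sklearn.save_model("iris_classifier", model)
print(saved_model.tag)

A model saved this way can later be retrieved in a service with bentoml.sklearn.get("iris_classifier:latest").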

Step two: Create a BentoML service

Create a service.py file to wrap your AI/ML pipeline, including the model and serving logic. Here is a sample adapted from AiCSD, with the imports and Service definition shown for completeness:

import json
import bentoml
from bentoml.io import Text
import image_classification  # AiCSD module that issues the gRPC inference calls

svc = bentoml.Service("image_classification_service")  # service name is illustrative

@svc.api(input=Text(), output=Text())
def classify(text: str) -> str:
    data = json.loads(text)
    print("Decoded JSON message received by the BentoML service:", data)
    # Call image classification on the OpenVINO Model Server via gRPC
    result = image_classification.classify(grpc_address=data["GatewayIP"], grpc_port=9001, input_name="0", output_name="1463", images_list="image_classification/input_images.txt")
    print("JSON result returned from the pipeline:", result)
    return result

The service expects text for both its input and its output. The classify() API receives text in JSON format and decodes it to obtain the parameters used to make gRPC inference calls on port 9001 to the OpenVINO Model Server. The inference results are then sent back as the service output.

This service can be used to test model serving and get predictions through HTTP or gRPC requests via the Swagger UI.

Build a bento to package the model and the services through a configuration YAML file containing all relevant build options, such as service port, package dependencies, and requirements file.

An example bento build via a configuration YAML file.
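
As a rough sketch of what such a bentofile.yaml might contain (the entry point, file globs, and requirements path below are illustrative, not AiCSD’s exact configuration):

service: "service:svc"  # entry point: <module>:<Service instance>
include:
  - "*.py"  # source files to package into the bento
python:
  requirements_txt: "./requirements.txt"  # pinned package dependencies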

For example, to build the bento for image classification, AiCSD uses the bentoml build command.
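
Assuming the build options above live in a bentofile.yaml in the working directory, the build is a single command (the -f flag can be omitted when the file uses the default name):

bentoml build -f ./bentofile.yaml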

In the following example of a BentoML build command output, note the bento tag <bento_image_name>:<bento_image_tag>.

Example tags used in a BentoML build command output.

Step three: Deploy your BentoML container

You can deploy your BentoML service in one of two ways.

· To deploy locally, use bentoml serve.

· To deploy it as a container, run:

bentoml containerize <bento_image_name>:<bento_image_tag>
docker run -it --rm -p 3000:3000 <bento_image_name>:<bento_image_tag> serve

Step four: Test the APIs via Swagger UI

Once deployed, verify and test the service by opening the Swagger UI at http://0.0.0.0:3000.

Sample service via Swagger UI.
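
You can also exercise the endpoint from the command line; here is a sketch assuming the classify service above and a placeholder gateway address:

curl -X POST http://0.0.0.0:3000/classify \
  -H "Content-Type: text/plain" \
  -d '{"GatewayIP": "192.168.1.10"}'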

If you deployed the bento as a service container, you can verify it’s running via Portainer* or with the docker ps command.

Get started with AiCSD and BentoML

With AiCSD, research scientists can analyze data faster and expedite discoveries. BentoML plays a key role in AiCSD, helping to streamline AI/ML pipeline development, deployment, and management. Thanks to its flexibility and ease of use, we’re also exploring BentoML to help build complex AI/ML pipelines for the Intel® open source automated self-checkout reference implementation. Explore the resources below to learn what efficiencies AiCSD and BentoML can bring to your projects.

· Get started with AiCSD:

o Detailed examples of developing with BentoML via the AiCSD project

o Developer article

· Start using BentoML:

o BentoML GitHub repo

o Detailed tutorial

· Learn more about Intel automated self-checkout solutions:

o GitHub repo

o Developer article

About the author

Neethu Elizabeth Simon, AI/ML Senior Software Engineer, Intel

Neethu Elizabeth Simon has vast industrial experience in building smart end-to-end ML solutions. As a member of the Network & Edge Group at Intel, she’s currently focused on building containerized microservices for computer vision-based AI/ML solutions in retail and biopharma/healthcare. Neethu received the 2020 Society of Women Engineers Distinguished New Engineer Award for her technical contributions and her advocacy for STEM education and diversity through educational outreach. She holds a master’s in computer science from Arizona State University and is passionate about sharing her learnings with others. Connect with her on LinkedIn and GitHub.
