Gen AI on OpenShift Series — Episode 2

Bobby Johns
Sep 14, 2023


Building the Solid Foundations for AI Success

Authors: Bobby Johns, David Kypuros, and Jason Nagin

Series Index

In this series, we offer practical advice on implementing Generative AI (GenAI) solutions using containers (Podman) and a container orchestration platform (OpenShift). The series will cover:

  1. Introduction to Generative AI and Containerization with Podman and OpenShift
  2. Preparing Your AI Lab Environment with Podman and OpenShift
  3. Building Generative AI Containers with Podman
  4. Setting Up a Vector Database with Podman and OpenShift
  5. Leveraging LangChain and CamelK for Business Value
  6. Business Benefits of the AI Lab Environment

Throughout this six-part series, we will incorporate practical examples, code snippets, and best practices. We will also emphasize the importance of a well-structured AI lab environment in achieving business objectives, showcasing how each component contributes to these goals.

Episode 2: Preparing Your AI Lab Environment with Podman and OpenShift

Welcome to Episode 2 of our GenAI on OpenShift Series, where we embark on the journey to create a robust AI lab environment. In this episode, we’ll dive deep into the practical aspects of setting up your AI lab for excellence. Whether you’re a Linux pro or new to the world of AI, this article will guide you through the essential steps to ensure your AI initiatives thrive.

Linux SysAdmin Proficiency

Before we dive into the technical details, let’s address the importance of Linux sysadmin proficiency. If you’re already comfortable with configuring a Linux server and handling basic Linux packages, you’re in for an exciting ride. However, if you’re new to Linux, consider partnering with a collaborator or seeking guidance from experienced individuals. Our team members David, Jason, and I did exactly that when we began our GenAI journey collaboratively.

Choosing the Right Hardware and Software Components

Think of your AI lab as a workshop where you assemble your tools and build your projects. In this section, we’ll help you select the hardware and software components that will empower your AI endeavors. From CPUs and GPUs to memory and storage, we’ll guide you through making informed decisions. We’ll also explore the software side, including choosing an operating system, libraries, and frameworks to ensure a seamless AI development experience.

Unlocking the Power of Podman and OpenShift

Meet the dynamic duo: Podman and OpenShift, your allies in containerized AI. Podman brings efficiency and security, enabling you to encapsulate your AI workloads. OpenShift, the orchestration maestro, ensures container harmony, scalability, and manageability. We’ll walk you through the installation and configuration of Podman and OpenShift in your lab environment, providing a strong foundation for AI experimentation and future production workloads.

The Role of a Robust Lab Environment in Business Success

Why invest in a well-structured lab environment? It’s the engine room where your AI initiatives gain traction, where hypotheses are tested and refined. Whether you aim to boost revenue, reduce operational costs, or minimize errors, your lab environment is where these objectives take shape. Throughout this article, we’ll emphasize that a well-structured lab isn’t just a technical necessity; it’s a strategic asset — a launchpad for achieving your business goals armed with the capabilities of AI.

So, gear up for a practical journey into the heart of AI implementation. By the end of this article, you’ll have the foundational knowledge to set up a secure, scalable lab environment with Podman and OpenShift, laying the groundwork for achieving your business goals through the power of AI. Let’s dive in!

If you’re wondering “Why Generative AI?”, “Why Podman?”, or “Why OpenShift?”, please refer to Episode 1 found here.

Note: We will be providing our very opinionated choices throughout this series. If you are confident in your Linux skills and want to make other choices, feel free, but do so at your own peril: you will have to adjust some commands and examples to match your alternatives. If you are unsure, we suggest using our examples and recipes first, and improvising your own once you have something actually working.

What kind of hardware do I need?

Selecting initial lab hardware is simpler than you might think. You probably already have a machine or two at home or in an office environment. Don’t use one that you need for everyday life (email, company intranet, office admin, and such) as the lab server. We suggest starting small and building out your lab systems over time. Here is a suggested minimum build:

- A recent-vintage CPU, either Intel or AMD
- 64GB RAM
- 1–2 TB Internal SSD
- Simple video card for initial OS install
- USB Drive to load the Install ISO image to/from
(Unless you’re expecting to build LLM models, you don’t have to have a GPU yet)

You can likely get a small tower with a power supply, motherboard, CPU, RAM, storage, and a simple video card for the price of 50 double cheeseburgers with bacon (your local prices may vary). Considering what this lab machine will enable you to do, that is pretty inexpensive. We prefer to build out the hardware for Linux systems ourselves, but you could also buy the equivalent pre-built and ready for an OS install (for a little more expense) if assembling hardware is not your preference.

It may seem like you need a GPU up front, but you really don’t. None of the examples in this series require a GPU. Your AI efforts might benefit from a GPU, but it’s not required.

What OS should I choose?

We’re open-source enthusiasts, and we recommend Fedora for its spirit of collaboration and innovation. However, you can choose any Linux distribution you’re comfortable with. Our examples are based on Fedora 38, but they should be adaptable to other distributions with minor adjustments.


Download Fedora from the Fedora Project website and follow the install instructions. We suggest the ISO installation method using the Workstation edition.

  • Download an ISO from the Fedora website
  • Install the downloaded ISO to a USB drive using the Fedora Media Writer for your current OS (Mac, Windows, Linux).
  • Boot the machine and configure your UEFI (or legacy BIOS) firmware to boot from the USB drive you created above
  • Follow the prompts to install and configure Fedora on your hardware

Once you have installed Fedora and rebooted so you are running from the Fedora system (not the USB drive), feel free to explore Fedora.

Home Networking Considerations

If you’re setting up your lab at home, consider isolating it from your primary network for security and stability. We suggest creating a separate virtual network for your lab to prevent any conflicts or disruptions. Plus, if something breaks in the lab environment, it shouldn’t affect your home network. Your family will thank you.
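Once KVM and libvirt are installed (covered later in this article), one way to realize this separation is a dedicated NATed libvirt network that keeps lab VMs on their own subnet, off your primary LAN. Here is a sketch of such a network definition; the name, bridge, and subnet are our examples, not requirements:

```xml
<!-- lab-net.xml: a hypothetical NATed lab subnet, kept off the primary LAN -->
<network>
  <name>lablan</name>
  <forward mode='nat'/>
  <bridge name='virbr-lab'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.10' end='192.168.150.100'/>
    </dhcp>
  </ip>
</network>
```

You would load it with `virsh net-define lab-net.xml` and activate it with `virsh net-start lablan`.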

You have a running Fedora system, now what?

Once you have Fedora up and running, it’s time to start building your lab environment. We’ll guide you through installing essential packages, including Cockpit, KVM, Podman, Python, PyCharm, Jupyter Notebooks, and WireGuard. These tools will equip you to manage your lab efficiently and support AI development.

The process here will be to install a package or set of packages, test that the installed component(s) function as expected in some way, and then rinse and repeat until all these required packages are installed and configured.

Time to install all the packages you’re going to need for the lab:

  • Cockpit — Easily manage the lab server remotely
  • KVM Virtual Machines — Easy setup for other lab components
  • Podman — Build & run containers without a daemon or orchestration
  • Python — Data Science Favorite
  • PyCharm — Python Development Favorite
  • Jupyter Notebooks — Data Science Favorite
  • Wireguard — Lab VPN access
  • We will save the OpenShift install until later
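As you work through the installs below, a small helper script can confirm that each tool actually landed on your PATH. This is our own sketch; the tool names match our choices, so adjust the list to yours:

```shell
# check_tools: print OK/MISSING for each command name given;
# the exit status is the number of missing tools.
check_tools() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "OK: $tool"
    else
      echo "MISSING: $tool"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}

# Check the stack installed in this episode.
check_tools podman python3 pip3 jupyter wg || echo "Some tools are missing; revisit the steps above."
```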

Run as just a server or as a workstation?

You can use your lab system as a headless server and access services remotely, eliminating the need for a monitor, keyboard, and mouse. Cockpit, a remote management console, will be your ally in this setup.

Cockpit

Cockpit simplifies remote management of Linux systems, including Fedora. We’ll show you how to install and enable it, allowing you to access your lab server remotely through a web interface.

Click here for more details on Cockpit

Install Cockpit:

sudo dnf install cockpit
sudo systemctl enable --now cockpit.socket

sudo firewall-cmd --add-service=cockpit
sudo firewall-cmd --add-service=cockpit --permanent

Use your user login to access the Cockpit console at https://localhost:9090/ (or substitute your lab host’s IP address for localhost). Cockpit serves the page over HTTPS with a self-signed certificate, so expect a browser warning on first visit.

Cockpit has plugins for most tasks like managing containers, managing virtual machines, networking, and even opening a terminal window in the browser.

Install Podman

Podman is a powerful containerization tool that we’ll use extensively in this series. We’ll guide you through its installation and help you verify that it’s working as expected.

sudo dnf install podman

Once it’s installed, run the ubiquitous “hello-world” image to ensure everything is working:

podman pull hello-world 
podman run hello-world

For more details on using Podman look here. It’s an amazing tool and we will be using it extensively in the coming episodes. For now, it’s installed and functional.

If you would like a simple exercise on creating an app using Podman, look here for an easy Node.js app.
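If you’d like to see the basic shape of a container build before we dig in during Episode 3, here is a minimal Containerfile sketch. The base image, package, and file names are illustrative examples, not part of the series recipes:

```dockerfile
# Containerfile: a minimal Python app image (all names are examples)
FROM registry.fedoraproject.org/fedora-minimal:38
RUN microdnf install -y python3 && microdnf clean all
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

You would build and run it with `podman build -t myapp .` followed by `podman run --rm myapp`.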

Install/configure KVM — Kernel-based Virtual Machines

KVM is essential for managing virtual machines in your lab. While it’s likely already installed, we’ll explain the key components and how to manage them, either from the command line or through Cockpit.

You can manage KVM from the command line or from the Cockpit console if you have the Machines plugin for Cockpit enabled. We would suggest looking at the Cockpit console; it’s quite handy if VMs are not your daily concern.

QEMU-KVM: KVM is the Linux kernel’s virtualization module, and QEMU provides the userspace machine emulation that KVM accelerates. Install both the QEMU-KVM and libvirt packages:

sudo dnf install qemu-kvm libvirt

Libvirt: Libvirt is a library and management layer for virtualization technologies such as KVM, providing a higher-level interface for interacting with virtual machines. (The previous command already pulled it in, so dnf will simply report that there is nothing to do.)

sudo dnf install libvirt

Libvirt Client Tools: These tools provide a command-line interface for managing virtual machines through Libvirt.

sudo dnf install virt-install virt-viewer

Virtualization Tools: Install additional virtualization tools and dependencies.

sudo dnf install virt-manager

Kernel Modules: Ensure that the necessary kernel modules for KVM are loaded. In most cases, they should already be loaded by default. You can check this by running:

lsmod | grep kvm

If you don’t see any output, KVM may not be enabled in your system’s BIOS or UEFI settings.

Enable and Start Services: You need to enable and start the libvirtd service to manage virtualization:

sudo systemctl enable libvirtd
sudo systemctl start libvirtd

After installing these packages and starting the necessary services, you should have a functional KVM virtualization environment on your Fedora system. You can use tools like virt-manager for the graphical management of virtual machines or virsh for command-line management. Make sure your hardware supports virtualization, and virtualization is enabled in your system's BIOS or UEFI settings for optimal performance.
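As a concrete example of command-line management, here is how you might compose a `virt-install` invocation for a small test VM. The sketch builds and prints the command rather than executing it, so you can review it first; the VM name, sizes, and ISO path are examples:

```shell
# Compose a virt-install command for a small lab VM (review it, then run it yourself).
VM_NAME=lab-vm1
ISO=/var/lib/libvirt/images/Fedora-Workstation-38.iso   # example path
CMD="virt-install --name $VM_NAME --memory 4096 --vcpus 2 \
--disk size=40 --cdrom $ISO --os-variant fedora38"
echo "$CMD"
```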

Install Python

Python is a cornerstone of data science and AI. We’ll check if it’s installed and guide you through the installation process, including pip, to ensure you have the necessary tools for Python development.

1. Check Python 3 Installation:

First, check if Python 3 is already installed on your Fedora system by running the following command in a terminal:

python3 --version

If Python 3 is installed, it will display the version number. If not, you’ll need to install it.

2. Install Python 3:

sudo dnf install python3

This command will download and install Python 3 on your system.

3. Check pip Installation:

You can check if `pip` is already installed by running:

pip3 --version

If it’s installed, it will display the version number. If not, you’ll need to install it.

4. Install pip:

To install `pip` for Python 3, you can use the `python3-pip` package:

sudo dnf install python3-pip

This command will install `pip` for Python 3.

5. Verify Python and pip:

After the installation is complete, you can verify that Python 3 and pip are installed correctly:

python3 --version
pip3 --version

These commands should display the version numbers of Python 3 and pip3, respectively.

Now you have Python 3 and pip installed on your Fedora system. You can use pip to install Python packages and libraries for your development or scripting needs.
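With Python 3 and pip in place, it’s good practice to keep lab experiments in an isolated virtual environment instead of installing packages system-wide. A minimal sketch, with an example directory name:

```shell
# Create and use an isolated Python environment for lab work.
python3 -m venv labenv          # creates ./labenv with its own python and pip
. labenv/bin/activate           # activate it in the current shell
python -m pip --version         # pip now refers to the environment's copy
deactivate                      # leave the environment when done
```

Packages you `pip install` while the environment is active stay inside `labenv`, keeping your system Python clean.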

Install PyCharm Community Edition (CE)

PyCharm is a popular Python development environment. We’ll show you how to install it on Fedora, enabling you to develop Python-based AI applications effectively.

Yes, you can install PyCharm on Fedora using the `dnf` package manager. PyCharm CE is not in the default Fedora repositories, but a community-maintained Copr repository packages it, so you can install it easily and keep it up to date. You can also install PyCharm CE on your local development laptop or workstation.

Note: PyCharm is an IDE and you need to run it as a workstation app locally or on a VNC session.

Here’s how to install PyCharm CE on Fedora:

1. Open a Terminal:

Open a terminal on your Fedora system. You can do this from the Cockpit console, from the workstation by opening a terminal from the Activities menu, or by using SSH to connect to the machine’s IP address.

2. Add the PyCharm Repository and Update:

Enable the Copr repository that packages PyCharm, then refresh the package list:

sudo dnf copr enable phracek/PyCharm
sudo dnf update

3. Install PyCharm:

To install PyCharm Community Edition, use the following command:

sudo dnf install pycharm-community

If you prefer the Professional Edition, you can install it with the following command:

sudo dnf install pycharm-professional

4. Launch PyCharm:

You can now launch PyCharm by searching for it in your applications menu or by running it from the command line using the `pycharm-community` or `pycharm-professional` command, depending on which edition you installed.

By using this method, you’ll have an officially supported and up-to-date version of PyCharm installed on your Fedora system. It will also receive updates along with your system’s package updates.

Install Jupyter Notebook (or Jupyter Lab)

Jupyter Notebooks provide an interactive environment for data science. We’ll guide you through the installation of Jupyter Lab and Notebook, allowing you to perform data analysis and develop AI models.

Jupyter Lab or Notebook: which is better?

Jupyter Lab and Jupyter Notebook are both interactive computing environments, but they differ in user interface and customization:

  • Jupyter Notebook offers a classic, tabbed interface with less customization; JupyterLab provides a modular, flexible interface closer to a complete IDE experience.
  • JupyterLab integrates a file browser into the interface; Jupyter Notebook uses a separate browser view.
  • JupyterLab offers a richer code console experience with multiple consoles; Jupyter Notebook’s console is more basic.
  • JupyterLab has a more extensive extension ecosystem for diverse workflow enhancements; Jupyter Notebook’s extension support is more limited.
  • Both support code cells for writing and executing code, with similar core functionality.

If you’re just getting started, we’d suggest Jupyter Notebook to keep it simple.

You can install JupyterLab and Jupyter Notebooks on Fedora using the `pip` package manager, which is the recommended way to install these tools. Here are the steps to install JupyterLab and Jupyter Notebooks:

1. Install JupyterLab and Jupyter Notebooks:

To install both JupyterLab and the classic Notebook, use pip, the Python package manager:

pip install jupyterlab notebook

This command installs JupyterLab, Jupyter Notebook, and their dependencies.

2. Start Jupyter Lab or Jupyter Notebook:

Once the installation is complete, you can start JupyterLab or Jupyter Notebook by running the following command in your terminal:

For Jupyter Lab:

jupyter-lab

For Jupyter Notebook:

jupyter notebook

Running either of these commands will start the Jupyter server, and it will open a web browser with the JupyterLab or Jupyter Notebook interface.

3. Access Jupyter in Your Web Browser:

The previous step will open a web browser with the Jupyter Lab or Jupyter Notebook interface. You can create and manage notebooks, run code, and perform data analysis within this web-based environment.

4. Stop the Jupyter Server:

To stop the Jupyter server, you can press `Ctrl + C` in your terminal. This will shut down the server and close the JupyterLab or Jupyter Notebook interface in your web browser.

Additional Notes:

  • You can create new Jupyter notebooks and start working on them by clicking the “New” button in the JupyterLab or Jupyter Notebook interface.
  • You can manage your Python packages and environments within Jupyter Notebook. For example, you can use the `pip install` command within a notebook cell to install Python packages.
  • JupyterLab is the next-generation user interface for Project Jupyter and offers a more versatile and powerful environment compared to the classic Jupyter Notebook interface.

By following these steps, you will be able to install and use JupyterLab or Jupyter Notebooks on your Fedora system for data analysis, scientific computing, and other interactive Python tasks.
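If your lab machine runs headless, Jupyter should not try to open a local browser. Here is the launch command we would use in that case, printed for review rather than executed; the port is an example, and the flags are standard Jupyter options:

```shell
# Compose a headless JupyterLab launch command (run it on the lab box yourself).
# --no-browser: don't open a local browser; --ip=0.0.0.0: listen on all interfaces.
JUPYTER_CMD="jupyter lab --no-browser --ip=0.0.0.0 --port=8888"
echo "$JUPYTER_CMD"
```

You would then browse to the lab host’s IP on port 8888 and paste the access token that Jupyter prints at startup.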

Install WireGuard

WireGuard offers secure VPN access to your lab from remote locations. We’ll explain how to set it up here, but you can defer the VPN until later in the build-out of your lab. If you are not comfortable setting up your own VPN, you can use Cloudflare as a managed service instead.

WireGuard offers several advantages:

Simplicity: WireGuard is characterized by its simplicity and minimalistic design, making it easier to configure and maintain.

Security: It employs modern cryptographic techniques, ensuring strong encryption and robust security.

Efficiency: WireGuard is known for its exceptional performance and low overhead, making it suitable for both low-powered devices and high-speed networks.

Cross-platform: It is available on various operating systems, enabling secure communication across different platforms.

Ease of use: WireGuard can be set up with relatively simple configuration files, making it accessible to users who are new to VPNs. To use WireGuard, you typically install the software, create configuration files, and establish secure connections for private and encrypted communication.

To install and configure WireGuard on a Linux-based system, you’ll typically follow these steps. We’ll provide a general outline, but keep in mind that specific instructions may vary depending on your Linux distribution.

1. Check Kernel Support:

First, check if your Linux kernel supports WireGuard. Many modern kernels do, but you may need to install the WireGuard kernel module if it’s not already present. To check for kernel support:

ls /lib/modules/$(uname -r)/kernel/net/wireguard

If the directory is listed, the WireGuard module is present. (On recent kernels, 5.6 and later, WireGuard is built in, so this should normally succeed.) Otherwise, you may need to install it.

2. Install WireGuard:

Use Fedora’s package manager, dnf, to install the WireGuard userspace tools (the package name may vary on other distributions):

sudo dnf install wireguard-tools

3. Generate Key Pairs:

Generate a pair of public and private keys for your WireGuard server and clients:

wg genkey | tee privatekey1 | wg pubkey > publickey1
wg genkey | tee privatekey2 | wg pubkey > publickey2

Repeat this step for each device (server and clients).

4. Configure WireGuard:

Create a configuration file for WireGuard, typically located in /etc/wireguard/wg0.conf (You can use any filename, but conventionally, wg0.conf is used for the first interface):

[Interface]
PrivateKey = (server_private_key)
Address = (server_IPv4_address)/(subnet_prefix_length)
ListenPort = 51820

[Peer]
PublicKey = (client_public_key)
AllowedIPs = (client_IPv4_address)/(subnet_prefix_length)

For the server, replace (server_private_key) with the server's private key, (server_IPv4_address) with the server's IPv4 address, and (subnet_prefix_length) with the desired subnet prefix length (e.g., 24 for a typical /24 subnet).

For each client, create a similar configuration section, replacing (client_public_key) and (client_IPv4_address) with the client's public key and IPv4 address.

5. Enable IP Forwarding:

Enable IP forwarding on your server to allow packets to flow between WireGuard and the rest of your network:

sudo sysctl -w net.ipv4.ip_forward=1

Make this change permanent by adding it to your /etc/sysctl.conf or /etc/sysctl.d configuration files.
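The drop-in version of that change might look like this (the filename is our example):

```
# /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1
```

You can apply it without rebooting via `sudo sysctl --system`.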

6. Start and Enable WireGuard:

Start and enable the WireGuard service on your server:

sudo systemctl enable wg-quick@wg0
sudo systemctl start wg-quick@wg0

7. Firewall Configuration:

Make sure your server’s firewall allows traffic on the WireGuard interface (wg0) and any other interfaces you intend to use. This varies depending on your firewall software (e.g., iptables, firewalld).

8. Client Configuration:

On each client, create a WireGuard configuration file similar to the server’s config. Make sure to replace (server_public_key) with the server's public key, and (client_private_key) and (client_public_key) with the client's key pair.
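Putting step 8 together, a minimal client configuration might look like the following. The placeholders mirror the server config above; the UDP port 51820 and the keepalive interval are conventional values, shown here as examples:

```
[Interface]
PrivateKey = (client_private_key)
Address = (client_IPv4_address)/(subnet_prefix_length)

[Peer]
PublicKey = (server_public_key)
Endpoint = (server_public_IP_or_hostname):51820
AllowedIPs = (lab_subnet)/(subnet_prefix_length)
PersistentKeepalive = 25
```

Endpoint tells the client where to reach the server, and PersistentKeepalive helps keep the tunnel alive through home NAT routers.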

9. Start WireGuard on Clients:

On each client, start WireGuard using the configuration file:

sudo wg-quick up /path/to/client-config.conf

10. Test the Connection:

You can now test the WireGuard connection by trying to ping devices on the remote network or access services running on the server.

The above steps provide a general overview of how to install and configure WireGuard. Be sure to consult your distribution’s documentation and WireGuard’s official documentation for specific details and troubleshooting tips relevant to your setup.

Ta-Dah!

With your hardware and software foundation in place, you’re ready to embark on your AI journey. You can start experimenting with Generative AI and learning about AI/ML. But what about scaling and testing containers on OpenShift in your lab? What about running your containers in an OpenShift production environment? Stay tuned for our next section, where we’ll explore installing OpenShift and taking your AI lab to the next level.

Installing OpenShift in the Lab

There are many ways to install and implement OpenShift, but the three smaller-scale lab approaches we would offer are:

  • OpenShift Local (formerly CRC)
  • Single Node OpenShift (SNO)
  • The Developer Sandbox for Red Hat OpenShift, hosted on the Red Hat Developers site

Of these, we would recommend trying OpenShift Local or OpenShift SNO first.

Red Hat Developers Program

If you don’t already have a free developer account on the Red Hat Developers website, you should create one. It’s an outstanding resource for developers and technologists. Some key reasons to join the developer program are:

- Access to enterprise-grade software like Red Hat Enterprise Linux and OpenShift

- Rich knowledge base and technical support

- Collaboration and engagement within the Red Hat community

- Stay updated on industry trends and innovations

- Empowerment for career growth and innovation

Why OpenShift Local?

OpenShift Local, formerly CodeReady Containers, is a lightweight, developer-centric variant of the OpenShift Container Platform tailored for local development and testing. It typically runs within a virtual machine (VM) and can even be installed on powerful Windows or Mac laptops. Unlike a full-fledged OpenShift cluster with multiple nodes, OpenShift Local consolidates the entire OpenShift environment, encompassing the control plane and worker node, into a single VM. Here are the key highlights of OpenShift Local:

  • Local Development: OpenShift Local is the go-to choice for developers aiming to experiment, develop, and test applications in a local context before deploying them to a production-ready OpenShift cluster.
  • Resource Efficiency: It efficiently utilizes system resources, making it ideal for laptops and desktops with resource constraints.
  • Developer Tools: OpenShift Local comes pre-loaded with developer tools and the OpenShift CLI (oc), enabling developers to create and manage containerized applications using familiar tools.
  • Container Orchestration: Similar to full OpenShift clusters, single-node OpenShift leverages Kubernetes as its container orchestration platform, offering features like container scheduling, scaling, and application management.
  • Kubernetes and OpenShift Features: Despite its lightweight nature, OpenShift Local supports the vast majority of Kubernetes and OpenShift features, allowing developers to replicate a similar development and testing environment to a full cluster.
  • Use Cases: OpenShift Local is well-suited for scenarios where developers require OpenShift functionality without the complexity of establishing and managing a multi-node cluster. This includes local application development, testing, learning OpenShift concepts, or demonstrating features.
  • Network Isolation: OpenShift Local typically employs a distinct virtual network to segregate the cluster from the host system, enabling multiple clusters to coexist on the same machine without conflicts.
  • Lifecycle Management: Developers can effortlessly create, initiate, halt, and remove OpenShift Local VMs as needed for their development endeavors.
  • Compatibility: OpenShift Local is frequently updated to align with the latest OpenShift releases, ensuring developers can work with the most current platform versions.

To kickstart your OpenShift Local journey, you typically download the distribution from the official Red Hat Developers website and follow the provided installation and setup instructions in the documentation. This streamlines the process of configuring and experimenting with OpenShift on your local machine, offering a valuable tool for containerized application development and testing.
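The typical bootstrap with the `crc` command-line tool looks like the sequence below. We print the commands rather than running them here, because `crc start` needs the pull secret downloaded from your Red Hat Developer account (the file path is an example):

```shell
# Typical OpenShift Local bootstrap sequence (printed for review).
# 'crc setup' prepares the host; 'crc start' creates the single-VM cluster.
for step in "crc setup" "crc start --pull-secret-file ~/pull-secret.txt"; do
  echo "$step"
done
```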

Why OpenShift SNO?

OpenShift SNO offers many of the same advantages as OpenShift Local, making it a compelling choice for various use cases. While the installation process for OpenShift SNO using an ISO may be a bit more intricate than OpenShift Local, the skills required are valuable and align with typical development and administration proficiencies. Here’s what you need to know about Single Node OpenShift (SNO):

  • Independence and Versatility: OpenShift SNO is an all-in-one OpenShift installation that operates autonomously on its dedicated machine. This machine can be a standalone bare-metal server or a virtual machine (VM) effortlessly running on KVM within your lab host environment.
  • Simplified Configuration: You can use Cockpit, a user-friendly web-based interface, to configure the VM and load an ISO file downloaded from Red Hat’s cluster console. This streamlined process eliminates complexities and simplifies the setup.
  • Skill Enhancement: The installation and management of OpenShift SNO require skills that are valuable in various IT roles. These skills include VM configuration, ISO handling, and platform administration — competencies that can boost your proficiency in virtualization and container orchestration.
  • Resource Flexibility: OpenShift SNO’s capability to run on both bare-metal servers and VMs gives you the flexibility to adapt it to your specific hardware and resource availability.
  • Red Hat Developer Benefits: Access to the necessary ISO files for OpenShift SNO installation is conveniently provided through your Red Hat Developer’s login, streamlining the process without complications.

In summary, OpenShift SNO offers a versatile, standalone OpenShift environment that aligns with the skill sets required for broader IT roles. Whether you’re deploying it on a bare-metal server or a virtual machine within your lab host, OpenShift SNO provides flexibility, enhanced skills, and simplified configuration, making it a compelling choice for your containerized application development and testing needs.

What if I don’t want to set up and run OpenShift locally?

While it’s not as versatile and primarily intended for testing, training, and POC purposes, an alternative option is available with the OpenShift Sandbox, provided by the Red Hat Developers website. This pre-configured environment is designed and maintained specifically for your experimentation needs. Although the access period is limited to 30 days, you can download/save your work and initiate a new OpenShift Sandbox when the current one expires. For instructions on getting your personal Red Hat OpenShift sandbox, please refer to this resource.

Summary

In Episode 2, we dove into the critical steps of creating a strong AI lab environment.

We helped you select hardware and software components and introduced you to the dynamic duo of Podman and OpenShift, enabling efficient containerized AI development. A well-structured lab environment is highlighted as a strategic asset for achieving your business goals through AI.

Hardware and Software Setup:

  • Discussed hardware and software components needed for your AI lab.
  • Explored the benefits of using Fedora as the operating system.
  • Discovered how to download and install Fedora.
  • Provided insights into home networking considerations for security and stability.

Essential Software Installation:

  • Installed Cockpit for remote server management.
  • Configured KVM for virtual machine management.
  • Set up Podman for containerization.
  • Installed Python and pip for data science.
  • Installed PyCharm Community Edition for Python development.
  • Installed Jupyter Notebooks (or Jupyter Lab) for interactive data analysis.
  • Explored Wireguard for secure lab VPN access.
  • Explored OpenShift options and installations of OpenShift Local, OpenShift SNO, and the Red Hat-managed OpenShift Sandbox.

Next in the Series — Generative AI on OpenShift Series — Episode 3: Building Generative AI Containers with Podman

Episode 3 Topics:

  • Applications of Generative AI Containers
  • Optimizing Container Size
  • Integration with Container Orchestration Platforms
  • Real-world Use Cases and Applications

Helpful links:

Red Hat Developers Website — Excellent source for developers of all stripes. Great articles and focus on creating something useful in a short amount of time.

Podman — Everything you need to run Podman on your local development system, be it Linux, Windows, or Mac OS.

OpenShift, OpenShift AI/ML, and OpenShift Data Science — Power tools for container orchestration and building business tools that provide a competitive advantage.

Epic discussion of GenAI on OpenShift 4, by David Kypuros, Bobby Johns, and Jason Nagin.
