How to run OpenVINO™ on a Linux AI PC

Benefit from CPU, GPU, and NPU


Authors: Adrian Boguszewski and Zhuo Wu — AI Software Evangelists, Intel

What is an AI PC, and why does it have a special name?

AI PC is currently a hot topic. Unlike a regular PC, an AI PC comes with advanced hardware such as powerful GPUs that make it very fast at running AI model inference, especially Gen AI inference with its heavy computations, and, more importantly, specialized AI chips called Neural Processing Units (NPUs). The NPU is optimized for low power consumption while maintaining high computational performance, which makes it excel at specific AI workloads such as background blurring, noise removal, and other tasks that run continuous inference without quickly draining the battery. So, if you need a computer that can handle heavy AI work or manage complex, multi-threaded AI applications without breaking a sweat, that’s what an AI PC with an Intel® Core™ Ultra processor has been built for. Please refer to this AI PC blog for more information.

Verify device availability in OpenVINO™

OpenVINO™ is an open-source AI toolkit for AI model optimization, inference acceleration, and easy deployment across multiple hardware platforms. With OpenVINO, AI model inference can easily be run on an AI PC’s CPU, GPU, and NPU. Before we do that, let’s check which devices OpenVINO can see on a Linux AI PC. For the purposes of this article, we will use Ubuntu 24.04, as it is the latest Ubuntu LTS version.

Let’s start with a fresh Ubuntu installation. Then, create a virtual environment and install OpenVINO with the following commands:

python3 -m venv venv
source venv/bin/activate
pip install openvino
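
If you want to double-check that the installation succeeded, you can print the runtime version (a quick sanity sketch; it only assumes the virtual environment created above is still active):

import openvino as ov
# Print the installed OpenVINO runtime version
print(ov.get_version())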

Run this code in your Python interpreter to verify which devices are available to you.

import openvino as ov
core = ov.Core()
print(core.available_devices)

At this point, you should have access to the CPU only. That’s not good! After all, your laptop also has a GPU and an NPU! Let’s fix that, then.
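
Before we do, a quick aside: if you want to see exactly which physical device hides behind each identifier, you can query its full name (a minimal sketch using the standard FULL_DEVICE_NAME property):

import openvino as ov
core = ov.Core()
for device in core.available_devices:
    # FULL_DEVICE_NAME returns a human-readable name, e.g. the exact CPU, GPU, or NPU model
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))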

How to enable all devices on Linux

We are going to start with the GPU. The first step is to install all the drivers and necessary packages. To do so, you should add the Intel package repository signed with its GPG key. There are other approaches as well (you can check them here), but this one is the simplest for installation and future updates.

sudo apt update
sudo apt install -y gpg-agent wget

wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | sudo gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo "deb [arch=amd64,i386 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | sudo tee /etc/apt/sources.list.d/intel-gpu-jammy.list

sudo apt update

OK, it’s time to install everything we will need for GPU inference.

sudo apt install -y \
intel-opencl-icd intel-level-zero-gpu level-zero \
intel-media-va-driver-non-free libmfx1 libmfxgen1 libvpl2 \
libegl-mesa0 libegl1-mesa libegl1-mesa-dev libgbm1 libgl1-mesa-dev libgl1-mesa-dri \
libglapi-mesa libgles2-mesa-dev libglx-mesa0 libigdgmm12 libxatracker2 mesa-va-drivers \
mesa-vdpau-drivers mesa-vulkan-drivers va-driver-all vainfo hwinfo clinfo

Reboot after this step.

sudo reboot

Let’s run the Python code below again.

import openvino as ov
core = ov.Core()
print(core.available_devices)

What you should see now are the CPU and GPU, so the only missing device is the NPU. oneTBB is a dependency of intel-driver-compiler-npu, so it must be installed first.

sudo apt install libtbb12

After that, we need to download and install NPU drivers and all related packages.

wget https://github.com/intel/linux-npu-driver/releases/download/v1.5.0/intel-driver-compiler-npu_1.5.0.20240619-9582784383_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.5.0/intel-fw-npu_1.5.0.20240619-9582784383_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.5.0/intel-level-zero-npu_1.5.0.20240619-9582784383_ubuntu22.04_amd64.deb
wget https://github.com/oneapi-src/level-zero/releases/download/v1.17.2/level-zero_1.17.2+u22.04_amd64.deb

sudo dpkg -i *.deb

We also need to assign the right group and permissions to our accelerator (the NPU) and add our user to the render group (please replace <your-user-name> with your username).

sudo bash -c "echo 'SUBSYSTEM==\"accel\", KERNEL==\"accel*\", GROUP=\"render\", MODE=\"0660\"' > /etc/udev/rules.d/10-intel-vpu.rules"
sudo usermod -a -G render <your-user-name>

One last reboot to apply all the changes and…

sudo reboot

let’s check the available devices once again.

import openvino as ov
core = ov.Core()
print(core.available_devices)

And the CPU, GPU, and NPU should all be visible now! Have fun!

In case of any issues with the NPU visibility, please visit this page.
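
Now that all three devices are visible, you can target any of them simply by name when compiling a model. Below is a minimal sketch, not part of the setup above: model.xml is a placeholder for your own OpenVINO IR (or ONNX) model, and it assumes the model has a single input with a static shape.

import numpy as np
import openvino as ov
core = ov.Core()
# "model.xml" is a placeholder: point it at your own IR or ONNX model
model = core.read_model("model.xml")
# Compile the model for a given device: "CPU", "GPU", or "NPU"
compiled = core.compile_model(model, "NPU")
# Dummy input data matching the model's first input shape
input_data = np.random.rand(*compiled.input(0).shape).astype(np.float32)
# Run inference and fetch the first output
result = compiled(input_data)[compiled.output(0)]
print(result.shape)

You can also pass "AUTO" as the device name and let OpenVINO pick the most suitable device for you.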

Notices & Disclaimers

Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.
