Intel® Innovation 2023: AI and OpenVINO™ Take Center Stage

OpenVINO™ toolkit
Published in OpenVINO-toolkit · 5 min read · Jan 2, 2024

When the Intel® Innovation event series first kicked off back in 2021, the Intel® OpenVINO™ toolkit was largely unknown to most AI developers. Fast-forward to today, and OpenVINO was everywhere at Intel Innovation 2023. It was in Pat Gelsinger’s keynote — with plenty of examples of how OpenVINO solves real-world problems. And it was all over the show floor as Intel ecosystem partners demonstrated how they use OpenVINO to do things like track customer behavior, revolutionize in-store security, and transform worker safety.

It’s so amazing to see all the progress OpenVINO has made in the past five years and how the Intel team continues to make AI more accessible across workloads that impact industries across the globe.

Let’s look at some of the event announcements and technologies that highlight the importance of OpenVINO and the next generation of AI.

Intel Enters the AI PC Era

There is a generational shift in AI, and in his keynote, Pat revealed the need for better, faster computing and new thought processes on how the next generation of PCs should be built. Intel aims to lead the way in this arena, and Pat announced the AI PC — a new PC experience powered by the latest Intel® Core™ Ultra processors (code-named Meteor Lake), also announced at the event.

“AI will fundamentally transform, reshape, and restructure the PC experience — unleashing personal productivity and creativity through the power of the cloud and PC working together,” Gelsinger said. “We are ushering in a new age of the AI PC.”

The Intel® Core™ Ultra processors mark the first time Intel has integrated a neural processing unit (NPU) for power-efficient AI acceleration, alongside a more powerful Intel® Arc™ GPU for the edge and a general-purpose CPU. Together, this adds up to a package well suited to both high-throughput AI workloads and low-latency inference, with software and hardware coming together for the job at hand.

For AI developers, this means new opportunities for AI experiences, local AI inferencing, and power-efficient AI applications, all on their laptops or desktops. And with OpenVINO, they can optimize and deploy AI models across a wide range of hardware platforms.

Developers Get AI Development Boosts

There were also several announcements at the event aimed at making it easier to build, test, and deploy AI applications across Intel CPUs, GPUs, and AI accelerators. With the general availability of the Intel® Developer Cloud, developers can get access to a free version that allows them to explore and evaluate the latest Intel AI technologies and develop their AI skills.

Also announced was a hybrid AI SDK that provides a toolbox for model and application development. This will enable AI developers to take advantage of the best hardware, depending on the tasks — providing better performance and efficiency. Key features include a model optimizer, a runtime for deploying AI models, and a low-code environment for easily developing AI apps. The SDK is expected to be released in early 2024.

And the latest release, OpenVINO 2023.1, was unveiled at the event (followed by 2023.2 in November) with powerful new generative AI capabilities. As interest in generative AI grows, the release lets developers run new models locally on their desktops and laptops so they can experiment with new features before integrating them into their applications. Since generative AI applications can be particularly memory demanding, OpenVINO 2023.2 also brings new stack optimizations that improve memory use and execution time.

A top feature of the release is quantization and weight compression for large language models. With INT4 weight compression, developers can run large language models such as Llama-2-7B on laptops with less than 16GB of RAM. OpenVINO now also supports direct PyTorch model conversion for better compatibility and model optimization, plus unified tooling so developers no longer need to install separate packages for the runtime and development tools. Also in this release is the new OpenVINO Model Conversion tool, which replaces the previous Model Optimizer for offline model conversion tasks.
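A rough back-of-the-envelope calculation shows why INT4 weight compression is what makes a 7B-parameter model practical on a 16GB laptop (the parameter count and byte widths below are illustrative approximations, not exact figures from the release):

```python
def weight_footprint_gib(num_params: int, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold the model weights, in GiB."""
    total_bytes = num_params * bits_per_weight / 8
    return total_bytes / (1024 ** 3)

params = 7_000_000_000  # roughly the size of Llama-2-7B

fp16 = weight_footprint_gib(params, 16)  # ~13.0 GiB: barely fits in 16 GB
int4 = weight_footprint_gib(params, 4)   # ~3.3 GiB: leaves room for activations
print(f"FP16: {fp16:.1f} GiB, INT4: {int4:.1f} GiB")
```

At FP16, the weights alone nearly exhaust a 16GB machine before accounting for the OS, KV cache, and activations; at INT4 they shrink roughly 4x, which is the headroom that makes local experimentation feasible.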

The Intel AI Tech Evangelist team was on the show floor to demonstrate these new capabilities and showcase exactly how OpenVINO is being used out in the real world.

For example, Paula Ramos showed how developers can use OpenVINO in an industrial use case, simulating a production line and performing defect detection. To build an accurate model quickly, the team used the Intel® Geti™ SDK, which helps developers leverage OpenVINO for rapid AI model development and deployment.

“It’s not just that we have this in general availability. It’s also that Geti and OpenVINO are working together so we can create these models faster with the platform and we can deploy the models on OpenVINO,” Paula explained.

Other team members like Adrian Boguszewski and Anisha Udayakumar showcased how, with the use of OpenVINO and YOLOv8, developers can create applications that do things like people-counting, which is useful in intelligent queue management and other retail applications.
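The people-counting pattern they demonstrated boils down to a simple post-processing step: filter the detector's output for the "person" class above a confidence threshold and count what remains. Here is a minimal sketch of that step, assuming a YOLOv8 model has already been run (for example, through OpenVINO) and its output flattened into (class_id, confidence) pairs; class id 0 for "person" follows the COCO label convention YOLOv8 uses:

```python
PERSON_CLASS_ID = 0  # "person" in the COCO label set used by YOLOv8

def count_people(detections, conf_threshold=0.5):
    """Count person detections at or above a confidence threshold.

    `detections` is an iterable of (class_id, confidence) pairs, e.g. the
    post-processed output of one video frame from a detection model.
    """
    return sum(
        1
        for class_id, conf in detections
        if class_id == PERSON_CLASS_ID and conf >= conf_threshold
    )

# Example frame: three confident person detections, one low-confidence
# person, and one chair (class 56 in COCO).
frame = [(0, 0.91), (0, 0.88), (0, 0.74), (0, 0.31), (56, 0.80)]
print(count_people(frame))  # 3
```

In a queue-management application, this count per frame (or per region of interest) is what feeds the business logic, such as opening another checkout lane when the number exceeds a threshold.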

“OpenVINO means faster inference and faster inference means less power consumption, less carbon footprint, and a better environment,” said Adrian.

Raymond Lo and Ria Cheruvu demonstrated how OpenVINO’s new capabilities improve memory usage and optimization for generative AI models such as Stable Diffusion and Llama 2, which can now run easily on laptop CPUs and GPUs. “We’re seeing a lot of enthusiasm and passion to get started on projects — to be able to write once and deploy anywhere,” Ria said.

AI Development Is Just Getting Started

It’s an exciting time to be an AI developer, but it’s only beginning to ramp up and the field is changing every day. See how developers can jump-start their AI career with Intel’s Edge AI Reference Kits that walk through use cases like defect detection, smart meter reading, and intelligent queue management. The OpenVINO team is constantly adding new use cases, step-by-step tutorials, and technical how-to’s that help developers build and improve their custom AI inference apps across industries.

Developers can also play around with the latest OpenVINO release and provide feedback on what additional features they’d like to see, or any bug fixes they may come across!

Notices & Disclaimers

Intel technologies may require enabled hardware, software, or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.


Deploy high-performance deep learning productively from edge to cloud with the OpenVINO™ toolkit.