Key Insights From the Team Lead Behind OpenVINO™ Notebooks: Part 1

Published in OpenVINO™ toolkit · 7 min read · Feb 5, 2024

Author: Paula Ramos, Intel AI Evangelist

OpenVINO™ undoubtedly has taken the AI community by storm with its ability to streamline deployment of deep-learning models on a wide variety of devices. As such, it plays a key role in bringing the power of AI to real use cases and industries. But as is often the case with such innovative tools, users are left with a number of questions and curiosities about its complexities and potential. This is where OpenVINO notebooks come in.

OpenVINO notebooks serve as a bridge between OpenVINO’s profound capabilities and the daily needs of AI developers, providing an interactive platform to understand, experiment, and harness the toolkit’s potential. Whether you’re a novice looking to dip your toes into the OpenVINO ecosystem or a seasoned professional seeking to streamline your deployment process, there’s a notebook for you.

But don’t just take our word for it. To learn about how developers can kick-start or advance their AI journeys with OpenVINO notebooks, our AI Evangelist Paula Ramos had the opportunity to talk with several OpenVINO notebook team members to discuss everything from the inspiration behind the notebooks to their evolving roadmap and the challenges faced during development.

In this first conversation, Paula spoke with Andrei Kochin, Manager of the OpenVINO Notebooks team. Here are some snippets from their insightful chat:

Paula Ramos: Hi, Andrei. What can you tell us about yourself and how you got started with AI?

Andrei Kochin: My area of responsibility covers the model conversion stage of OpenVINO and OpenVINO notebooks, which is where the journey begins for the developers who are interested in inference optimization on Intel® hardware.

Andrei Kochin: My personal AI path began on a previous work assignment that was outside my primary area of responsibility: several pilot projects based on classic machine learning solutions. One of the successful deployments was license plate recognition, which was used internally within our company for the security system in our parking lot. Gates opened automatically for employees based on the recognized license plate.

Paula Ramos: I can imagine machine learning resources at the time were very time-consuming in comparison to the resources we have now. What can you tell us about the new AI trends, algorithms, and even technologies you see today?

Andrei Kochin: Deployment is becoming simpler, and AI is becoming more and more powerful. Previously, we had to do most things ourselves: someone had to design a model that we could use and perhaps fine-tune for specific use cases.

The power of existing AI models really encourages you to try more and more projects with the help of AI. For example, for license plate recognition we have models available in our Open Model Zoo, and tools like YOLOv8 from Ultralytics combined with the OpenVINO notebooks make it quite simple to get started.

Previously, we would spend several months or more building this kind of machine learning from scratch. Now you can just try it with one of our notebooks, and it runs well enough on your laptop, which is of course far more powerful than it was a few years ago.
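For readers who want a concrete starting point, here is a minimal sketch of the Ultralytics/OpenVINO workflow Andrei mentions. The model name and test image are illustrative, and the official notebooks cover the full workflow in detail.

```python
# Minimal sketch: exporting a YOLOv8 model to OpenVINO and running inference.
# Assumes `pip install ultralytics openvino`; the model name and test image
# below are illustrative examples, not code from the interview.
from ultralytics import YOLO

# Load a small pretrained YOLOv8 detection model
model = YOLO("yolov8n.pt")

# Export to OpenVINO IR format (creates a "yolov8n_openvino_model" directory)
model.export(format="openvino")

# Load the exported OpenVINO model and run inference on a sample image
ov_model = YOLO("yolov8n_openvino_model/")
results = ov_model("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes)
```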

Paula Ramos: I’m so excited to see that we have a place where developers can replicate work in an easy way. What was the motivation for the OpenVINO™ team to create OpenVINO notebooks?

Andrei Kochin: It’s not really a secret. Once you attract more users and interest in a product, it’s really valuable to provide new users with documentation and tutorials that help them deploy your application. I can’t imagine better documentation than interactive tutorials, which in fact are almost ready-to-use demo applications. You can use them in your deployment scenarios by customizing or changing the model, and in some use cases, fine-tuning them for your unique scenarios.

We automatically run the notebooks on a regular basis to make sure that our tutorials are up to date. We continue to work to simplify the notebooks themselves, making each one more standalone. Progress in this direction means having more and more notebooks suitable for running in Google Colab and other infrastructures, allowing you to download a single notebook and try it out.

While some notebooks in the repository require the full repository to be downloaded to your system, we are working toward providing just the notebook code itself. That is why it's called a notebook: a single notebook should be enough to get started.

Paula Ramos: You mentioned Google Colab and standalone demos. How close are we to not needing to install a specific environment and being able to utilize enterprise cloud systems to run OpenVINO notebooks?

Andrei Kochin: There are still some dependencies, at the very least the dependency on installing OpenVINO. But we are thinking about simplifying the list of dependencies so that at some point you would only need OpenVINO installed and a model to try. However, everything we use to simplify and enhance the user experience, such as Gradio, which provides simple buttons and text fields for entering prompts for model generation, images, or sounds, also requires external dependencies.

If you are just creating a simple application, it is possible to limit the dependencies to just OpenVINO and the operational environment. But, for demonstration use cases, some external library components will still be necessary.
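As an illustration of the Gradio-based demos Andrei describes, here is a minimal sketch of wrapping an OpenVINO model in a simple web UI. The model path, preprocessing, and label handling are hypothetical placeholders rather than code taken from the notebooks.

```python
# Minimal sketch: a Gradio front end around an OpenVINO model.
# Assumes `pip install openvino gradio numpy`; "model.xml" and the
# preprocessing below are hypothetical placeholders.
import gradio as gr
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")  # hypothetical IR file

def predict(image: np.ndarray) -> dict:
    # Hypothetical preprocessing; a real model defines its own input layout
    input_tensor = np.expand_dims(image.astype(np.float32), 0)
    scores = compiled(input_tensor)[compiled.output(0)].flatten()
    # Return the top scores keyed by class index for Gradio's Label component
    return {str(i): float(scores[i]) for i in np.argsort(scores)[-5:]}

demo = gr.Interface(fn=predict, inputs=gr.Image(), outputs=gr.Label())
demo.launch()
```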

Paula Ramos: Earlier you mentioned Ultralytics with YOLOv8, and I also see Hugging Face with Optimum Inference with OpenVINO. How does working with partners like this improve community engagement and developer experience?

Andrei Kochin: For Hugging Face, we have a joint project named 'optimum-intel,' which provides a streamlined pipeline for tapping into the power of OpenVINO. Optimum Intel helps you get models optimized and, in some cases, quantized to INT8 precision, so you can deploy them on a machine with limited RAM and the model size itself becomes more manageable. Supporting these new models effectively benefits users who already know them and helps new developers accelerate inference with OpenVINO on Intel hardware through these notebooks.
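As a rough illustration of the Optimum Intel pipeline described above, the following sketch loads a Hugging Face model as an OpenVINO model. The model ID is an illustrative example, not one named in the interview, and the package is assumed to be installed via `pip install optimum[openvino]`.

```python
# Minimal sketch of the Optimum Intel ("optimum-intel") workflow.
# Assumes `pip install optimum[openvino]`; the model ID is illustrative.
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the original checkpoint to OpenVINO IR on the fly
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("OpenVINO notebooks make it easy to get started."))
```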

We also contribute original notebooks. For example, when Ultralytics published YOLOv8, we added a notebook showing how to use YOLOv8 with OpenVINO. It is still one of the most popular notebooks in our repository today.

Paula Ramos: I can see that you and your team are moving fast with YOLOv8 and other AI trends. How do you encourage your team to stay on top of the latest AI trends and manage resources?

Andrei Kochin: I fully rely on my engineers to notify me about timely events and to point out if we have something trending. Also, I get inputs from the architecture team. We are constantly monitoring AI forums for discussions on AI topics. If something becomes hot, we address it immediately by trying it with OpenVINO. This not only brings awareness about OpenVINO to the AI community, but it also helps developers accelerate AI solutions on Intel hardware. Our goal is to respond to timely and trending events effectively.

Paula Ramos: What types of problems or challenges does your team face on a day-to-day basis?

Andrei Kochin: One of the challenges we encountered previously was handling nightly builds and other resource-intensive notebooks that couldn't be run during the pre-commit phase. We had originally planned to run these on our internal machines, but now we can rely on GitHub-hosted machines as well and execute resource-consuming tasks, such as quantization, locally. This ensures the quality and health of the notebooks are maintained.

Other challenges we face on a daily basis involve external dependencies, for example, new releases of the Transformers library. With every new release, there is a possibility it may not be compatible with our notebooks. Our continuous integration system is set up to catch these issues so we can address them in a timely manner and ensure they do not compromise the user experience or any functionality.

Paula Ramos: Given the open-source nature of these projects and notebooks, how can developers contribute to repositories?

Andrei Kochin: We have a list of issues developers can look at, especially those labeled 'good first issue' in our notebooks repository. Developers who want to contribute can also pick models that are not currently covered by the notebooks and try them out. Additionally, we have contribution guidelines for developers to follow.

Other ways to contribute are through events like Google Summer of Code, where we have received some valuable external contributions.

Paula Ramos: Thanks for the great conversation, Andrei. Do you have any final thoughts you want to leave developers with?

Andrei Kochin: The message is quite simple: if you have a use case, are in need of AI optimization, or simply enjoy gaining AI experience in general, try OpenVINO today!

About Paula Ramos:

Paula is an AI Evangelist at Intel. She has been an AI enthusiast and has worked in the computer vision field since the early 2000s. Follow her on Medium for more insights.

Notices & Disclaimers

Intel technologies may require enabled hardware, software, or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

