- Raymond Lo, PhD in OpenVINO-toolkit: "How to Build Faster GenAI Apps with Fewer Lines of Code using OpenVINO™ GenAI API" (Authors: Raymond Lo, Dmitriy Pastushenkov, Zhuo Wu), Jul 9
- Raymond Lo, PhD in OpenVINO-toolkit: "How to run and develop your AI app on Intel NPU (Intel AI Boost)", Jan 5
- Adrian Boguszewski in OpenVINO-toolkit: "How to run OpenVINO™ on a Linux AI PC: Benefit from CPU, GPU, and NPU", Jul 8
- Luís Condados in LatinXinAI: "Unleashing Depth Anything v2: SOTA Monocular Depth Estimation on Intel CPU with OpenVINO and NNCF" (In this article, we'll dive into the latest advancements in monocular depth estimation, focusing on the state-of-the-art Depth Anything V2…), Jun 16
- OpenVINO™ toolkit in OpenVINO-toolkit: "Why and How to Use OpenVINO™ Toolkit to Deploy Faster, Smaller LLMs" (With slim deployment packages, powerful AI performance, and official Intel support, OpenVINO is ideal for running your LLM applications.), Jul 2
- Raymond Lo, PhD in OpenVINO-toolkit: "How to run Stable Diffusion on Intel GPUs with OpenVINO" (Note: It will also work on CPUs too! :)), Feb 15, 2023
- OpenVINO™ toolkit in OpenVINO-toolkit: "Reduce LLM Footprint with OpenVINO™ Toolkit Weight Compression" (Create lean LLMs using weight compression with the OpenVINO™ toolkit. Reduce LLM size, memory footprint, and GPU requirements.), Jul 2