Deci AI: How to Reduce the Performance Gap Between GPU and CPU for Deep Learning Models. Can CPU inference be optimized to achieve GPU-like performance on deep learning models? (May 25, 2022)
Deci AI: MLPerf: How Deci and Intel Achieved up to 16.8x Throughput Increase and +1.74% Accuracy Improvement. This marks another significant milestone in the ongoing Deci-Intel collaboration towards enabling deep learning inference on CPUs. (May 8, 2022)
Deci AI: MLPerf: Intel and Deci Boost NLP Models — Reaching Faster and More Accurate Inference Performance. Compared to the 8-bit BERT Large model, Deci's own DeciBERT models delivered 5x higher throughput and +1.03% higher accuracy. (May 8, 2022)
Deci AI: 10-Minute Tutorial: How to Convert a PyTorch Model to TensorRT™. It's simple and you don't need any prior knowledge. (Apr 12, 2022)
Deci AI: 4 Parameters to Consider When Choosing Hardware for Deep Learning Inference. Finding the right hardware for your model inference can be a daunting task; here's how to simplify it. (Mar 25, 2022)
Deci AI: Inference in Production: 5 Factors that Impact It & the Hardware Usage Metrics to Track. Go through the different components of the inference pipeline and ways to optimize each. (Feb 28, 2022)
Deci AI: A Guide to Common Object Detection Algorithms and Implementations. A look into the different techniques available for object detection and how the field has matured in recent years. (Aug 21, 2021)
Deci AI: Tutorial: Converting a PyTorch Model to ONNX Format. Also, find out how to reduce your model's latency and increase its throughput while maintaining its original accuracy. (Aug 8, 2021)
Deci AI: How to Boost a YOLOv5 Model's Throughput and Latency by 2X in 15 Minutes. Using YOLOv5 as an example, learn how you can optimize machine learning models with the Deci platform. (Jul 17, 2021)