DeeperAndCheaper — article index

- Visual Language Model (VLM) Optimization — Activation-aware Weight Quantization (AWQ) (Sep 9)
- [Yolov8/Jetson/Deepstream] Benchmark test — Orin Nano 4GB, 8GB, NX, TX2 (Aug 27)
- [Quantization] YoloV8 QAT x2 Speed up on your Jetson Orin Nano #2 — How to achieve the best QAT… (Aug 27)
- [Quantization] Achieve Accuracy Drop to Near Zero — YoloV8 QAT x2 Speed up on your Jetson Orin… (Aug 27)
- [YoloV9][Model Optimization][Knowledge Distillation] #2 — How to implement Feature based KD? (Aug 27)
- [YoloV9][Model Optimization][Knowledge Distillation] #1 — Why Knowledge Distillation for Object… (May 9)
- [Quantization] Go Faster with ReLU! — YoloV8 QAT x2 Speed up on your Jetson Orin Nano #3 (Oct 13, 2023)
- [Quantization] YoloV8 QAT x2 Speed up on your Jetson Orin Nano #1 — Why Quantization? (Sep 28, 2023)