Sik-Ho TsangBrief Review — More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using SparsitySLaK, with up to 61x61 convolutionsOct 1Oct 1
Satishkumar MoparthiWhy L1 norm creates Sparsity compared with L2 normDistance is calculated between points and Norm calculated between vectorsJan 20, 20212Jan 20, 20212
Suvasis MukherjeeDo we need GPU?Neural networks can significantly reduce the number of parameters through pruning, which helps preserve accuracy despite large reductions…Sep 13Sep 13
Sik-Ho TsangBrief Review — More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using SparsitySLaK, with up to 61x61 convolutionsOct 1
Satishkumar MoparthiWhy L1 norm creates Sparsity compared with L2 normDistance is calculated between points and Norm calculated between vectorsJan 20, 20212
Suvasis MukherjeeDo we need GPU?Neural networks can significantly reduce the number of parameters through pruning, which helps preserve accuracy despite large reductions…Sep 13
Yanis ChaigneauPruning in neural networksImproving the computation performances using sparse matrix multiplicationsMay 6, 20221
InQuansightbyQuansightA Year in Review: Quansight’s Contributions to PyTorch in 2023 (& Early 2024)This blog was originally published on the Quansight Blog by Andrew James and Mario Lezcano.Sep 12
InIntel Analytics SoftwarebyIntel(R) Neural CompressorStructured Pruning for Transformer-Based ModelsA Few Lines of Code, A Satisfying Sparse Transformer ModelJan 9, 2023