Sik-Ho TsangBrief Review — More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using SparsitySLaK, with up to 61x61 convolutionsOct 1Oct 1
İlyurek KılıçEmbracing Sparsity in Machine Learning: Enhancing Efficiency and PerformanceIn machine learning, the concept of sparsity plays a pivotal role in optimizing models for both performance and resource utilization. It…Oct 3, 2023Oct 3, 2023
Suvasis MukherjeeDo we need GPU?Neural networks can significantly reduce the number of parameters through pruning, which helps preserve accuracy despite large reductions…Sep 13Sep 13
Sik-Ho TsangBrief Review — More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using SparsitySLaK, with up to 61x61 convolutionsOct 1
İlyurek KılıçEmbracing Sparsity in Machine Learning: Enhancing Efficiency and PerformanceIn machine learning, the concept of sparsity plays a pivotal role in optimizing models for both performance and resource utilization. It…Oct 3, 2023
Suvasis MukherjeeDo we need GPU?Neural networks can significantly reduce the number of parameters through pruning, which helps preserve accuracy despite large reductions…Sep 13
Satishkumar MoparthiWhy L1 norm creates Sparsity compared with L2 normDistance is calculated between points and Norm calculated between vectorsJan 20, 20212
InQuansightbyQuansightA Year in Review: Quansight’s Contributions to PyTorch in 2023 (& Early 2024)This blog was originally published on the Quansight Blog by Andrew James and Mario Lezcano.Sep 12
Yanis ChaigneauPruning in neural networksImproving the computation performances using sparse matrix multiplicationsMay 6, 20221