SyncedReview
Maryland U & Google Introduce LilNetX: Simultaneously Optimizing DNN Size, Cost, Structured Sparsity & Accuracy

The current conventional wisdom on deep neural networks (DNNs) holds that, in most cases, simply scaling up a model's parameter count and adopting more computationally intensive architectures yields large performance improvements. Although this scaling strategy has proven successful in research labs…
