Welcome to Limitless AI

Bryan House · Published in Deep Sparse · Jun 14, 2019
Photo Credit: Nathaniel Foong (via Unsplash)

Throughout history, there have been two ways of solving problems: work within limits, or find ways to overcome them.

Today, we seem stuck in the “work within limits” phase of AI.

While plenty of exciting work is being done in the field of deep learning, our ability to address real-world problems is still constrained by the tools in our current toolbox. GPUs, TPUs, and specialized, domain-specific architectures for hardware acceleration have promised a boost for deep learning applications. But as Bill Dally, Nvidia’s Chief Scientist and head of research, was recently quoted as saying: deep learning remains “completely gated by hardware.” We all want big models, big data, and big precision. But can our current dedicated AI hardware actually meet these needs?

Today: The Incredible Shrinking AI Model

If you’re on a data science team at a company or a researcher at a university, you may have shared access to a GPU or TPU cluster, and you may have experienced the pain of waiting in line to use these hardware accelerators. What’s more, if you’re training a convolutional neural network (CNN), you have probably had to significantly shrink your image size or the overall size of your dataset to fit within the memory limitations of a GPU.

While such limitations are generally accepted, they can require sacrifices in accuracy. Consider this: when working with large medical image datasets, data scientists often need to reduce their image resolution, or restrict their batch or model sizes, in order to fit their model and datasets into a GPU for training. Given the myriad challenges inherent in working with medical image data, including the small incidence of pathological findings, the required down-/up-sampling or restricted input batch size can result in a critical loss of accuracy. This tradeoff could mean the difference between discovering a cancerous cell and missing it altogether.
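To make that tradeoff concrete, here is a minimal PyTorch-style sketch of the kind of concessions teams make to fit a training job into GPU memory. It is illustrative only: the directory name, target resolution, and batch size are assumptions, not anything specific to medical imaging pipelines.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical example: original scans might be far larger (e.g. 1024x1024),
# but we downsample to 224x224 so activations fit in GPU memory. That
# resizing can discard diagnostically relevant detail.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # down-sampling forced by memory limits
    transforms.ToTensor(),
])

# "medical_scans/" is a placeholder directory of labeled images.
train_data = datasets.ImageFolder("medical_scans/", transform=preprocess)

# A small batch size is another common concession to GPU memory,
# made for the hardware rather than for the model.
train_loader = DataLoader(train_data, batch_size=8, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
```

Every choice in that snippet is a workaround for hardware limits rather than a modeling decision, which is exactly the constraint this publication is concerned with.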

But the machine learning market is still in its earliest days. Working within narrow hardware limits simply isn’t sustainable as data scientists across the globe try to solve increasingly large and complex problems with AI. Whether we’re training a deep learning model for medical image analysis or making driving decisions for an autonomous vehicle, there’s a lot of fresh ground to cover in the next few years if we want to make meaningful progress.

Made to Fit vs. Made for Discovery

What if we could eliminate the constraints of our current machine learning tools and focus on the big questions yet to be asked? This Medium publication is all about challenging the limits of AI — and the limits of what we consider possible.

Our editorial mission is to tell the stories of researchers and data scientists who are pushing the boundaries of AI. We’ll examine the current limitations of machine learning and novel approaches to achieving increased speed and accuracy in neural networks. And we’ll share our own experiences (and, where it’s relevant, code you can use) as researchers and practitioners in the fields of mathematics and deep learning.

It’s time to break out of the “work within limits” phase of AI and challenge why these limits exist in the first place. Are you ready for models that are made for discovery, rather than made to fit in GPU memory? Are you ready to ask yourself the question: What kind of problems do I want to solve?

Follow along with our journey toward limitless AI, and find us @Neuralmagic on Twitter and Instagram.

…

Neural Magic is powering bigger inputs, bigger models, and better predictions. The company’s software lets machine learning teams run deep learning models at GPU speeds or better on commodity CPU hardware, at a fraction of the cost. To learn more, visit www.neuralmagic.com.

Bryan House

Startups, baseball, my family and Ozzie the rescue dog. Chief Commercial Officer, Neural Magic