PYTORCH ECOSYSTEM

Introduction to torchaudio and Allegro Trains

Audio classification with torchaudio and Allegro Trains

Audio signals are all around us. As such, there is increasing interest in audio classification for various scenarios, from fire alarm detection for hearing-impaired people, through engine sound analysis for maintenance purposes, to baby monitoring. Though audio signals are temporal in nature, in many cases it is possible to leverage recent advancements in the field of image classification and use popular, high-performing convolutional neural networks for audio classification. In this blog post we demonstrate such an example, using the popular method of converting the audio signal into the frequency domain.
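As a rough illustration of that frequency-domain conversion, here is a minimal sketch using torchaudio; the file path and spectrogram parameters are placeholders rather than the exact settings used in the post.

import torchaudio

# Load a waveform from disk (the path is a placeholder).
waveform, sample_rate = torchaudio.load("example.wav")

# Convert the 1D audio signal into a mel-spectrogram, a 2D
# time-frequency "image" that a convolutional classifier can consume.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate,
    n_fft=1024,
    hop_length=512,
    n_mels=64,
)
mel_spectrogram = mel_transform(waveform)  # shape: (channels, n_mels, time)

# Log-scaling the magnitudes usually helps the downstream CNN.
log_mel = torchaudio.transforms.AmplitudeToDB()(mel_spectrogram)
print(log_mel.shape)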

This blog post is a…


CASE STUDIES

Training and Deployment Pipeline with PyTorch

Accurate object detection models analyze a security scanner’s output. Source: Neural Guard

Neural Guard produces automated, AI-powered threat detection solutions for the security screening market. With the expansion of global trends like urbanization, aviation, mass transportation, and global trade, the associated security and commercial challenges have become ever more pressing. In this blog, we will talk about how researchers and developers at Neural Guard build technology that detects specific, high-risk items in CT and X-ray imagery, leveraging cutting-edge artificial intelligence algorithms to analyze a security scanner’s output.

The Challenge:

The team at Neural Guard faced the challenge of building, optimizing and maintaining deep learning (DL) models that recognize multiple unique objects…


DATA SCIENCE IN THE REAL WORLD

A Hero’s Journey to Deep Learning CodeBase Series — Part IIB

Written by Dan Malowany and Gal Hyams
Allegro AI Team

As state-of-the-art models keep changing, one needs a modular machine learning codebase that can support and sustain R&D efforts in machine and deep learning for years. In our first blog of this series, we demonstrated how to write readable and maintainable code that trains a Torchvision MaskRCNN model, harnessing Ignite’s framework. In our second post (part IIA), we detailed the fundamental differences between single-shot and two-shot detectors and why the single-shot approach sits in the sweet spot of the speed/accuracy trade-off. So it’s only natural that in…
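For context, here is a minimal sketch of the kind of setup that first post describes: wrapping a Torchvision Mask R-CNN training step in an Ignite engine. The optimizer settings and the data loader are illustrative assumptions, not the post’s exact code.

import torch
import torchvision
from ignite.engine import Engine

# Pretrained Mask R-CNN from Torchvision; in training mode it returns
# a dictionary of losses given images and annotation targets.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_step(engine, batch):
    model.train()
    images, targets = batch
    loss_dict = model(images, targets)
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)
# trainer.run(train_loader, max_epochs=10)  # train_loader is assumed to exist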


DATA SCIENCE IN THE REAL WORLD

Machine Learning at your Den Table: Who’d Have Thought?

Written by Dan Malowany
Allegro AI Team

Teamwork and collaboration for data science teams

The COVID-19 crisis caught most of us by surprise. Like virtually every business, in every sector, our company was caught off guard and had to suddenly move our entire data science team to their den tables, guest rooms and hastily assembled home offices. In recent years, as Head of Deep Learning Research, I did have some experience with the challenge of working with a remote team, as I managed both local and remote data science researchers. I’ve tried out and adopted some core tools and best practices to make working remotely and managing…


DATA SCIENCE IN THE REAL WORLD

A Hero’s Journey to Deep Learning CodeBase Series — Part IIA

Written by Gal Hyams and Dan Malowany
Allegro AI Team

Object detection with deep neural networks is a mature research field. That said, making the correct trade-off between speed and accuracy when building a given model for a target use case is an ongoing decision that teams need to address with every new implementation. Although many object detection models have been researched over the years, the single-shot approach is considered to be in the sweet spot of the speed vs. accuracy trade-off. In this post (part IIA), we explain the key differences between the single-shot (SSD) and two-shot approaches. Since…


Data Science in the Real World

A Hero’s Journey to Deep Learning Codebase Series — Part I

Written by Dan Malowany and Gal Hyams
Allegro AI Team

We all aim to write a maintainable and modular codebase that supports the R&D process from research to production. This is key to an efficient and successful deep learning project, but it is not an easy feat. That is why we decided to write this blog series: to share our experience from numerous deep learning projects and demonstrate the way to achieve this goal using open source tools.

Our first post in this series is a tutorial on how to leverage the PyTorch ecosystem and Allegro Trains experiments manager to easily write…
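As a taste of what that looks like, here is a minimal sketch of hooking a training script up to Allegro Trains; the project name, task name, and hyperparameters are placeholders.

from trains import Task

# Initializing a task registers the run with the Trains server and
# automatically captures console output, metrics, and model checkpoints.
task = Task.init(project_name="detection-tutorial", task_name="maskrcnn-training")

# Hyperparameters connected to the task show up (and can be edited) in the web UI.
config = {"lr": 0.005, "batch_size": 4, "max_epochs": 10}
task.connect(config)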


Data Science in the Real World

Quantifying Diminishing Returns of Annotated Data

Written by Gal Hyams, Dan Malowany, Ariel Biller and Gregory Axler
Allegro AI Team

Too Many Cooks in the Kitchen, John Cherry

Deep learning models are notorious for their endless appetite for training data. The process of acquiring high-quality annotated data consumes many types of resources, mostly cash. As a machine learning project progresses, the growing amount of data leads to other undesired consequences, such as slowing down all of R&D. Therefore, veteran project leaders always look at the overall performance gains brought about by each additional increment of their dataset. …
