GenSynth Simplifies Parallel Performance Calculations

DarwinAI · 4 min read · Jan 22, 2021

A Case Study

Internally, we test GenSynth on a wide variety of models and datasets to ensure it generalizes to the custom models designed by our customers. Our team has logged thousands of hours of regression testing and example building. We’ve trained and tested numerous models on many of the world’s most popular datasets, including MNIST, ImageNet, VOC, COCO, KITTI 2D/3D, CityScapes, CamVid, Google Speech Commands, and more.

The Problem

Although some networks have an “accuracy” (or similar measure of performance) output, we more often need to write Python code to measure the quality of predictions, especially for models performing more complex tasks such as object detection and image segmentation. For object detection, for example, we might need to perform a fairly complex sorting and matching of bounding boxes to determine the model’s mean average precision (mAP).

These complex operations are very difficult to express with drop-downs and checkboxes, so GenSynth provides a custom metrics interface that gives users the flexibility to create their own tailored performance-testing strategies directly in Python. These user-created “Performance Metric Entities” can be leveraged directly within GenSynth to conduct comprehensive performance tests fit for the task at hand.

Our first attempt at this custom metric interface was not only more complicated for users than it should have been, but also did not support parallelism to speed up the tests. Our goal, therefore, was to figure out how best to simplify and speed up high-volume performance testing in GenSynth.

The Solution

We set two main goals for our new custom metric interface:

  1. It had to be simple and easy to understand so that users can easily define their custom performance metrics.
  2. It had to be parallelizable across multiple worker machines to speed up performance testing.

Our new custom metric interface is simple to use: the user provides a Python class with four methods, each with a fixed signature:

  1. The constructor __init__(self, folder_name).
  2. An update(self, data, tensor_values) method.
  3. A get_worker_results(self) method.
  4. A reduce_all_worker_results(self, worker_results_list) method.

One instance of this class will be created on each worker whenever a validation or test epoch is started.

The update method is called once for each batch of data. It is passed the values of the configured output tensors and may also access any fields of the validation dataset. The user writes this method to evaluate one batch at a time and to aggregate summary statistics within the instance of the class (e.g., self.stat).

The get_worker_results() method has a simple purpose: to return the aggregate results computed by the current worker. These results are automatically sent to the main worker.

The reduce_all_worker_results method is called at the end of the epoch, only on the main worker, and receives a list of the items returned by get_worker_results() on every worker. For example, if you have 4 workers, the main worker’s reduce method receives a 4-item list, each item coming from a different worker. You write this method to further aggregate the per-worker aggregates, compute the desired metrics, and return them in a dictionary.
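To make the template concrete, here is a minimal sketch of a Performance Metric Entity that computes top-1 classification accuracy. The four method names and arguments follow the template described above; the dataset field (“label”) and output tensor name (“predictions”) are illustrative assumptions and would be replaced to match your own model and data configuration.

```python
import numpy as np

class AccuracyMetric:
    """Minimal sketch of a Performance Metric Entity (field and tensor
    names below are illustrative assumptions, not GenSynth defaults)."""

    def __init__(self, folder_name):
        # Called once per worker at the start of a validation/test epoch.
        self.folder_name = folder_name
        self.correct = 0
        self.total = 0

    def update(self, data, tensor_values):
        # Called for each batch: compare the configured output tensor
        # against ground-truth labels from the validation dataset.
        labels = np.asarray(data["label"])                              # assumed dataset field
        predictions = np.argmax(tensor_values["predictions"], axis=-1)  # assumed tensor name
        self.correct += int(np.sum(predictions == labels))
        self.total += len(labels)

    def get_worker_results(self):
        # Return this worker's partial sums; they are sent to the main worker.
        return {"correct": self.correct, "total": self.total}

    def reduce_all_worker_results(self, worker_results_list):
        # Called once, on the main worker, with one entry per worker.
        correct = sum(r["correct"] for r in worker_results_list)
        total = sum(r["total"] for r in worker_results_list)
        return {"accuracy": correct / max(total, 1)}
```

With four workers, for example, worker_results_list would contain four such dictionaries of partial sums, and the reduce step folds them into a single accuracy value.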

You edit your Python module in a window of our web-based platform, where Python auto-complete is available, and you can quickly check your code with the Validate button.

If you want the full details, including a simple example, please see the section in our Data Preparation Guide.

The Result

Performance Metric Entities are quick and easy to create by following this four-method template. The inputs and outputs of each method are consistent, so the user can focus solely on writing the code that computes their desired metrics. Along with this simplicity, performance testing can now be greatly accelerated through the parallelism enabled by our custom performance interfaces.

After a Performance Metric Entity is defined and validated, you can use it in a job, attaching it to the model and data.

Several complex examples can be found in tutorial resource packages.

GenSynth-Ready Data, Models, and Metrics

We provide some add-on packages of fully-functional, GenSynth-ready examples (both image datasets and popular models) for our users. Use them directly or modify them for your own data and applications.

Interested in trying GenSynth on your deep learning project? Connect with us.

DarwinAI, the explainable AI company, enables enterprises to build AI they can trust. DarwinAI’s solutions have been leveraged in a variety of enterprise contexts, including advanced manufacturing and industrial automation. Within healthcare, DarwinAI’s technology led to the development of COVID-Net, an open-source system to diagnose COVID-19 via chest x-rays.

To learn more, visit darwinai.com or follow @DarwinAI on Twitter.

If you liked this blog post, click the 👏 below so other people will see this on Medium. For more insights from our team, follow our publication and @DarwinAI. (Plus, subscribe to our letters if you’d like to hear from us!)
