Build your own TOMAAT inference service

TOMAAT.Cloud
Jun 12, 2018


In the previous posts we have provided an overview of the goals and motivations behind TOMAAT and a tutorial on how to access a service through our 3D Slicer client. In this post, we would love to dig into something more technical and show you how to build your first service using the current version of our package. We will try to keep this post updated should our APIs change over time.

The deployment of most deep learning algorithms can be summarized as a combination of three steps:

  • Pre-processing: where we read data and prepare it to be processed
  • Inference: where we make the prediction using a framework
  • Post-processing: where we use and refine the results of inference

Of course, more complex pipelines can be built where these steps are repeated and interleaved in an arbitrary fashion. Interactive methods may require a different, more complex pipeline.

In addition to this, there must be an interface, built around this pipeline, that makes it accessible from the outside world. We do this through HTTP 1.1.

These components (and more) are readily available after a simple pip install tomaat.

Creating an App

In our paradigm we define an “App” as an object that realizes the pipeline mentioned above. An App has a __call__ method which is used to process data.

For example we can write:

from tomaat.server import TomaatApp

my_app = TomaatApp(
    preprocess_fun=pre_processing,
    inference_fun=inference,
    postprocess_fun=post_processing
)

Here pre_processing, inference and post_processing are callable objects, that is, Python functions or classes implementing a __call__ method. We find that this allows enough flexibility at this point in time. Of course my_app is simply a callable that runs on data contained in a dictionary. The whole my_app object can be swapped for a user-defined callable if needed! In this sense TomaatApp exists purely for user convenience, to encourage people to structure their algorithms as a chain of pre_processing, inference and post_processing.

It is worth stressing that a TomaatApp, or any object used in its place, needs to have a __call__ method that operates on data. The signature of this method is simply def __call__(data, **kwargs). Data, for us, is always a dictionary. Dictionaries in Python are very flexible and we believe there is value in doing things this way. For example, pre- and post-processing methods can be callables that always share the same signature: they take a data dictionary as input, manipulate it, potentially add new fields, and finally return it. The same goes for the inference function, which can be made very generic (TOMAAT provides generic prediction objects for TensorFlow and PyTorch, for example) by reasoning in terms of data dictionaries and input/output field names.

When the app finishes processing, it is expected to return a dictionary of results. The developer is free to choose how to structure this dictionary.
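To make this concrete, here is a minimal sketch of what the three callables might look like and how they chain into an app. Only the TomaatApp wiring comes from the snippet above; the dictionary keys ('images', 'segmentation') and the dummy thresholding that stands in for a real model are illustrative assumptions.

import numpy as np

from tomaat.server import TomaatApp


def pre_processing(data):
    # Normalize each input volume to zero mean and unit variance.
    # The 'images' key is an assumption and must match the input interface.
    data['images'] = [(img - img.mean()) / (img.std() + 1e-8) for img in data['images']]
    return data


def inference(data):
    # Stand-in for a real network forward pass: a simple threshold.
    data['segmentation'] = [(img > 0.0).astype(np.uint8) for img in data['images']]
    return data


def post_processing(data):
    # Refine or repackage the prediction; here it is passed through unchanged.
    return data


my_app = TomaatApp(
    preprocess_fun=pre_processing,
    inference_fun=inference,
    postprocess_fun=post_processing
)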

Creating a Service

Now we can make our app available by creating a “Service” from it. A service exposes the TOMAAT interface which, as explained in a previous post, provides two endpoints respectively used for prediction and interface discovery.

Every app needs different kinds of data to run. A simple method might need only one medical image, but more complex approaches might require several images, thresholds, switches, fiducial points and even passwords to run! It is the job of the “Service” to tell clients about the needs of the app. Creating a completely custom, and potentially very complex, interface in TOMAAT is extremely simple. We allow users to define their interfaces using standard elements. These elements, as of today, are:

  • Image: {'type': 'volume', 'destination': 'dict_field'}
  • Threshold: {'type': 'slider', 'destination': 'threshold' ...}
  • Checkbox: {'type': 'checkbox', 'destination': 'dict_field', ...}
  • Option: {'type': 'radiobutton', 'destination': 'dict_field', ...}
  • Fiducials: {'type': 'fiducials', 'destination': 'dict_field', ...}

The developer who wants to have TOMAAT expose a personalized interface for their service needs only to specify it as a list of these standard elements, as shown in the readme on GitHub.

As previously stated, our apps are called on a data dictionary. The 'destination' field of these interface elements is used to populate that data dictionary. When we specify that we want, for example, an object of type 'volume' with 'destination': 'image', we can expect to find our image volume in the field 'image' of the data dictionary.
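As an illustration, an input interface for a service that needs one volume and a threshold could be declared roughly as follows. The destination names 'images' and 'threshold' are arbitrary choices for this example, and the extra keys elided with '...' in the list above (labels, ranges, defaults) are omitted; the readme on GitHub documents the full set of keys for each element type.

# Sketch of an input interface: each element tells the client which widget to
# render and into which field of the data dictionary to put the result.
input_interface = [
    {'type': 'volume', 'destination': 'images'},
    {'type': 'slider', 'destination': 'threshold'},  # further keys elided, see readme
]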

As for the output of the prediction, after post-processing by the app, the developer needs to define what the algorithm is supposed to return to the client. This, too, is done using standard elements such as:

  • Label volume: {'type': 'LabelVolume', 'field': 'images'}

These need to be organized in a list that can contain any combination of elements. More info can be found in the readme on GitHub.

Here too, the 'field' key has a very specific function: it tells TOMAAT where to look, in the dictionary obtained from the app as a result of inference, for the data to return to the client.
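Continuing the example, an output interface that returns a single label volume could look like the following. The 'segmentation' field name is again an assumption; it must match a key that is actually present in the dictionary returned by the app.

# Sketch of an output interface: 'field' names the key of the result dictionary
# whose content is sent back to the client as a label volume.
output_interface = [
    {'type': 'LabelVolume', 'field': 'segmentation'},
]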

At this point we can create our service:

from tomaat.server import TomaatService

my_service = TomaatService(
    config=config,
    app=my_app,
    input_interface=input_interface,
    output_interface=output_interface
)

which can be run through my_service.run().

Note that you need a config dictionary to run the service. A self-explanatory example of this dictionary is given in the readme on GitHub. Example services can be found here.
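For completeness, a hypothetical config sketch is shown below. None of these key names are guaranteed by this post; the readme on GitHub remains the authoritative reference, and every entry here (descriptive metadata, port, announcement flag, API key) is an illustrative assumption.

# Hypothetical config sketch; all keys are assumptions, check the readme.
config = {
    'name': 'example segmentation service',  # assumed: human-readable service name
    'modality': 'MRI',                       # assumed: descriptive metadata
    'anatomy': 'prostate',                   # assumed: descriptive metadata
    'port': 9000,                            # assumed: port the HTTP server listens on
    'announce': False,                       # assumed: whether to announce to the public service list
    'api_key': '',                           # assumed: API key used when announcing
}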

In this post we have shown how to create a prediction service that can be deployed and made available over a network connection using TOMAAT. To make services public and add them to our global service list, you will need to obtain an API key. This process is not automatic yet, but you can find more info here.
