Volume Annotation in webKnossos

Norman Rzepka · Published in WEBKNOSSOS · Feb 7, 2019

webKnossos features advanced 3D voxel painting functionality, which is typically used to generate ground truth data for Machine Learning-based segmentations of neurons or organelles.

This is the third post in a series of webKnossos-related posts. The series started with this introductory post about webKnossos and a deep dive into its fast skeleton annotation features. This post covers the features for volume reconstruction in 3D datasets.

When analyzing the neurons in a particular piece of tissue, scientists can gain more insights from volume reconstructions than from skeletons alone. Morphological analyses and automated synapse detection are typically based on volume reconstructions. In large-scale Connectomics, it is not feasible to generate these reconstructions manually for complete datasets, because each neuron would need to be accurately painted on every section. To scale volume reconstruction, Machine Learning approaches that automatically segment the neurons have become widely used.

However, these approaches still require manually annotated training data on a small subset of the dataset. With webKnossos, this ground truth data can be generated in a scalable and collaborative manner. Proofreading and visualization features facilitate the iterative development of Machine Learning models and help to evaluate the results of automated segmentations.
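As a rough illustration of what such an evaluation can look like outside of webKnossos, the sketch below computes a per-object intersection-over-union score between a ground truth volume and an automated segmentation. The array-based interface and the IoU criterion are assumptions chosen for this example, not part of webKnossos itself.

```python
import numpy as np

def per_object_iou(ground_truth: np.ndarray, prediction: np.ndarray) -> dict:
    """Best IoU of each ground-truth object against all predicted segments.
    Both inputs are integer label volumes of the same shape; 0 is background.
    Illustrative sketch only."""
    scores = {}
    for gt_id in np.unique(ground_truth):
        if gt_id == 0:
            continue
        gt_mask = ground_truth == gt_id
        best = 0.0
        # Only predicted segments that overlap this object can score > 0.
        for pred_id in np.unique(prediction[gt_mask]):
            if pred_id == 0:
                continue
            pred_mask = prediction == pred_id
            intersection = np.logical_and(gt_mask, pred_mask).sum()
            union = np.logical_or(gt_mask, pred_mask).sum()
            best = max(best, intersection / union)
        scores[int(gt_id)] = best
    return scores
```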

Brushing and tracing

Inspired by classic computer drawing applications, webKnossos supports brushing to create volume annotations. Users draw with either a mouse or a pen tablet, and the brush size can be adjusted quickly with the mouse wheel or pen buttons. When the user draws a closed shape in one stroke, the enclosed area is filled automatically. Similarly, the trace tool draws a precise, auto-closing contour that is filled upon release. To facilitate annotation in 3D, users can duplicate their current annotation to the next section and apply corrections where needed. In practice, this speeds up annotation considerably.
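To give an idea of what filling a closed stroke and duplicating a section amount to conceptually, here is a minimal sketch using numpy and scipy. The function names and the array-based label volume are assumptions for illustration, not webKnossos internals.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def fill_closed_stroke(stroke: np.ndarray, labels: np.ndarray,
                       segment_id: int, z: int) -> None:
    """Fill the interior of a closed 2D stroke and paint it into
    section z of a 3D label volume (illustrative only)."""
    filled = binary_fill_holes(stroke)      # close the drawn contour
    labels[:, :, z][filled] = segment_id    # write the segment ID

def copy_section_to_next(labels: np.ndarray, z: int) -> None:
    """Duplicate the annotation of section z onto section z + 1,
    mimicking the 'copy to next section' shortcut."""
    labels[:, :, z + 1] = labels[:, :, z]
```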

Using the task management features of webKnossos, volume annotation work within bounding boxes can be distributed to annotators, monitored and collected after completion. Manually painting voxels is still tedious, but feasible for small bounding boxes.
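As a sketch of how such bounding boxes could be prepared for distribution, the snippet below splits a large region into task-sized chunks. The box representation (top-left corner plus size in voxels) and the chunk size are assumptions chosen for this example, not a webKnossos API.

```python
from itertools import product

def split_bounding_box(topleft, size, chunk=(512, 512, 64)):
    """Split a large bounding box (topleft + size, in voxels) into
    smaller task-sized boxes that can be handed to annotators.
    Boxes at the border are clipped to the bounding box extent."""
    tasks = []
    for ox, oy, oz in product(range(0, size[0], chunk[0]),
                              range(0, size[1], chunk[1]),
                              range(0, size[2], chunk[2])):
        offset = (topleft[0] + ox, topleft[1] + oy, topleft[2] + oz)
        extent = (min(chunk[0], size[0] - ox),
                  min(chunk[1], size[1] - oy),
                  min(chunk[2], size[2] - oz))
        tasks.append((offset, extent))
    return tasks
```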

Brush, trace and copy tools

Merge over-segmentations

Contemporary Machine Learning segmentation approaches yield over-segmentations, meaning that individual neurons are split into several fragments. Our experience has shown that these split errors are easier to correct than potential merge errors. To resolve them, webKnossos provides a merge tool: skeletons are used to connect fragments into complete segments, which makes removing split errors efficient. Again, this work can be distributed across many annotators.
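Conceptually, the merge step amounts to computing connected components over a fragment graph whose edges come from the skeletons, and then relabeling the volume. The sketch below shows one way to do this offline with a small union-find; the data structures are assumptions for illustration, not the webKnossos implementation.

```python
import numpy as np

def merge_fragments(segmentation: np.ndarray, merge_edges) -> np.ndarray:
    """Agglomerate an over-segmentation: each edge (a, b) marks two
    fragment IDs that a skeleton connected, i.e. the same neuron.
    Returns a relabeled copy of the volume (illustrative only)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in merge_edges:
        union(a, b)

    # Map every fragment ID in the volume to its merged representative.
    mapping = {frag: find(frag) for frag in np.unique(segmentation)}
    return np.vectorize(mapping.get)(segmentation)
```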

Merge tool

View beautiful 3D isosurfaces

After successfully painting all neurons manually or with the help of automated segmentation, webKnossos turns into a handy viewer that visualizes 3D isosurface renderings of the processes. webKnossos supports on-demand rendering of entire reconstructed neurons as well as visualization of precomputed meshes.
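As an offline approximation of this idea, a mesh for a single segment can be extracted with the marching cubes implementation from scikit-image. The voxel size below is only an example value and not tied to any particular dataset or to the webKnossos rendering pipeline.

```python
import numpy as np
from skimage.measure import marching_cubes

def isosurface_for_segment(segmentation: np.ndarray, segment_id: int,
                           voxel_size=(11.24, 11.24, 28.0)):
    """Extract a triangle mesh for one segment via marching cubes.
    voxel_size (in nanometers) scales the mesh to physical space."""
    mask = (segmentation == segment_id).astype(np.float32)
    verts, faces, normals, _ = marching_cubes(mask, level=0.5,
                                              spacing=voxel_size)
    return verts, faces, normals
```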

On-demand isosurface rendering

If you want to experiment with the volume annotation features, you can sign up for an account on demo.webknossos.org right now and start annotating some of the published datasets.

In one of the next posts, we will describe the process of generating automated reconstructions from raw data in more detail. Please get in touch if you have acquired a large dataset and want to do automated reconstructions.
