Importing datasets into webKnossos

Norman Rzepka · Published in WEBKNOSSOS · Feb 21, 2019

This is a tutorial on how to start using a TIFF stack with webKnossos. First, we convert the stack into the WKW format and then import the data into webKnossos.

Convert a TIFF stack with wkcuber

The webknossos-cuber (wkcuber) is a command-line tool that converts several formats into the efficient WKW format. It is a Python 3-based tool. We recommend running it with either Anaconda or Docker. For this tutorial, we will install wkcuber using Miniconda3 on the command line:

# Create a new conda environment with Python 3.7
conda create -n wkcuber python=3.7
conda activate wkcuber
# Install wkcuber
pip install wkcuber
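
To verify that the installation worked, you can print the built-in help; assuming the install succeeded, this lists all available options:

# Show wkcuber's command-line options
python -m wkcuber --help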

After the installation is complete, we can start converting the TIFF data:

# Go to the folder of the TIFF stack
cd /path/to/tiff/data
# Start the conversion
python -m wkcuber \
  --name demo_dataset \
  --layer_name color \
  --scale 11,11,28 \
  --dtype uint8 \
  --max_mag 4 \
  --verbose \
  --jobs 4 \
  . demo_dataset

Your dataset will be stored in a subfolder called demo_dataset and is then ready to be imported into webKnossos. But let’s first recap the arguments we just used. First, we need to give the dataset a name. In this example, there is only one layer, with color as the layer_name. If you have multiple layers, such as different channels or segmentations, you need to run wkcuber for each layer separately. Make sure to pick different layer names and set the same target folder, as shown below. The scale is the size of a voxel in nanometers.
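
For example, a matching segmentation stack could be converted as an additional layer of the same dataset like this (the source path and layer name are placeholders, and the flags mirror the ones used above):

# Convert a segmentation stack into the same target folder as a second layer
python -m wkcuber \
  --name demo_dataset \
  --layer_name segmentation \
  --scale 11,11,28 \
  --max_mag 4 \
  --jobs 4 \
  /path/to/segmentation/data demo_dataset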

Since our source data are 16-bit, we need wkcuber to convert them to 8-bit for use in webKnossos. wkcuber will uniformly squash the bit depth; in this case, all values are divided by 256. If you want a more sophisticated approach that takes the dynamic range into account, you should preprocess your data. As part of our commercial BioVision toolbox, there is a tool for efficiently normalizing, stitching, aligning and registering large datasets. For smaller datasets, you can also use open-source tools such as ImageJ to produce 8-bit TIFFs.
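
As one possible approach, ImageMagick can stretch each image to its full dynamic range and reduce it to 8-bit in a single pass (this overwrites the files, so work on a copy):

# Normalize the dynamic range and convert to 8-bit, in place
mogrify -auto-level -depth 8 *.tif

Note that this normalizes each slice independently, which can cause brightness jumps between sections; for consistent results, normalize the whole stack with the same bounds.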

wkcuber automatically creates downsampled versions of your dataset so that webKnossos can quickly zoom in and out of the data. Since our tutorial dataset is fairly small, downsampling up to mag 4 is sufficient.
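
After the conversion finishes, the dataset folder should look roughly like this, with one subfolder per magnification step (the exact layout may differ slightly between wkcuber versions):

demo_dataset/
  datasource-properties.json
  color/
    1/
    2/
    4/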

The verbose option enables progress output, which is useful when converting larger datasets. The jobs argument specifies the number of CPU cores to use (in this case, 4). Finally, we have to supply the source and target folders. The source TIFF files are expected to be in one folder and have consecutively numbered file names (e.g., demo_001.tif, demo_002.tif). All files in the folder will be scanned and sorted before conversion.
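
A valid source folder for the conversion above could therefore look like this (file names are illustrative):

# One consecutively numbered TIFF file per section
ls /path/to/tiff/data
demo_001.tif
demo_002.tif
demo_003.tif
...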

Load data into webKnossos

If your dataset is fairly small (say, less than a gigabyte), you can zip it and import it into webKnossos via the user interface.
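
For example, assuming the demo_dataset folder from above:

# Create a zip archive of the converted dataset for upload via the web interface
zip -r demo_dataset.zip demo_dataset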

If your dataset is much larger, you should move the files directly to the binaryData folder on the filesystem.

# Move the converted data to webKnossos folder
mv demo_dataset /path/to/webknossos/binaryData/Your_Organization/

After the files are moved, click the dataset refresh button in webKnossos and your new dataset will show up. Now you can start creating annotations and sharing your data with collaborators.

Please contact us if you are interested in uploading large datasets to webknossos.org or if you have any other questions.

Note: wkcuber currently only supports Linux and OS X. Windows support is on the development roadmap.
