Using CornerstoneJS and Orthanc to Support Deep Learning Projects

Francisco Maria Calisto
Published in oppr · Jun 24, 2019

Implementing and improving tools to support research on the automated analysis of medical images. Our projects feature several datasets and tools that underpin our novel Artificial Intelligence methods for Medical Imaging applications.

Source: Gomoll Research + Design of the GE Healthcare User Interface (UI) consistency efforts post.

Introduction

Artificial Intelligence (AI) has the potential to fundamentally alter many application domains [5, 6, 7]. One prominent example is clinical radiology, in the form of radiomics. The literature hypothesizes that Deep Learning Algorithms (DLA) will profoundly affect the clinical workflow [8]. Indeed, in an earlier story, titled “Applying New Paradigms in Human-Computer Interaction to Health Informatics”, we already outlined the new paradigms of Health Informatics (HI) and their AI applications.

In our projects, the ultimate goal is to develop an improved Clinical Decision Support System (CDSS) [9, 10], along with several datasets and tools supporting that goal. To this end, three principal projects were created: (1) MIMBCD-UI; (2) MIDA; and (3) BreastScreening. First, the Medical Imaging Multimodality Breast Cancer Diagnosis User Interface (MIMBCD-UI) project aims to develop several systems (prototypes) as Proofs-of-Concept for breast cancer diagnosis. With these, we improve the final system’s User Interface (UI) and Information Visualization (InfoVis) with novel interaction techniques and visualizations. The second project, which we call Medical Imaging Diagnosis Assistant (MIDA), aims to develop an assistant that helps automate cancer diagnosis. Finally, the last project is called BreastScreening. It aims to apply Deep ConvNet models pre-trained on our datasets to medical image analysis applications, and to use Deep ConvNets to analyze unregistered medical images.

In this story, we describe our projects and the tools we developed to enable several Deep Learning (DL) techniques in the Medical Imaging (MI) domain. We developed a tool that retrieves all the medical images presented in a DICOM viewer and available from a standalone DICOM server. Our DICOM viewer prototype was built with the CornerstoneJS library [3, 4] and is largely based on their CornerstoneDemo, originally developed by Chris Hafey and improved by us. For our projects, we also needed a lightweight, standalone solution to store our medical images; for that, we used Orthanc servers [1, 2], which provide a RESTful API.

Medical Imaging Platforms

In our example, we are using the Orthanc server [1, 2] version for OS X, but you can use other versions; the result will be the same. But first, we will introduce and explain how it works.

How does a DICOM server work?

A DICOM server such as Orthanc exposes a built-in RESTful API that can be used to drive it from external applications, for instance the CornerstoneJS set of tools and the platforms derived from it. Orthanc exposes the DICOM tags of the stored medical images in the JSON file format. Stored DICOM resources are structured by four identifiers: (1) Patient; (2) Study; (3) Series; and (4) Instance.
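As a rough sketch of how this hierarchy is navigated over Orthanc’s RESTful API, the helper below builds the URL listing the children of a given resource (patients have studies, studies have series, series have instances). The base URL and the use of `fetch()` are assumptions for illustration; Orthanc listens on port 8042 by default, so adapt this to your deployment.

```javascript
// Assumed base URL of a local Orthanc instance (default port is 8042).
const ORTHANC = "http://localhost:8042";

// Build the endpoint that lists the children of a resource, following
// Orthanc's Patient -> Study -> Series -> Instance hierarchy.
function childEndpoint(level, id) {
  const children = { patients: "studies", studies: "series", series: "instances" };
  if (!(level in children)) throw new Error(`'${level}' has no child resources`);
  return `${ORTHANC}/${level}/${id}/${children[level]}`;
}

// Example traversal (requires a running Orthanc server):
async function listStudies(patientId) {
  const res = await fetch(childEndpoint("patients", patientId));
  return res.json(); // array of study resources in JSON
}
```

Walking these four levels top-down is how a viewer discovers every instance that belongs to a given patient.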

This diagram shows that a given patient benefits from a set of medical imaging studies. Each study is made from a set of series. Each series is, in turn, a set of instances. [DOI: 10.13140/RG.2.2.19014.32321]

The medical information encoded by a DICOM file is called a dataset and takes the form of an associative array. Each value can itself be a list of datasets, leading to a hierarchical data structure that is much like a JSON file. In DICOM terminology, each key is called a DICOM tag. The list of standard DICOM tags is normalized by an official dictionary. For improved readability, it is also common to refer to these DICOM tags by name (e.g., “PatientName” or “StudyDescription”). The standard associates each DICOM tag with a data type, known as its value representation.
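To make the tag structure concrete, here is a minimal sketch of a dataset encoded in the DICOM JSON model, where each key is an 8-hex-digit tag carrying its value representation (`vr`) and a `Value` list. The sample values are illustrative, not taken from a real study.

```javascript
// Illustrative dataset in the DICOM JSON model (tag -> {vr, Value}).
const dataset = {
  "00100010": { vr: "PN", Value: [{ Alphabetic: "Doe^Jane" }] }, // PatientName
  "0008103E": { vr: "LO", Value: ["Routine screening"] },        // SeriesDescription
};

// Return the first value stored under a tag, or undefined if absent.
function tagValue(ds, tag) {
  const element = ds[tag];
  if (!element || !element.Value) return undefined;
  return element.Value[0];
}
```

Note how person names (`vr: "PN"`) nest a further object, while plain strings sit directly in `Value`: the value representation governs the shape of each entry.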

How does a DICOM viewer work?

As said before, we are using a set of tools from the CornerstoneJS library. From those tools, we created the Cornerstone Prototype, a platform that is central to viewing and generating various sets of data for our Medical Imaging (MI) projects. The data, in the form of datasets, will be fed to our Deep Learning Algorithms (DLA); however, we will not focus on that in this post.

CornerstoneJS is a lightweight JavaScript library for displaying medical images. The library is not restricted to a particular pixel container or transport mechanism and does not define an interaction paradigm. As such, it may be adapted to any image format using an extensible image loader mechanism and can utilize present and future transport protocols and user interaction techniques. This is why we chose this library and created the Cornerstone Prototype as a Proof-of-Concept tool to generate our datasets.
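The extensible loader mechanism can be sketched as follows: a loader receives an Image ID and returns an object whose `promise` resolves to a Cornerstone image object. The tiny 2×2 image and its pixel values below are placeholders, not real data.

```javascript
// Sketch of a custom Cornerstone image loader. Given an imageId, it returns
// { promise } resolving to a minimal Cornerstone image object.
function loadCustomImage(imageId) {
  const pixelData = new Uint16Array([0, 128, 255, 64]); // dummy 2x2 pixels
  const image = {
    imageId,
    rows: 2,
    columns: 2,
    minPixelValue: 0,
    maxPixelValue: 255,
    slope: 1,
    intercept: 0,
    windowCenter: 128,
    windowWidth: 255,
    getPixelData: () => pixelData,
    sizeInBytes: pixelData.byteLength,
  };
  return { promise: Promise.resolve(image) };
}

// With the real library, the loader is bound to a URL scheme, e.g.:
//   cornerstone.registerImageLoader("custom", loadCustomImage);
// after which imageIds like "custom://example/1" resolve through it.
```

Because the loader owns both the transport and the decoding, any image format or protocol can be plugged in without changing the viewer itself.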

Image retrieval from the Orthanc servers to the DICOM viewer. In this figure, the DICOM viewer is supported by both the CornerstoneJS library and the Cornerstone Prototype. Radiologists can then annotate each lesion, providing this data to the Deep Learning Algorithms. [DOI: 10.13140/RG.2.2.27402.93126]

The present Cornerstone Prototype extends the native RESTful API of Orthanc with a reference implementation of the DICOM standard. The client searches for studies (cases) or series (digital slides) resources while providing query parameters to filter DICOM objects based on given attribute values. The server responds with resource representations for each matched object in JSON format according to the DICOM JSON model. The client retrieves image metadata resources from the server for each image (resolution level) belonging to the matched study or series through a request.
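A minimal sketch of such a filtered search, assuming Orthanc’s `/tools/find` endpoint: the client POSTs a query level plus attribute filters and receives the matching resources as DICOM JSON representations. The filter values here are illustrative.

```javascript
// Build a query payload for Orthanc's /tools/find endpoint, which filters
// DICOM resources by attribute values at a given level.
function buildFindQuery(level, filters, expand = true) {
  const levels = ["Patient", "Study", "Series", "Instance"];
  if (!levels.includes(level)) throw new Error(`unknown level: ${level}`);
  // Expand=true asks Orthanc to return full resource representations,
  // not just their identifiers.
  return { Level: level, Query: filters, Expand: expand };
}

// Illustrative study-level search: wildcard name, date range.
const payload = buildFindQuery("Study", {
  PatientName: "DOE*",
  StudyDate: "20190101-20191231",
});
// POSTing JSON.stringify(payload) to <orthanc>/tools/find returns the
// matching studies in the DICOM JSON model.
```

The same payload shape works at any of the four levels, so series or instance searches only change the `Level` field and the filter attributes.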

Sample diagram of the DICOM meta-tag structures essential for retrieving data. Further information can be found here, on our project repository. CornerstoneJS links each tag to display the right image in the viewer. [DOI: 10.13140/RG.2.2.34113.81762]

As we can see in the image above, medical images come with lots of non-pixel metadata, such as the pixel spacing of the image, the Patient ID, or the scan acquisition date. With a DICOM file, this information is stored in the file header and can be read, parsed, and passed around the application. However, it is common to provide metadata independently from the transmission of pixel data from the server to the client, since this can considerably improve performance. To handle these scenarios, our prototype provides infrastructure for the definition and usage of Metadata Providers. Metadata Providers are simply functions that take an Image ID and a specified metadata type and return the metadata itself. This metadata will be important for our lesion annotations; therefore, it will be part of the information consumed by the Deep Learning Algorithms (DLA).
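A Metadata Provider can be sketched as below. The lookup table, Image IDs, and field names are hypothetical; in cornerstone-core the provider receives the metadata type first, and returning `undefined` tells the library to fall through to the next registered provider.

```javascript
// Illustrative metadata store keyed by Image ID (values are made up).
const metadataByImageId = {
  "example://mammo/1": {
    imagePlaneModule: { pixelSpacing: [0.07, 0.07] }, // millimetres
    patientModule: { patientId: "ANON-0001" },
  },
};

// A Metadata Provider: given a metadata type and an Image ID, return the
// requested metadata, or undefined when this provider has none.
function metaDataProvider(type, imageId) {
  const entry = metadataByImageId[imageId];
  return entry ? entry[type] : undefined;
}

// With the real library, the provider would be registered as:
//   cornerstone.metaData.addProvider(metaDataProvider);
```

Serving pixel spacing this way is what lets annotation tools report lesion measurements in millimetres rather than raw pixels, even when the metadata arrives separately from the pixel data.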

Image Annotations

A key distinguishing feature of our projects is their support for standardized formats for image annotations. As mentioned before, the image metadata also includes information about the image, such as the name of the imaging procedure and how or when the image was acquired. Our system supports controlled terminologies, enabling semantic interoperability. In the use case of cancer lesion annotation, the value of our projects lies in recording lesion identifiers, anatomic locations of lesions, and lesion and study types. This semantic information is critical for automating the generation of tabular summaries of lesions. It also enables automated comparison of response assessment in patients according to different imaging biomarkers.

The viewer and annotation window. Images are displayed in the web viewer and the Radiologist records image annotations using drawing tools and an annotation window. [DOI: 10.13140/RG.2.2.13981.15846]

As the Radiologist makes annotations on images in the viewer, the viewer creates JSON files. All annotations are stored in several JSON datasets, which are accessible via functions from our various tools. These datasets are then queried to track lesions and summarize changes in the cancer treatment response.
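A small sketch of how such a JSON dataset might be queried to track one lesion across studies. The schema (`lesionId`, `location`, `diameterMm`, `studyDate`) is hypothetical; our actual datasets may use different field names.

```javascript
// Illustrative annotation records produced by the viewer (values made up).
const annotations = [
  { lesionId: "L1", location: "left breast", diameterMm: 12.4, studyDate: "2019-01-10" },
  { lesionId: "L1", location: "left breast", diameterMm: 9.8, studyDate: "2019-04-15" },
  { lesionId: "L2", location: "right breast", diameterMm: 5.1, studyDate: "2019-01-10" },
];

// Track one lesion over time and report its change in diameter, or null
// when fewer than two measurements exist.
function lesionTrend(records, lesionId) {
  const series = records
    .filter((r) => r.lesionId === lesionId)
    .sort((a, b) => a.studyDate.localeCompare(b.studyDate));
  if (series.length < 2) return null;
  const first = series[0].diameterMm;
  const last = series[series.length - 1].diameterMm;
  return { lesionId, from: first, to: last, deltaMm: +(last - first).toFixed(1) };
}
```

A shrinking `deltaMm` across studies is exactly the kind of treatment-response signal the tabular summaries are meant to surface.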

Discussion

Many research studies that require viewing and annotating radiology images for making measurements of lesions or extracting radiomics features [11] from them could benefit from using our Deep Learning Algorithms (DLA) and techniques. The use of the CornerstoneJS library and Orthanc servers across our Cornerstone Prototype and projects facilitates the collection of annotations and measurements targeting lesions in compliance with imaging standards.

From the acquired annotations, the feature values stored in the JSON files can be used in applications such as training a DenseNet model [12] or other kinds of Machine Learning models. At this stage, we just need: (i) medical images to train the models and for image segmentation; and (ii) lesion annotations provided by Radiologists (i.e., Human-in-the-Loop) so that the model can learn. By computing a variety of image biomarkers on cohorts of patients, we can accumulate a substantial amount of data. This data can enable studies comparing the effectiveness of different imaging biomarkers as indicators of cancer treatment response. In the end, we aim to enhance clinicians’ workflow and improve diagnosis, thus providing patients with better healthcare.

Conclusions

For our projects (i.e., MIMBCD-UI, MIDA, and BreastScreening), we are trying to implement and improve a novel Clinical Decision Support System (CDSS), drawing on several datasets and tools. Those datasets and tools are implemented to reflect our requirements and clinicians’ needs regarding the main goal. We also used external tools, such as the CornerstoneJS library and Orthanc servers, to bootstrap these projects, so that we can generate our datasets and provide them to Deep ConvNet [14] models for medical image analysis applications.

In the next post, titled “Medical Imaging Downloader for CornerstoneJS and Orthanc”, we will describe our implementation of a medical imaging downloader for both the CornerstoneJS and Orthanc platforms. This tool retrieves the DICOM images stored on an Orthanc server and provides them to a CornerstoneJS-based platform. Such a tool was important during the entire project; therefore, we want to share it with the whole community. For instance, we used this set of tools for the work published in conference proceedings such as the “Towards Touch-Based Medical Image Diagnosis Annotation” paper [13] at ISS’17.

Acknowledgments

This post is supported by the case studies of the MIMBCD-UI, MIDA, and BreastScreening projects at IST, ULisboa. The three projects are strongly sponsored by FCT, a Portuguese public agency that promotes science, technology, and innovation in all scientific domains. The BreastScreening project is an ARC Discovery Project (DP140102794) in collaboration with IST, UAdelaide, and UQueensland. The genesis of this post was research work between ISR-Lisboa and ITI, both associated laboratories of LARSyS. From these institutions, I would like to convey special thanks to Professor Jacinto C. Nascimento and Professor Nuno Nunes for advising me during my research work. I would also like to thank several important people of this noble organization called oppr: a special thanks to Gustavo Passos de Gouveia, Bruno Oliveira, João Campos, and Bruno Dias for reviewing this article and giving me great input. Last but not least, a special thanks to Chris Hafey, the driving force behind CornerstoneJS, who also developed the cornerstoneDemo, and to the three supporters of the CornerstoneJS library, Aloïs Dreyfus, Danny Brown, and Erik Ziegler. Erik Ziegler in particular helped with several issues along the way. Finally, a great thank you to the entire Orthanc project team, especially Sébastien Jodogne.

Supporters

Our organization is a non-profit. However, we have many expenses across our activities, from infrastructure to services, so we need funding, as well as help, to support our team and projects. To cover the expenses, we created several channels. First, you can support us by becoming one of our patrons on Patreon. Second, you can support us on our Open Collective page. Third, you can buy us a coffee (or more). Fourth, you can also support us on our Liberapay page. Last but not least, you can support us directly via PayPal. On the other hand, we also need help developing our projects. If you have the relevant skills, we welcome you to contribute: just follow our channels and repositories.

References

[1] Jodogne, S., Bernard, C., Devillers, M., Lenaerts, E. and Coucke, P., 2013, April. Orthanc-A lightweight, restful DICOM server for healthcare and medical research. In 2013 IEEE 10th International Symposium on Biomedical Imaging (pp. 190–193). IEEE.

[2] Jodogne, S., 2018. The Orthanc Ecosystem for Medical Imaging. Journal of digital imaging, 31(3), pp.341–352.

[3] Hostetter, J., Khanna, N. and Mandell, J.C., 2018. Integration of a zero-footprint cloud-based picture archiving and communication system with customizable forms for radiology research and education. Academic Radiology, 25(6), pp.811–818.

[4] Sedghi, A., Hamidi, S., Mehrtash, A., Ziegler, E., Tempany, C., Pieper, S., Kapur, T. and Mousavi, P., 2019, March. Tesseract-medical imaging: an open-source browser-based platform for artificial intelligence deployment in medical imaging. In Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling (Vol. 10951, p. 109511R). International Society for Optics and Photonics.

[5] Calisto, F.M., Lencastre, H., Nunes, N.J. and Nascimento, J.C., Medical Imaging Diagnosis Assistant: AI-Assisted Radiomics Framework User Validation.

[6] Calisto, F.M., Lencastre, H., Nunes, N.J. and Nascimento, J.C., Medical Imaging Diagnosis Assistant: AI-Assisted Radiomics Framework User Validation.

[7] Calisto, F.M., Miraldo, P., Nunes, N. and Nascimento, J.C., BreastScreening: A Multimodality Diagnostic Assistant.

[8] Santiago, C., Nascimento, J.C. and Marques, J.S., 2013, July. Performance evaluation of point matching algorithms for left ventricle motion analysis in MRI. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 4398–4401). IEEE.

[9] Calisto, F.M., Lencastre, H., Nunes, N.J. and Nascimento, J.C., BreastScreening: Towards Breast Cancer Clinical Decision Support Systems.

[10] Cai, C.J., Reif, E., Hegde, N., Hipp, J., Kim, B., Smilkov, D., Wattenberg, M., Viegas, F., Corrado, G.S., Stumpe, M.C. and Terry, M., 2019, April. Human-centered tools for coping with imperfect algorithms during medical decision-making. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 4). ACM.

[11] Lambin, P., Rios-Velazquez, E., Leijenaar, R., Carvalho, S., Van Stiphout, R.G., Granton, P., Zegers, C.M., Gillies, R., Boellard, R., Dekker, A. and Aerts, H.J., 2012. Radiomics: extracting more information from medical images using advanced feature analysis. European journal of cancer, 48(4), pp.441–446.

[12] Ting, D.S., Liu, Y., Burlina, P., Xu, X., Bressler, N.M. and Wong, T.Y., 2018. AI for medical imaging goes deep. Nature medicine, 24(5), p.539.

[13] Calisto, F.M., Ferreira, A., Nascimento, J.C. and Gonçalves, D., 2017, October. Towards Touch-Based Medical Image Diagnosis Annotation. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces (pp. 390–395). ACM.

[14] Liao, Z. and Carneiro, G., 2017. A deep convolutional neural network module that promotes competition of multiple-size filters. Pattern Recognition, 71, pp.94–105.

