Applying New Paradigms in Human-Computer Interaction to Health Informatics
Data acquisition, processing infrastructure, and predictive capabilities are becoming cheaper and faster, even for small, low-resource organizations. As a result, researchers are turning at an ever-increasing rate to Machine Learning (ML) and Data Science (DS) to build better Human-Computer Interaction (HCI) in Health Informatics (HI). In this blog post, we give an overview of how researchers are leveraging data, technologies, and (cheap, fast) infrastructure to improve the User eXperience (UX) and the quality of decision making in HI, and of the new HCI paradigms setting the narrative for future research.
This article explores the new paradigms that HCI, as a discipline, brings to the development of User Interfaces (UIs) for HI, the current and future challenges for researchers, and the main trends, along with their innovations and developments. These UIs establish a direct link between the availability of clinical information and the accessibility of medical processes. The need for workflow optimization is therefore changing the way we obtain information, supporting the clinical decision-making process. With the emergence of new datasets and communication technologies, new opportunities have been created for the design and development of UIs through the application of HCI. As an interdisciplinary field, HCI has theoretical roots in several subjects that also support healthcare, including computing, ergonomics, cognitive science, and psychology. Applying HCI to the clinical domain allows less costly solutions, but it still presents theoretical and practical challenges. This opportunity has had an impact on society, changing the clinician's user profile and introducing new interaction techniques. As suggested above, HI will also have an impact on HCI by opening new opportunities for applying it.
Healthcare solutions are giving HCI ample space to extend its field of study, and there are many ways of applying both standard and non-standard techniques to these new paradigms. For instance, one of my research papers [1], on Medical Imaging via touch-based surfaces, is significant and novel, yet it still operates with well-established techniques: touch-based and mobile-based device surfaces. The healthcare infrastructure is now ready to support such systems by feeding them with its information. We are also starting to see these data sources grow together, allowing Machine Learning (ML) techniques to be applied. These techniques point to a trend in which the output of the system is a recommendation for the clinician: through ML, the machine can flag the most severe cases for prioritization or even give pathology predictions to clinicians. This behavior raises specific UI issues. For example, in that work [1] we considered how to visualize patients' DICOM images and how to let clinicians act on such data using touch devices, which requires a deeper understanding of the Design Thinking behind the whole system. Much of this medical information is gathered imperceptibly and integrated by different operations across the overall healthcare infrastructure.
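To ground the visualization step mentioned above, here is a minimal sketch of loading and rendering a DICOM image in Python with the pydicom and matplotlib packages. The file path "scan.dcm" is a hypothetical placeholder, and this is not the pipeline from [1], only an illustration of the kind of data involved.

```python
# Minimal sketch: parse a DICOM file and display its pixel data.
# "scan.dcm" is a hypothetical local file; requires pydicom + matplotlib.
import pydicom
import matplotlib.pyplot as plt

ds = pydicom.dcmread("scan.dcm")          # parse the DICOM dataset
print(ds.PatientID, ds.Modality)          # standard DICOM attributes
plt.imshow(ds.pixel_array, cmap="gray")   # render the image plane
plt.axis("off")
plt.show()
```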
Health-system UIs are a vital part of ubiquitous computing that benefits from HCI. Creating such UIs is difficult in the clinical domain because of the medical image processing [2] and the dynamic healthcare knowledge required to build classifiers. Interactive Machine Learning (iML) [3] can allow clinicians to view, classify, and train machines by correcting their classifications.
Algorithms that can interact with agents and can optimize their learning behaviour through these interactions, where the agents can also be humans.
The quote above, by Andreas Holzinger [4], defines iML. But first, we need to state the goal of Machine Learning (ML): developing algorithms whose predictive ability improves over time through learning. Most researchers focus their efforts on Automatic Machine Learning (aML), which has a significant advantage: an automatic approach benefits from big data with many training sets. The same property is a disadvantage in the health domain, where we often face a small number of events, or few available experts to generate those datasets, so approaches like aML suffer from insufficient training samples. Here, the author offers a solution: iML is a viable approach for HI, having its roots in Active Learning (AL) [7] and Reinforcement Learning (RL) [8].
A more powerful learning process, one that can achieve higher accuracy with less information, can be obtained by combining AL with RL. A system built this way may pose queries to be annotated by a radiologist. AL lends itself to our purpose because, while non-annotated medical images are relatively easy to obtain, annotated ones are scarce: radiologists' annotations are expensive and time-consuming. The authors [4, 9] strongly recommend applying ML within HI, a procedure that requires a concerted effort by researchers from different areas. Combining human cognition with iML approaches can therefore be of particular interest for solving problems in HI, although there is a substantial lack of datasets. A sketch of this query-annotate loop follows below.
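The sketch below illustrates the pool-based active learning idea just described: the model repeatedly asks for labels on the samples it is least sure about. The radiologist is simulated here by an oracle array, and the features are synthetic; this is a minimal illustration with scikit-learn, not a medical imaging pipeline.

```python
# Sketch of pool-based active learning with uncertainty sampling.
# The "radiologist" is simulated by an oracle label array.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 16))              # unlabeled image features
oracle = (X_pool[:, 0] > 0).astype(int)          # stands in for the radiologist

labeled = list(rng.choice(len(X_pool), 10, replace=False))
model = LogisticRegression()

for _ in range(20):                              # 20 annotation rounds
    model.fit(X_pool[labeled], oracle[labeled])
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)            # closest to 0.5 = least sure
    candidates = [i for i in np.argsort(uncertainty) if i not in labeled]
    labeled.append(candidates[0])                # "ask" for one annotation

print(f"accuracy after {len(labeled)} labels:", model.score(X_pool, oracle))
```

The point of the uncertainty criterion is that each expensive annotation is spent where the model expects to learn the most, which is exactly why AL suits domains where labels are costly.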
By using a Human-In-The-Loop (HITL) approach, several of these difficulties can be mitigated, combining human expertise with computer efficiency. Studies indicate that humans make significantly better decisions [10, 11] when they can lean on explanations extracted by these techniques, so the work done by these authors is essential to us and serves as an introduction to the HITL topic. For instance, clinicians are great at looking at complex medical images and spotting lesions, while a machine first needs to learn what a lesion is. By having clinicians teach the machine to see various lesions across many patients, we build a robust dataset of labeled (annotated) images (human intelligence) that improves the machine's algorithms quickly and accurately (machine intelligence).
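A minimal sketch of that HITL correction cycle follows: the model proposes labels, the clinician corrects the wrong ones, and the model retrains on the corrected set. The helper `clinician_fix` is a hypothetical stand-in for a real annotation interface, and the data are synthetic.

```python
# Sketch of a human-in-the-loop correction cycle: propose, correct, retrain.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))                 # lesion feature vectors
y_true = (X[:, :2].sum(axis=1) > 0).astype(int)

def clinician_fix(indices):
    """Hypothetical stand-in for the expert: returns the correct labels."""
    return y_true[indices]

# Bootstrap from a small annotated set, then iterate review sessions.
idx = list(range(30))
model = RandomForestClassifier(random_state=1).fit(X[idx], y_true[idx])

for _ in range(5):
    preds = model.predict(X)
    wrong = np.flatnonzero(preds != y_true)   # cases the clinician would flag
    if wrong.size == 0:
        break
    batch = wrong[:20]                        # clinician reviews one batch
    idx = sorted(set(idx) | set(batch.tolist()))
    model.fit(X[idx], clinician_fix(np.array(idx)))

print("accuracy after corrections:", model.score(X, y_true))
```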
The research domain is extraordinarily complex when the focus is on HI. For that reason, it is essential to do qualitative work, not only quantitative, taking the clinical environment into account by studying clinical teams and patients, observing them and, eventually, becoming one of them (or at least trying). It is a User-Centered, participatory process [5]. This approach demands interviews and User Research (UR) through artifacts [6] such as video, design workshops, and focus groups. It complements the quantitative analysis with answers about how clinicians work and how patients are cared for. Mixing the qualitative and quantitative approaches feeds our research with stronger data from our users (clinicians or patients) in a sensitive domain.
Assessing the clinical domain, we find multidisciplinary, hierarchical, role-based coordination: surgeons, emergency physicians, nurses, technicians, and so on. Many different specialists depend on the information. The hierarchy is fundamental for task delegation, as other people execute the tasks, and specific positions carry distributed roles that allow a division of labor across the health infrastructure, bridging coordination work around the information. For instance, several clinical departments have a lead physician who coordinates care and assigns tasks, using Information Systems (IS) to support communication. Typically, nurses also set up equipment and report states to an IS, and a scribe then documents the clinical events, recording the workflow. The health infrastructure thus becomes a huge lab in which HCI researchers can understand their users. Through observation, we have opportunities to bring technological innovation and enough space for improvements: for improving their interactions, and for enhancing HI supported by HCI.
Acknowledgments
This article is supported by the case studies of the MIMBCD-UI and MIDA projects. The two projects are strongly sponsored by FCT, a Portuguese public agency that promotes science, technology, and innovation across all scientific domains. The genesis of this article was research work between ISR-Lisboa and INESC-ID, both associated laboratories of IST at ULisboa, with the collaboration of M-ITI, which, paired with ISR-Lisboa, forms two research laboratories of LARSyS. From these institutions, I would like to convey special thanks to Professor Jacinto Nascimento and Professor Daniel Gonçalves for advising me during my research work. Special thanks also to Professor Alfredo Ferreira for co-authoring several research papers with me, and to Professor Joaquim Jorge, who was one of the precursors of my research career. Last but not least, I would like to thank several important people of this noble organization called oppr. A special thanks to João Campos, Bruno Dias, and Rodrigo Lourenço for reviewing this article and giving me great input.
Supporters
Ours is a non-profit organization; nevertheless, we have many expenses across our activity, from infrastructure to services, so we need funds, as well as help, to support our team and projects. To cover the expenses, we created several channels. First of all, you can support us by becoming one of our Patreons. Second, you can support us on our Open Collective page. Third, you can buy one coffee (or more) for us. Fourth, you can also support us on our Liberapay page. Last but not least, you can support us directly on PayPal. On the other hand, we also need help developing our projects, so if you have the expertise, we welcome you to support them. Just follow our channels and repositories.
References
[1] Calisto, Francisco M., et al. “Towards Touch-Based Medical Image Diagnosis Annotation.” Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces. ACM, 2017.
[2] McInerney, Tim, and Demetri Terzopoulos. “Deformable models in medical image analysis: a survey.” Medical image analysis 1.2 (1996): 91–108.
[3] Fails, Jerry Alan, and Dan R. Olsen Jr. “Interactive machine learning.” Proceedings of the 8th international conference on Intelligent user interfaces. ACM, 2003.
[4] Holzinger, Andreas. "Interactive machine learning for health informatics: when do we need the human-in-the-loop?" Brain Informatics 3.2 (2016): 119–131.
[5] Brunner, Julian, et al. “User-centered design to improve clinical decision support in primary care.” International journal of medical informatics 104 (2017): 56–64.
[6] Gray, Kathleen, and Cecily Gilbert. “Digital Health Research Methods and Tools: Suggestions and Selected Resources for Researchers.” Advances in Biomedical Informatics. Springer, Cham, 2018. 5–34.
[7] Settles, Burr. “From theories to queries: Active learning in practice.” Active Learning and Experimental Design workshop In conjunction with AISTATS 2010. 2011.
[8] Busoniu, Lucian, Robert Babuska, and Bart De Schutter. “A comprehensive survey of multiagent reinforcement learning.” IEEE Trans. Systems, Man, and Cybernetics, Part C 38.2 (2008): 156–172.
[9] Holzinger, Andreas, et al. “Towards interactive Machine Learning (iML): applying ant colony algorithms to solve the traveling salesman problem with the human-in-the-loop approach.” International Conference on Availability, Reliability, and Security. Springer, Cham, 2016.
[10] Schirner, Gunar, et al. “The future of human-in-the-loop cyber-physical systems.” Computer 46.1 (2013): 36–45.
[11] Oliveira, Eugénio. “Beneficial AI: the next battlefield.” Journal of Innovation Management 5.4 (2018): 6–17.