HoloDoc: Enabling Mixed Reality Workspaces that Harness Physical and Digital Content

Zhen Li
Published in ACM CHI · Apr 25, 2019

This article summarizes a paper by Zhen Li, Michelle Annett, Ken Hinckley, Karan Singh, and Daniel Wigdor. This paper will be presented at CHI 2019, Glasgow, on May 6th at 14:00–15:20, Room Alsh 1, in the Mixed Reality session.

A 30 sec preview of this work, subtitles available on YouTube

Motivation

Fusing the benefits of being physical and digital

Physical documents have many positive attributes, including:

  • They allow people to cross-reference faster and have a higher error-detection rate than reading on a screen [Takano et al., 2015];
  • Physical paper enables easier document revisitation, annotation and note-taking [Pearson et al., 2013];
  • People print out papers from various digital sources to read and annotate them and also to keep them in one, easily accessible place [Bondarenko and Janssen, 2005];
  • Laying out pages in space is important to get an overall sense of the structure of a document because laying out paper documents is more flexible and dynamic [O’Hara and Sellen, 1997].

On the other hand, digital documents also provide unique functionalities:

  • Multimedia content, such as animated figures, can be created to augment the paper reading experience [Grossman et al., 2015];
  • People find it easier to enter or edit information on a digital device rather than on the paper [Tashman and Edwards, 2011];
  • Digital content can be archived or shared conveniently [Guimbretière, 2003];
  • Searching on a computer is usually faster than searching through printed documents [Dillon, 1992].

HoloDoc: A Mixed Reality System

We propose combining the benefits of being physical and digital at the same time. The goals of our system, HoloDoc, are to:

(1) Provide visual feedback to people without being limited by the physical dimensions of devices, or by the specific area of the desk under a camera or projector;

(2) Enable mixed reality workspaces in which a head-mounted display helps users leverage spatial memory in 3D space to retrieve content when working with digital content alongside physical documents;

(3) Support people in using digital pens and gestures in conjunction with paper.

The HoloDoc Architecture

An illustration of the HoloDoc architecture

As for the system architecture, we used a Neo Smartpen N2 to track the user’s pen strokes, and a Microsoft HoloLens (1st gen) to compute and present the mixed reality (MR) content. The HoloLensARToolKit library was used to track the real-time pose of the tagged documents (~15 fps), and Microsoft Azure APIs provided several features, including OCR, handwriting recognition, and academic search.
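As a rough sketch of how a pen tap could be routed to digital content, consider the hit-testing step: the smartpen reports the page id and the (x, y) position of each stroke, and the system checks which tagged region of the page the tap falls inside. The `Region` structure and the region names below are hypothetical illustrations of this idea, not HoloDoc's actual implementation (which runs on the HoloLens):

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A rectangular area of a printed page linked to digital content."""
    name: str   # hypothetical labels, e.g. "title" or "citation:3"
    x: float    # left edge, in page units
    y: float    # top edge
    w: float    # width
    h: float    # height

def hit_test(regions, x, y):
    """Return the first region containing the tap point, or None."""
    for r in regions:
        if r.x <= x <= r.x + r.w and r.y <= y <= r.y + r.h:
            return r
    return None

# Made-up layout for one printed page.
page_regions = [
    Region("title", 50, 40, 500, 60),
    Region("citation:3", 80, 700, 30, 14),
]

tap = hit_test(page_regions, 90, 705)  # tap on the citation mark
```

Once a region is resolved, the system can open the pie menu appropriate to that content type (title, citation, figure, and so on).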

HoloDoc Interactions

In general, the user can tap the pen on the paper and then navigate the associated digital content in mixed reality. This combination was designed to allow the user to directly manipulate the digital content, while also reducing the number of ink traces left on the paper.

For example, we can tap the pen on the title, and then select from the various pie-menu options using hand gestures, as shown in this video clip:

An Example of HoloDoc Interactions

Following this design scheme, as well as the reality-virtuality continuum [Milgram and Kishino, 1994], we proposed three novel interaction techniques in this work: Interactive Paper, Interactive Sticky Notes, and Interactive Whiteboard.

Interactive Paper

While physical paper better supports reading activities thanks to its flexibility and tangibility, it cannot be easily indexed and cannot convey rich media.

To overcome this, with Interactive Paper the user can tap their pen on the page and navigate the associated digital content in mixed reality. To enable this, each PDF file is augmented with an ARToolKit tag in the top-left corner and Ncode dot patterns in the background; these documents can then be printed with a regular office printer.

For example, when the user is interested in one of the references, instead of flipping to the end of the document, they can simply tap the citation number on the paper. The corresponding metadata then appears in the mixed reality view (see the following video).
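Conceptually, the citation-tap feature is a lookup from the tapped citation number to the parsed reference list. The helper names and the two-entry reference dictionary below are illustrative assumptions (the actual system also pulls metadata via Azure's academic search):

```python
import re

def tapped_citation_id(ocr_text):
    """Extract the citation number from the OCR'd text under the tap, e.g. "[2]"."""
    m = re.search(r"\[(\d+)\]", ocr_text)
    return int(m.group(1)) if m else None

def find_reference(references, citation_id):
    """Map a citation number to its reference entry (None if absent)."""
    return references.get(citation_id)

# Hypothetical reference list, parsed from the document's bibliography.
references = {
    1: "Milgram and Kishino, 1994. A Taxonomy of Mixed Reality Visual Displays.",
    2: "O'Hara and Sellen, 1997. A Comparison of Reading Paper and On-Line Documents.",
}

entry = find_reference(references, tapped_citation_id("as shown in [2]"))
```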

Other options are available through the pie menu, including glancing at the first page as a preview, or opening the entire document and moving it around with hand gestures.

Interactive Sticky Notes

Other form factors, such as sticky notes, can also be augmented by mixed reality content. Here we present Interactive Sticky Notes, with which users can perform online searches or local searches using handwriting recognition.

In the first demo, the user writes down the keywords “mixed reality” and taps the “Online Search” button to start searching. Note that they can retrieve the result anytime using this physical tag, which becomes a physical proxy of the virtual window.

In the following demo, we show how to perform a local search with a similar workflow. When the user writes down the keyword “reading” and taps “Local Search”, all local documents containing this keyword are highlighted. The user can easily tell which document is most relevant based on the size of the visual indicator and the text showing how many times the keyword appears in each document.
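The ranking behind those visual indicators can be sketched as a per-document occurrence count over the recognized text. The document names and contents below are made up for illustration; the real system operates on OCR'd text from the tracked documents:

```python
import re

def local_search(documents, keyword):
    """Count case-insensitive whole-word occurrences of `keyword` in each
    document and rank the matching documents, most relevant first."""
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
    counts = {name: len(pattern.findall(text))
              for name, text in documents.items()}
    # Only documents that actually contain the keyword get an indicator;
    # the count drives the indicator's size.
    return sorted(((n, c) for n, c in counts.items() if c > 0),
                  key=lambda item: item[1], reverse=True)

docs = {
    "holodoc.pdf": "Reading on paper... reading is easier... active reading",
    "arkit.pdf": "Tracking fiducial markers in mixed reality",
}
results = local_search(docs, "reading")
```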

Interactive Whiteboard

The last interaction technique is the Interactive Whiteboard, which provides a large space for a user to organize their documents.

To achieve this in HoloDoc, the user can place a special tag on the wall and tap on it to create an empty whiteboard space. Then, when they find something interesting, they can crop the target paragraph or figure from the paper document to transfer the content into the digital world.

This physical tag, again, works as a proxy for the virtual whiteboard. The user can tap on it to store the whiteboard, bring the tag with them to a new place, and tap on it again to retrieve the whiteboard with all of its cropped content.

Summary

In this work, we conducted a document analytic design study to probe how users employ physical and digital documents when both are available. The study revealed interaction patterns when fusing these resources into a tightly coupled workflow that harnessed the benefits of both mediums.

Inspired by the takeaways from this formative study, we created HoloDoc, a new mixed reality system for academic reading tasks that leverages the benefits of digital technologies while preserving the advantages of physical documents. HoloDoc contains several features that span the “reality–virtuality continuum”: augmenting regular physical artifacts with dynamic functions, attaching virtual elements to physical proxies, and managing virtual content transferred from the physical world.

An evaluation of HoloDoc demonstrated that users who were unfamiliar with mixed reality devices could easily understand HoloDoc’s interactions and were able to reflect on their personal experiences to suggest improvements. Design implications and challenges to be addressed in future work were also discussed.

Details of these studies, and more interaction techniques can be found in our paper. Please come to my talk and let’s chat :)

Full citation, PDF link, and a longer video for this work:

Zhen Li, Michelle Annett, Ken Hinckley, Karan Singh, and Daniel Wigdor. 2019. HoloDoc: Enabling Mixed Reality Workspaces that Harness Physical and Digital Content. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA. 14 pages. https://doi.org/10.1145/3290605.3300917

[PDF]

Full video of HoloDoc (5 min version), subtitles available on YouTube

Acknowledgements

We thank Dr. Seongkook Heo and Mingming Fan for providing thoughtful feedback, and the members of DGP lab for their valuable suggestions.
