E-Studio Project II Process Notes

Enriching the narrative experience in Phipps Conservatory

Jenny L
20 min read · Nov 19, 2019
Class notes Nov. 19th

Some key reminders from Daphne and Peter:

  • Do consider the overall storyline, but mainly focus on one single stop in the whole process! (consider: what value does AR add to the experience?)
  • Create an immersive experience with immersive tools. After Effects (AE) is a good way to make a final presentation video, but not good for prototyping.
  • Have a balanced amount of physical and digital interactions. (consider direct and indirect interactions) (interaction→pattern→experience)

Who visits Phipps?

On the official Facebook account, one major type of post combines interesting botanical knowledge with beautiful photos of plants. Other posts are about events or the current special exhibit at Phipps. Compared to the former, many of these target younger children and teenagers (and their parents). The videos and photos show how welcoming the environment is, how kids can enjoy the exhibits safely with their parents, and how they can learn from the exhibits.

← Events targeting younger audiences → Botanical knowledge and photography targeting older audiences

I also looked at what visitors to Phipps post on social media. Visitors who went there with family members seem to have had good experiences. The place commonly gets the comment 'something for everyone.' This suggests I should enhance the personalized experience (or encourage communication between visitors in a group). Overall, the age and occupation range of visitors is quite wide. However, their goals are mostly narrow: adult visitors generally seek a return-to-nature experience, while younger audiences expect a more fun and playful one. There might be unintentional educational takeaways depending on an individual's personal interests: he or she might pay attention to, and hence remember, a certain pattern on a leaf, a type of soil, or some botanical knowledge…

↑ People’s reviews regarding family experience

Personas

The three fictional personas I developed for the project ↑

1st Visit to Phipps

Phipps is divided into various sections. I decided my story arc would only cover the Cuba section of the conservatory (a permanent exhibit at Phipps). The experience will be a stand-alone (self-guided) journey; however, multiple visitors are encouraged to communicate throughout the AR journey. There are a couple of existing activities in this section, many of which sit in small kiosks, which give affordances (provide enough space) for interactions in AR. Here is the kiosk I would like to further develop:

(Cuba section) Hut kiosk in Phipps — multiple activities included
some more photos (interior and exterior of the hut)…

The hut sits high in the room, which gives viewers a nice view of the space. The kiosk is quite big: it can comfortably fit 5–6 people. The activities are mostly about plant/animal protection research done in Cuba, rather than about individual kinds of plants. So I feel it is fair to argue the hut is quite central and summative to the Cuba section of the conservatory. Based on my personal experience and observation, I identified a few pain points in the way the activities are currently structured:

  • Bad lighting conditions make text hard to read (especially denser passages).
  • Some items are glued to the desk (e.g. jars visitors are supposed to smell). This gives viewers of different heights distinct user experiences (I had to bend over multiple times to complete the activities).
  • Plants grow into the hut: should I touch them or not? (Occlusion happens: plants block the text, so I had to move them in order to continue reading.)
  • The hut area is quite noisy because of the artificial waterfalls around it, yet there are also recordings/videos playing in the hut. I had a hard time focusing on the audio.

1st Draft(s)

Initial 2D & 3D Storyboards

part of my initial 2D storyboards: the process of pointing at + zooming a plant's model
initial iteration 1: pointing at a plant + zoom idea
initial iteration 2: lifecycle slider idea
initial iteration 3: Map & physical pins idea

Feedback from Daphne & Peter

  • Pointing at a plant + recognition would be hard to achieve, especially in the Cuba exhibition area, since the plants there overlap a lot. Plant recognition would be more practical in the orchid room, since 1. flowers are easier for headsets to distinguish and 2. there is less overlap there. However, the orchid room doesn't have existing free space for such an interaction.
  • Don't make the experience too digital. Currently, the only way viewers gain information is by looking at things, mainly digital ones. I will look for ways to engage more human senses in my design (smell, hearing, touch…).
  • Think about: what's the core value that AR/MR adds to the Phipps experience? How does it relate to the different personas?
Class notes Nov 21st

2nd draft

I tried to make the interaction less digital. In order to give the interaction a more tangible side, I've decided to add some things users can touch at the map table. But what exactly users should touch is a question. Since these markers are 'locators' in nature (they mark physical locations), I'm thinking about push-pin shapes or top-heavy forms that are comfortable to hold (so viewers are invited to pick them up and play with them). However, the downside of this approach is that all leaves on the map would be represented by locators of the same shape. This is not favorable, since it doesn't effectively use another human sense to add to the Phipps experience.

more 2D storyboards

After a few rounds of brainstorming, I came up with the idea of using the leaves that naturally fall from trees as the physical locators. Visitors can feel the texture of the leaf; this experience of touching makes the MR tour richer. The leaves will be attached to a stick/needle/wire, which helps them stand up vertically on the map table. When viewers pick the leaves up at the map table, a plant model will appear over the viewer's hand. The model in mixed reality will overlap with the physical leaf; I assume there will be some occlusion issues.

Moving forward, I will test different transparency levels by photoshopping interfaces into photos I took in Phipps, to see if the overlapping is an issue. And I'll check the flow of the experience by making physical storyboards.

Physical model & physical storyboard

Some process photos of making the physical model ↑

In order to make the interaction less digital, I started to build the hut physically out of wood and foam. Seeing a scaled-down version of the physical environment helped me make the interaction more tangible. As the video below shows, the interaction now utilizes human senses other than vision (the leaves users pick up from the table are real leaves that fell from trees in Phipps, so they can feel the texture of the plant).

photos of the physical model
physical storyboard- the whole hut experience for a new visitor

Photoshopping over photos of Phipps

I tried prototyping by photoshopping MR interfaces into photos I took in Phipps, and I turned some of them into GIFs to show the interactions. Most hut-specific interactions are world-centered, meaning these windows/images/holograms only appear when viewers do something in the physical environment. The view-centered functions are the ones visitors use most frequently during a tour; these icons/windows stay in place no matter where users look or what they do.
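To make the distinction concrete, here is a minimal RealityKit sketch of the two anchoring modes (hypothetical, since my design targets future MR glasses rather than a phone): a world-centered panel pinned to a horizontal surface, and a view-centered icon that follows the camera.

```swift
import RealityKit
import UIKit

let arView = ARView(frame: .zero)

// World-centered: the panel is anchored to a physical feature (here, any
// horizontal plane), so it stays put while the viewer walks around it.
let worldAnchor = AnchorEntity(plane: .horizontal)
let tutorialPanel = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.25))
worldAnchor.addChild(tutorialPanel)
arView.scene.addAnchor(worldAnchor)

// View-centered: the icon is anchored to the camera itself, so it sits at a
// fixed spot in the field of vision no matter where the viewer looks.
let cameraAnchor = AnchorEntity(.camera)
let cameraIcon = ModelEntity(mesh: .generateSphere(radius: 0.01))
cameraIcon.position = [0.12, -0.08, -0.5] // lower-right corner, 0.5 m ahead
cameraAnchor.addChild(cameraIcon)
arView.scene.addAnchor(cameraAnchor)
```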

the slider interaction
The interface of the tutorial

I placed all view-centered icons in corners. The pin and map functions sit in the upper-right corner; the lower-right corner is dedicated to the camera function. The lower-left corner has a live-chat function, which users unfamiliar with AR can use to call Phipps employees for help with their MR glasses. These are all the view-centered interfaces in my design. Regarding world-centered interfaces, the picture above shows the first of the six tutorial steps that new visitors will see when they enter the hut. There will be lines in MR highlighting all the physical leaves on the map table, telling viewers these are the items they can pick up and play with.

Zooming the plant model in and out in MR

Notice that the MR model is now attached to the physical leaf model. This occlusion prevents users from seeing and feeling the physical leaf. My current solution is to let visitors use the pin function (the view-centered icon located at the upper-right corner).

As the image below shows, the user is now using the pin function to view the model in MR. The visitor in the image is holding the physical leaf model in her hands, and she has dragged the pin icon onto the hologram of the MR plant model. This way the model stays fixed in her field of vision even after she has returned the real leaf to the map table. However, I feel this is an indirect solution to the issue. I have considered making the interface more transparent, but then the details on the model would be harder to see. And if the interfaces were more oblique, safety would become a bigger concern: as environmental designers we should be making responsible designs, so I would avoid factors that could lead to tripping or falling.
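In the same hypothetical RealityKit terms as the sketch above, 'pinning' a world-centered hologram amounts to reparenting it onto the camera anchor while preserving its current pose, so it follows the gaze from then on:

```swift
// Dragging the pin icon onto a hologram would trigger something like this:
// the entity keeps its current pose (no visual jump), but from now on it
// moves with the camera anchor, i.e. it stays in the field of vision.
func pin(_ hologram: Entity, to cameraAnchor: AnchorEntity) {
    hologram.setParent(cameraAnchor, preservingWorldTransform: true)
}
```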

3rd person perspective — if we could see what a Phipps viewer is seeing

Many functions, for instance zooming in and out, involve gesture control. Users who have previously used VR or AR will be pretty comfortable with these gestures; they'll use these functions frequently. However, people who are not familiar with advanced technologies may remember the gestures at the beginning of the experience, but towards the end of the tour it is very likely they have already forgotten some of them. This means people lose access to some of the functions the MR glasses are capable of. Considering this, I added a pop-up window at the very bottom of the field of vision (image below) to remind people of the gesture shortcuts. Tech-savvy people can keep this window closed; people who are not familiar with MR can reveal it whenever they can't remember a certain gesture.
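Behaviour-wise, the reminder is just a view-centered panel that can be hidden and revealed. A minimal sketch, reusing the hypothetical cameraAnchor from the earlier snippet:

```swift
// The gesture cheatsheet sits at the very bottom of the field of vision.
let gestureCheatsheet = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.06))
gestureCheatsheet.position = [0, -0.15, -0.5] // bottom-center, 0.5 m ahead
cameraAnchor.addChild(gestureCheatsheet)

// The close/reveal control simply toggles the panel's visibility.
func toggleCheatsheet() {
    gestureCheatsheet.isEnabled.toggle()
}
```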

pop-up window: the gesture controls
map location vs physical location locator

The image above illustrates the two location indicators: one located on the physical map table (the small yellow arrow), the other sitting in the wider physical conservatory space (the whole tree is highlighted). The small arrow indicates the location on the map where viewers should return the leaf model, and the bigger highlighted plant invites viewers to find the plant in Phipps (it makes them move through the physical space).

Some issues I identified by making the physical model and the Photoshop prototypes:

  • Too many interfaces floating around + the dense plants in the Cuba section of Phipps might make the experience quite dizzying. Information would be floating everywhere, and viewers wouldn't know where to look. Considering the environment this interface is going into, I will definitely simplify it.
  • The occlusion issue is quite severe, especially when the plant model overlaps with the real leaf. The photoshopped images showed that if the model overlaps the leaf completely, it is impossible for users to see the real leaf. As a result, users cannot form a visual connection with the texture their hands are feeling when they touch the leaf.
  • Related to the occlusion issue, when users zoom the model in and out (so the scale of the real leaf and the model isn't 1:1), the overlap becomes quite confusing (the leaf model and the real leaf partially but not completely overlap).
  • Regarding the position locator, I tried different solutions in the video prototype and the Photoshop prototype. I used an arrow to indicate the position in the video, but I think it will attract too much attention when people try to look at other content. The transparent yellow tree highlight in the photos is slightly better. An issue with the transparent highlight is that it might occupy too much of the field of vision, depending on where the plant is relative to the viewer. If it is really far away, the highlight will be so small that it's hard to see in the first place; if it is really big, users will be distracted and unable to use any other functions until they close this interface. (A quick way to test transparency levels in code is sketched after this list.)
  • The slider interaction feels disconnected from the rest of the experience. On the tangible side, Cuba's map suggests the activity is about physical space, and the later part of the interaction, which asks users to move around Phipps, is again related to space. However, the slider part is about the plants' lifecycle. So although it is quite complete right now, I'm considering removing it from the experience.
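Beyond Photoshop, one quick way to compare transparency levels would be to vary the alpha of the highlight material live in RealityKit. A sketch, where treeHighlight is a hypothetical entity overlaid on the real tree:

```swift
import RealityKit
import UIKit

// A hypothetical highlight mesh overlaid on the real tree; sweeping the
// alpha value lets me compare transparency levels live instead of in PS.
let treeHighlight = ModelEntity(mesh: .generateSphere(radius: 0.5))

func setHighlightOpacity(_ alpha: CGFloat) {
    let material = SimpleMaterial(
        color: UIColor.systemYellow.withAlphaComponent(alpha), // translucent tint
        isMetallic: false
    )
    treeHighlight.model?.materials = [material]
}

setHighlightOpacity(0.35) // visible, but lets the real plant show through
```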

Digital media self-reflection

Question: As the prevalence of digital media in our physical environments increases daily, what is the role and/or responsibility of designers in shaping our environments?

One special feature that makes environment design different from other design fields is its 'particularity.' The ideal design solution is different for each and every situation, depending on the goal and point of view, and sometimes there isn't one best solution. However, digital media tends to reproduce very quickly: people re-posting on social media or reusing images from the web are only a few examples of how digital media can become over-abundant in a digital environment. When media are moved from their original environment/context and replanted into another, their meaning shifts and their originality decreases, resulting in design solutions that only partially tackle a problem. Environmental designers should avoid over-referencing and strive for originality. Good designs arise from in-depth study and research into the issues, rather than from simply referencing existing environments.

Environment design is a field of study where digital media can be embedded in all aspects of our lives in multiple ways. In a video Daphne showed the class in Thursday's lab, we see digital media on coffee mugs, table surfaces, classroom walls… basically all surfaces around us, whether curved or flat. The information displayed on an object is either relevant to that object (for example, the coffee mug shows the temperature of the water it's holding rather than the room temperature, since the former is more relevant) or can be transferred to other surfaces (portable information). When the information flow is this complicated and the amount of information this big, it is environment designers' responsibility to organize it into the right layers, groups, and locations so that people can access it effortlessly. For instance, in AR, if we don't adjust the proximity of a text label to the object it tracks, users will be confused about which tag is attached to which physical object. This not only makes the application less usable but could even cause fatal issues, which all designs want to avoid.

3rd draft

Developing the interaction…

Sketching out the whole experience to check the disclosure sequence

I now see how extremely important the disclosure sequence is. Since in mixed reality users see the interfaces floating in the physical environment, the interfaces can't be overly complicated, and it also matters that multiple interfaces don't appear together at once. If information is exposed to the audience in the wrong sequence, the clarity of communication suffers, and a user who is not familiar with advanced technology might have an even worse experience. Sketching out the experience in first-person perspective, step by step, helped me figure out the bugs in my thinking.

Pin wanted windows, or trash unwanted windows? In the last iteration, I had a view-centered Pin icon at the top-right corner. The icon lets users keep any world-centered digital content on screen after they move away from the location that triggered that specific content. However, after I tested the solution with a couple of my peers, I found the method less intuitive than I had hoped. Now I'm taking the reverse approach: letting users throw away the interfaces they no longer need. I will replace the Pin icon with a Trash icon. This function works similarly to the Trash on laptops: when users no longer need a certain window on their MR glasses' screen, they can drag it or use some gesture to abandon it, removing the interface from their field of vision.
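A minimal sketch of the trash behaviour, assuming a tap stands in for the future drag-to-trash gesture and that each window entity carries a collision shape so it can be hit-tested:

```swift
import RealityKit
import UIKit

final class TrashGestureHandler: NSObject {
    let arView: ARView

    init(arView: ARView) {
        self.arView = arView
        super.init()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        arView.addGestureRecognizer(tap)
    }

    // Tap stands in for the drag-to-trash gesture: find the window entity
    // under the touch point and drop it from the scene. (Window entities
    // need a CollisionComponent for entity(at:) to find them.)
    @objc private func handleTap(_ sender: UITapGestureRecognizer) {
        let point = sender.location(in: arView)
        arView.entity(at: point)?.removeFromParent()
    }
}
```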

The core value of MR in my design? The core aim is to tie the research portion of the Cuba exhibition more closely to the rest of the exhibit. Without this mixed-reality interaction, people cannot match the research discussed in the hut to the plants they just saw in Phipps, and visitors cannot appreciate the results these plant-preservation actions have accomplished. After they have touched a leaf fallen from one of these rare Cuban plants, seen its location on Cuba's map, identified it in the physical space, and played with a detailed model of it, viewers have formed a special relationship with the plant. This means they are better able to appreciate the research being done on it. Visitors will realize that without these conservation actions, they might no longer see some of these plants in Phipps. In this regard, MR deepens the relationship between visitors and the information presented at Phipps, resulting in a more memorable experience.

Another Physical storyboard

Physical storyboard 2.0 — first half of the experience in AE

The video above shows all the interfaces from a third-person point of view. The benefit of such a perspective is that it clearly shows a subject's movement through space (the path). However, some view-centered content might be hard to showcase. In the final sketch video, I'm going to combine first-person and third-person points of view. Before shooting the video footage, I'll do a quick storyboard so I know from which angles I should shoot.

Feedback from Daphne & some issues I identified

comparing HoloLens HoloStudio UI & my design solution
  1. Location of the core window: since I am used to laying out screen content, I naturally put the buttons in corners (the four corners are easy to access and click with a mouse or hands). However, when it comes to MR glasses, putting content in the corners means putting it in the peripheral vision. Human biology has determined that we can't see clearly in peripheral vision, so the icons located in the four corners might be extremely blurry, hence inaccessible to users.
    I compared my design to HoloStudio's user interface and immediately noticed that the window that is activated/requires user input almost always sits in the center of the vision, where human eyes see most clearly. I will adjust my user interface according to existing MR products.
  2. Consider putting the current view-centered functions into a hamburger menu. (For instance, when people don't need help, the 'ask for help' button occupies lots of space.)
  3. What do viewers learn from each stop? Make sure all of them sound meaningful and convincing.
  4. How do viewers trigger world-centered content? It shall not just suddenly pop up (walking normally → bumping into a wall of text!). Try to figure out the details so that the design gives a comfortable user experience. (A rough distance-based trigger is sketched below.)
  5. From Peter: a useful resource is Microsoft Mixed Reality. Basing our designs on existing technologies and habits users already have will make our design solutions more convincing.
Some information I found useful in improving my gesture controls and placing my content
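One plausible answer to the triggering question (point 4 above) is a distance-based reveal: world-centered content stays hidden until the viewer walks within a comfortable radius. A rough RealityKit sketch, where hutPanel is a hypothetical world-anchored window and 1.5 m is an assumed comfort threshold:

```swift
import RealityKit
import Combine
import UIKit
import simd

let arView = ARView(frame: .zero)

// A hypothetical world-anchored window, hidden until the viewer approaches.
let worldAnchor = AnchorEntity(plane: .horizontal)
let hutPanel = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.25))
hutPanel.isEnabled = false
worldAnchor.addChild(hutPanel)
arView.scene.addAnchor(worldAnchor)

let triggerDistance: Float = 1.5 // metres; an assumed comfort threshold
var updateSubscription: Cancellable?

// Every frame, compare the camera position with the panel's world position
// and show the panel only while the viewer is close to it.
updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
    let cameraPosition = arView.cameraTransform.translation
    let panelPosition = hutPanel.position(relativeTo: nil)
    hutPanel.isEnabled = simd_distance(cameraPosition, panelPosition) < triggerDistance
}
```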

Trying Reality Composer

screen recording with my phone ↑: a successful trial

In Reality Composer, I tried to make the scene where users pick up a leaf from the map table and the model & name of the plant pop up in AR. Notice I didn't remove the black background of the leaf. This is because Reality Composer seemed to have a hard time reading the leaf image with a clear background (due to the leaf's organic shape). When I added a black background so the image had an artificial, regular rectangular shape, it finally worked.

← a failed trial: Reality Composer fails to recognize the organic shape // → tropical leaf shapes: Reality Composer might fail to read + track them
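For reference, the equivalent setup in RealityKit code is an image anchor. A sketch, where "CubaLeaves" and "palmLeaf" are hypothetical names for an AR Resource Group and reference image in the app's asset catalog; as the trials above suggest, the reference photo tracks far more reliably as a filled rectangle than as a cut-out organic shape:

```swift
import RealityKit
import UIKit

let arView = ARView(frame: .zero)

// The plant model appears once the camera recognizes the leaf photo.
// "CubaLeaves"/"palmLeaf" are hypothetical asset-catalog names; the
// reference image's physical size must match the printed card for
// stable tracking.
let leafAnchor = AnchorEntity(.image(group: "CubaLeaves", name: "palmLeaf"))
let plantModel = ModelEntity(mesh: .generateSphere(radius: 0.05)) // placeholder
leafAnchor.addChild(plantModel)
arView.scene.addAnchor(leafAnchor)
```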

Considering that the tropical leaves in the Cuba exhibit come in various irregular shapes, if I continued making my sketch video in Reality Composer, I would encounter more issues like this. In order to get a decent sketch video to demonstrate my interactions, I decided to use After Effects.

However, working in the medium itself taught me a few key things to consider when making the video in AE:

  • Bounding boxes aren't needed around text if it has popping colors and a readable size. As the image below shows, text without a white background reduces the occlusion issue and no longer feels like an additional threshold between users and the real leaves.
← no background + bounding box in Reality Composer vs. → white background + bounding box in PS
  • The appearance of digital content (e.g. 2D text slates, hologram models) shouldn't be too sudden, otherwise users will be startled (due to lack of preparation) and have an unpleasant experience. Adding a delay and making the content fade in slowly makes the experience smoother. I would also try to pair the appearance of digital content with a sound cue to inform users that something is about to appear on their AR glasses' screens. (See the sketch after the image below.)
adjusting delay and animation duration in Reality Composer to achieve a comfortable experience
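Reality Composer exposes the delay and ease-in through its Behaviors panel; in code, the same comfort trick might look like the sketch below (a scale-up stands in for the fade-in, since per-entity opacity animation wasn't readily scriptable in RealityKit at the time):

```swift
import RealityKit
import Foundation

// Gentle entrance: keep the model collapsed, wait a moment, then animate
// it up to full size instead of popping it in instantly.
func reveal(_ model: Entity, delay: TimeInterval = 0.5) {
    model.scale = .zero
    DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
        var target = model.transform
        target.scale = SIMD3<Float>(repeating: 1)
        model.move(to: target, relativeTo: model.parent,
                   duration: 0.8, timingFunction: .easeInOut)
        // A short sound cue could be triggered here as well, preparing
        // users for the content appearing on their glasses.
    }
}
```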

Shooting + making the sketch video

rough storyboards for the sketch video (yellow = third person, light blue = first person)

There are quite a few transitions between third-person and first-person perspectives. Making these transitions smooth will be quite challenging.

Since I'll have to add floating screens into the videos in AE, the footage shouldn't be shaky, so I'll use a tripod to film myself. Sunday mornings are the only time tripods are allowed in Phipps, so I'll try to shoot all the footage I planned in my storyboard on Sunday.

the initial draft of the sketch video

Feedback from Peter:

  • For the capture function, if the technology is 10 years in the future, taking a 360° shot makes more sense, and the mono-perspective photo taken in the current iteration can be the thumbnail of the 360° shot. → Add a scene where the user shares the scene with friends in VR.
  • The signs at pathways: should they be billboarded or flat on the ground? Overall, the floating text should add to the physical content, not become a barrier/threshold between users and the objects (e.g. in football games and swimming competitions, certain information is not displayed on a vertical screen). (A billboarding sketch follows after this list.)
  • The bounding boxes for the text & model: adding the boxes = adding another plane, which makes the situation more confusing for users. Try to remove them if the text can remain readable without the boxes.
  • How are the interactions achieved? How can I make them more realistic? Have I skipped any steps in the interactions? To figure out these questions, I'll probably have to spend more time with the medium itself; in other words, I'll spend more time in HoloLens before going back to AE.
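For the billboard question above, here is a rough sketch of how a pathway sign could keep facing the viewer, using the same hypothetical RealityKit setup as the earlier snippets:

```swift
import RealityKit
import Combine
import UIKit

let arView = ARView(frame: .zero)
let signEntity = ModelEntity(mesh: .generatePlane(width: 0.5, height: 0.2))
var billboardSubscription: Cancellable?

// Billboarding: every frame, rotate the sign to face the viewer's current
// position while it stays at its own world location. (Depending on which
// way the mesh faces, the sign may need an extra 180° flip.)
billboardSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
    signEntity.look(at: arView.cameraTransform.translation,
                    from: signEntity.position(relativeTo: nil),
                    relativeTo: nil)
}
```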

Making the final sketch video

Class notes Dec. 3rd and Dec. 5th: on the final presentation and feedback on the sketch video

Feedback from Daphne:

  • The photo-taking scene isn't very convincing; the capture function is valid, but taking a photo of a random piece of architecture in Phipps made the video less convincing.
  • The trash icon feels too strong; replace it with a ‘close’ icon.
  • Consider keeping the white background; although it doesn't look as aesthetically good as the version without bounding boxes, legibility and readability should be prioritized.

I also asked Cameron how I should represent gaze in my sketch video and, if the eyes replace the cursor, how the hands should be represented digitally. Here are some resources he mentioned that I found quite useful:

  1. Microsoft HoloLens: Gaze Input
  2. Eye Tracking Demo in Neos VR (Vive Pro Eye)
  3. 2016: Oculus Explains VR Foveated Rendering

The presentation

← sketch on paper → the final slide in my presentation, illustrating the backend system supporting the interaction

I found it hard to cover the whole system behind the map-table interaction in a few minutes. For presentation purposes, I arranged the content in linear chronological order. But in reality, the system would work more like a CI/CD model, in which engineers and designers continuously add new content (new recognizable leaves) and keep refining the plant recognition algorithm through release and test stages, where real users can give feedback on their experiences.

I also simplified the personas. The figures at the beginning of this post are too text-dense to be placed in the presentation, so I ended up showing only the key features plus a couple of examples for each persona.

persona slides in my final presentation

Final self-reflection

Q: How were the skills you developed in the first project similar and/or different from the second project?

One primary difference is that in the first project, we used VR to prototype an interactive experience for the present, while in the second project we directly designed an MR experience 10 years from now. So instead of considering the interaction mechanisms in great detail and building them with littleBits and Arduino, I spent more time getting familiar with the general structures/principles of advanced technologies, and I no longer worried about whether I could code/program them myself. In order to design a future experience, I have to know the most advanced technologies currently available, but it is impossible to know the mechanisms behind them in as much detail as I did for my interaction in the 1st project (due to a shortage of time and lack of prior knowledge). So I make myself familiar with many of them, then apply them to the context I am designing for.

In this project, we were introduced to the term persona, and we have three of them. This means considering multiple kinds of users with different needs and wants: while we target one persona, other kinds of users should still be able to use the Phipps AR tour and learn something from it (maybe not as much as the core user). In contrast, for the first project we considered just one specific community (CMU students/staff).

Using various prototyping methods was useful in both projects. Since each method has its pros and cons, I can combine several of them to test whether an idea is practical. The knowledge I gained from the 1st project came in handy when I had to build a scaled-down physical model. And since I am aware that SketchUp is not good at creating interactive things, nor an immersive experience, I didn't use it in this project.

Q: What is your understanding of the role of an Environments designer?

Environment designers have the responsibility to guide engineers and programmers. While engineers diligently work on developing new technologies, we designers spend more time thinking about how these technologies can be applied to real life, so that they serve people and improve quality of life. We ensure engineers are coding for the right things and creating a usable, useful, and desirable future.

Final sketch video

The end~ Thanks for your attention~
