Memorology: a tangible, interactive audio photo album — Ryan Qiao, Sara Wang, Huayi Li, Yihan Liu

Sara Wang
19 min read · Aug 17, 2019


Introduction

Memorology is a tangible, interactive Audio Photo Album for seniors with Alzheimer’s to record, reconstruct, and revisit the stories of their life and of their loved ones.

MOTIVATION

5.8 million Americans are living with Alzheimer’s disease. The progressive disease destroys memory and other important mental functions. People with Alzheimer’s need to record their life stories so they can revisit them and share them with family and loved ones before their memories slip away, but they struggle to put their stories down on paper due to the symptoms of the disease (such as shaky hands) and the effects of aging (such as poor eyesight).

OUR APPROACH

Synthesizing our conversations with the Alzheimer’s community, our team realized that the physicality of the user experience is crucial in helping to trigger memories. Memorology allows the user to record a voice memo of their stories into photos, artworks, or personal items with a single tap. Our elderly-friendly voice-guidance features are there to help the user at every step.

With Memorology, not only the elderly but also their family, loved ones, and caregivers are able to listen to and appreciate the stories of their lives.

HOW WE STAND OUT

Existing applications or products that help people with Alzheimer’s record their memories are rather limited. A comparable product addressing the same issue is MemoryWell, a personalized online website for users to record their memories in text or photographic timelines. The interface is very text-dense (see picture below) and difficult for the elderly to read. Rather than having the user write their own stories, their “network of more than 600 writers interviews seniors or family members to produce a 500-word story that can be shared with family members or caregivers” (quoted from their website). The tedious process results in high pricing ($299+ per story) and limits accessibility.

Our product stands out in several ways. Being a physical product rather than a digital website, it is more accessible for the elderly. The physicality of the product also means increased compatibility: instead of having to digitize all their old photographs, the user can simply put photos, artwork, and even collectibles and personal items into the album and start recording at any time before their memories slip away. Allowing the user to record voice memos of their stories into the items, rather than writing or typing (or hiring someone to write or type for them), drastically reduces the difficulty of using the product, especially for the elderly. What’s more, Memorology provides not only a more satisfying experience and result but does so at a much lower cost.

WHAT THE USERS SAY

Over the design process, we conducted two cycles of user evaluations, with Berkeley students as users in the first round, and an elderly user with early-stage Alzheimer’s in the final round. The findings inspired us to develop the light sensor system, the mode indicators, the voice guidance features, and the icon/text layout of the interface.

The user in our final round of evaluation told us that, for him, it is really meaningful to have life experiences preserved in some form, textual or audio, especially because he wants to be able to revisit the stories (especially in later stages of the disease) and to share them with his children. “And this product does an amazing job preserving these stories.” He commented that the way the “record” and “play” features work is “perfectly straightforward and intuitive.” From the evaluation, we also saw some directions for our next steps (further discussed in the conclusion).

Design Process

Ideation & Motivation

We believe we all have stories, and that every story matters.

For now, the problem is that there isn’t a simple, inspiring way to record the stories of your life, especially when your memory is slipping away.

How do we keep them from slipping away?

To actually get to know the Alzheimer’s community, we took a trip to Fremont and attended an Alzheimer’s Community Lecture. We talked to some of the people from the Alzheimer’s community and we were very touched by what we learned from the conversations.

“My son showed me a photo of his camping trip. The woods look very familiar. It’s all so sad that when my children share with me their experience, I can’t share with them mine because there wasn’t a way to record my memory as it is slipping away.”

For some seniors with Alzheimer’s, we can imagine how frustrating storytelling must be. They had no way to record their memories while they still remembered them, so now it is as if those memories never happened. This must be frustrating for their children too, who would surely love to learn their parents’ stories if they could.

We want to build something to help people reconstruct and revisit their stories for themselves, the ones who love them and the ones they love.

Interview & observations

After synthesizing our research and feedback for the three pitches, we decided to pursue the Audio Photo Album for Alzheimer’s Patients as our final project.

In the second round of outreach, we contacted the Memory and Aging Center of UCSF, the Fung Fellowship of Health and Technology of UC Berkeley, and the dementia healthcare provider company Alz You Need.

While browsing the Alzheimer’s Association’s website, Sara discovered an Alzheimer’s Community Lecture provided by Palo Alto Medical Foundation in Fremont.

We then took a field trip to Fremont on Aug 6th and had the opportunity to talk to Alzheimer’s patients and their caregivers. The interview and observations below are based on this trip.

Interviews:

User A is a genealogist in his 70s. He has early-stage Alzheimer’s and some forgetfulness, but he is still passionate about telling and documenting his life stories.

  • “What appeals to me the most is the story-telling aspect of your project. It means a lot for us to be able to tell our stories.”
  • “A lot of technology or interfaces I used have been obscure for people of my generation. It would be best to have simple and obvious buttons, and labels that tell you to push the button, and also what happens when you push the button”
  • Use everyday language rather than jargon
  • “When an interface confuses me, I’m in the habit of talking to customer support.” — it would be nice to have Q&A / support documentation
  • “Sizzles sell better than the steak — but in a literal sense” — When asked about how certain aromas/smells boost his memory

User B is a senior citizen commissioner in her 70s. A former IT expert before retiring, she now works as a caregiver for Alzheimer’s patients.

  • “In nursing homes, we use mixed tapes or iPods to play music of the patient’s era to help them recall”
  • “Alive Inside” — a documentary about how music helps dementia patients regain and rehearse their memory
  • “Every time when I drive our senior members with dementia to their care facilities, they always tell me a lot about their stories.” “They really have the urge to express but sometimes they are worried if people would want to listen. They always ask if they’re boring me.”
  • “This project is helpful not only to the Alzheimer’s patients but also to their caregivers, and even future generations in the family who are trying to learn their stories”
  • “After my parents passed away (one of them due to Alzheimer’s), my brother and I went through their old stuff and there are a lot of things we don’t know the stories behind. Those stories are just lost.”
  • “It would be really helpful (for the audio album) to have prompts like ‘Are there any interesting stories you’d like to talk about?’”

Observations:

Below are some findings from the community lecture we attended.

Cycle I — preliminary layout

We struggled with the visual design of the interface. At the beginning, we did not know what the ideal product should look like, or what we would be able to achieve. It was unclear where to put the buttons, sensors, speakers, and indicator light, and how to arrange everything. However, one thing was certain: it should be in the shape of a book with physical pages. From there we started the sketches.

Initial sketches:

Version #1

Features finalized:

“There should be audio instructions/stimulus”

“Each page should look like this (sketches above), in the style of a photo album.”

Features to explore:

“Should we add digital screens to the book?”

“Should it be touch-sensitive?”

“How do we implement the microphone and the speaker?”

“Should we add buttons? If so, how?”

Version #2

Features finalized:

“Pages should be fixed on the upper half of the book.”

“Leave a blank space at the bottom for play and record features.”

“Items we attach to the book should be physical; digital screens won’t work, and the book would be too clumsy.”

“Implement RFID system to realize play/record interaction.”

Features to explore:

“How do we attach pages to the book skeleton?”

“How should we implement/style the sensing area: do we attach a case or box, or should we thicken the entire area?”

“Do we separate play and record features into two different areas, or should we merge them together into one button and differentiate using other methods?”

“How do we instrument the microphone and the speaker?”

“Should we add buttons? If so, how?”

Prototypes

Low fidelity Prototype: Paper prototype mockup

To open the book:

To manipulate items / photos:

Cycle II — achieving the “play” & “record” features

To make the interaction tangible, we came up with the idea of using RFID to identify the pictures or items in the album.

We encountered a coding problem: since we have two RFID readers, the computer could not recognize which reader an ID came from, because the input contains only the tag ID, not the reader. This means there is no way to tell whether the user is recording or playing.

To solve the problem, we designed another version of the interface and visualized it in Figma. We put a slide bar on the lower left for mode selection, and an RFID reader on the lower right to sense each item’s ID. This version is physically accessible and viable to implement.

High fidelity prototype: Figma model

  1. Have users use a slide bar to select modes.
  2. Merge play and record features into one sensing area.
  3. Place text on top of the feature to inform users to adjust the mode before swiping a card, not the other way around.
  4. Use both icons and text for visual hints (to maximize clarity). Place the text over the icons.

Evaluation #1: first round

We conducted the first evaluation with Berkeley students because they are a very accessible group and evaluating with them would still be rewarding. We are very grateful to the participants, because their critiques made us realize that what we had was actually very poorly designed.

Finding #1:

“It is confusing to me that you have to swipe the card 2 times to start and finish a recording… I don’t feel comfortable with the fact that I have to swipe a card to end a recording.”

Once the problem was pointed out, we immediately realized how counterintuitive it is to swipe a card twice to end and save a recording. We were so focused on adding complexity that we didn’t even notice we were compromising usability. This problem might be especially severe for people with dementia, which is all the more reason to make sure the way our product works matches real-world metaphors.

Finding #2:

According to one of the evaluators, the slide bar was quite confusing: before we explained it, he thought it was a progress bar he could manipulate. The evaluators pointed out that, for an elderly-friendly product, it is too much work to operate both the bar on the left and the reader on the right to use a single feature, to the point that it is almost comical. We also learned that it is not straightforward that the slide bar represents the system status, since there are two control systems. For people with dementia, what if they forget that the system is set to “record” and accidentally erase something important? The fact that so many critiques were directed at the slide bar made us wonder whether it was worth implementing.

Finding #3:

“Yes, I like your product! After you explained it, it makes perfect sense to me!”

Although the evaluator was trying to give a compliment, we didn’t quite like the way it sounded. We didn’t like that our design only starts to make sense after we give an explanation; we want the product to be self-explanatory. This is crucial to our interface design, because minimal effort of use is the key idea of the entire product.

Cycle III — finalized model

After receiving a lot of criticism of the previous model, we decided to drop the slide bar. We switched back to the earlier version, where the record and play areas are separate. It is a little simplistic, but we want to prioritize usability over intricacy.

High fidelity prototype: Figma model

Version #1

  1. Separate play/record features and place them on different sides.
  2. Use icons only

Version #2

  1. Separate play/record features and place them on different sides.
  2. Use text only.

Implementation of readers & sensors into tangible product:

idea 1
idea 2
idea 3
idea 4 (the one we ended up doing!)

Features finalized:

“Attach pages using an album binder.”

“Build 2 small boxes at the bottom and place the RFID reader on the inside.”

“Separate play and record so that manipulating each feature requires only one action”

Features to explore:

“How do we arrange the cable?”

“How to style each sensing area?”

Implementation

RFID system

We struggled with converting our ideas into actual code, and we kept changing our approach.

At first, we wanted to use only the RFID readers to detect all the signals: tap a photo on the reader, and the reader starts recording or playing the story; tap it again, and the recording or playback stops.

We bought two readers online, and soon realized the problem. We learned that the signal sent from an RFID reader simulates keystrokes from a keyboard; that is to say, the reader can be treated as the same kind of keyboard input we type into our laptops every day. So the first thing that came to mind was to find a way to monitor keyboard input, and we found a Python library, pynput, that matched what we were looking for (at the time, it seemed to be the only library that could truly do this). Unfortunately, we could not get it to work on macOS. We got stuck.

We spent a long time trying to solve this problem and finally figured out a way around it. Instead of monitoring keyboard events directly, we read from the standard input buffer, the area where the computer stores incoming keyboard input. Since everything the readers “type” ends up in that buffer, reading from it tells us what the readers scanned.
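A minimal sketch of this buffer-reading idea in Python (the printed message is only a placeholder, and the exact tag format depends on the reader):

```python
import sys

# Each swipe makes the RFID reader "type" a tag ID followed by Enter,
# so every scanned item arrives as one line on standard input.
for line in sys.stdin:
    tag_id = line.strip()
    if not tag_id:
        continue  # ignore stray blank lines
    print(f"Item swiped: {tag_id}")  # placeholder for record/play logic
```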

Then another problem arose: because input from every source is stored in the same buffer, we had no way to tell which reader an ID came from. That is, we could not tell whether the user was using the “record” reader or the “play” reader. It really bothered us; we even thought about giving up and playing “Wizard of Oz” to get around the problem.

Eventually, inspired by our instructor (shout-out to Sarah!) and online posts, we decided to use two light sensors to differentiate the “play” and “record” readers. Our final implementation consists of two parts: the RFID reader detects the ID (identifying the item), and the light sensor detects the mode (differentiating between play and record).

Therefore, the system is activated only when both of the following conditions are fulfilled (a minimal sketch follows the list):

  1. the reader has input
  2. the light sensor detects light blockage
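To make this two-condition dispatch concrete, here is a rough Python sketch. The serial port name, the “RECORD”/“PLAY” message format, and the two story helpers are all assumptions for illustration, not the exact project code:

```python
import sys
import serial  # pyserial

# Assumption: the Arduino prints "RECORD", "PLAY", or "NONE" over serial
# depending on which light sensor (if any) is currently covered.
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port name is an assumption

def current_mode():
    """Ask the light-sensor circuit which sensing area, if any, is covered."""
    ser.reset_input_buffer()                       # drop stale readings
    line = ser.readline().decode(errors="ignore").strip()
    return line if line in ("RECORD", "PLAY") else None

def record_story(tag_id):
    print(f"(recording a new story for item {tag_id})")   # stub for illustration

def play_story(tag_id):
    print(f"(playing back the story for item {tag_id})")  # stub for illustration

for line in sys.stdin:                             # the RFID readers "type" tag IDs
    tag_id = line.strip()
    mode = current_mode()
    if not tag_id or mode is None:
        continue                                   # both conditions must hold
    if mode == "RECORD":
        record_story(tag_id)
    else:
        play_story(tag_id)
```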

Light sensor

(Idea inspired by Sarah Sterman)

Reasons to add a light sensor (findings from the evaluation process):

  1. Status feedback: it would be helpful for the user to see a light indicating each status (record/play). This is of crucial importance for elders with Alzheimer’s.
  2. Double-tapping can be counter-intuitive: according to the evaluation feedback, double-tapping to stop recording/playing can be a confusing and counter-intuitive feature. It would be more intuitive if the recording/playing stopped as soon as the photo leaves the tapping area.
  3. Implementation limitation of the RFID reader: the current RFID implementation cannot relay the identity of the reader. Switching between record and play mode would require manual “Wizard-of-Oz-ing”.

All three issues can be addressed with a light sensor. Placing a photo on the tapping area decreases the light reaching the sensor. We use this information to control the status indicator lights and to detect when the photo leaves the tapping area (so the record/play process stops automatically, without the need for double-tapping).
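As an illustration, a record loop built on this idea could look like the following sketch. pyaudio handles audio capture, and the sensor_is_covered() callback stands in for the light reading relayed by the Arduino; both are assumptions, not the exact project code:

```python
import wave
import pyaudio

CHUNK, RATE = 1024, 44100

def record_story(tag_id, sensor_is_covered):
    """Capture audio for one item until the photo is lifted off the sensor."""
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    frames = []
    while sensor_is_covered():                 # photo still blocks the light sensor
        frames.append(stream.read(CHUNK))
    stream.stop_stream()
    stream.close()
    pa.terminate()

    with wave.open(f"{tag_id}.wav", "wb") as wf:   # one audio file per item
        wf.setnchannels(1)
        wf.setsampwidth(pyaudio.get_sample_size(pyaudio.paInt16))
        wf.setframerate(RATE)
        wf.writeframes(b"".join(frames))
```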

LIGHT SENSOR MARK I — Breadboard Prototype

Implementation features:

  • A photo-sensor that detects the light level of the environment.
  • An LED light that’s triggered as the photo-sensor is covered.

LIGHT SENSOR MARK II — Arduino Prototype

Implementation features added:

  • The Arduino board takes in information from the light sensors, drives the LED accordingly, and relays the readings to the computer.

LIGHT SENSOR MARK III — Two Light Sensors and Two Indicators

Implementation features added:

  • Two sets of sensors and LEDs sharing the same circuit.

LIGHT SENSOR MARK IV — the final touches

Implementation features added:

  • Extended the jumper wires to secure the sensors and LEDs onto the physical demo.

System

Feedback and Voice Guidance Features

To help users who struggle with memory easily navigate the system, we included the following elderly-friendly voice guidance and feedback features to assist along the way (a minimal scripting sketch follows the list).

  1. Greetings: “Hello! Would you like to tell or listen to some stories today?”
  2. On-boarding: “Place your photo in the green sensor area below to start recording, take your photo away to stop. Place your photo in the yellow sensor area to listen to your recordings. Have fun!”
  3. Start recording: after the user places the item onto the recording sensor area, the mode indicator light lights up, and the voice guidance goes “Recording starts in 3, 2, 1.”
  4. End recording: the user can take the item away from the sensor area to finish recording. Voice guidance for this feature goes “Thank you! Your recording has been saved to the item!”
  5. Error handling: if the user puts an empty item (one without a previous recording) onto the “play” sensor area, the voice guidance explains, “Sorry, no recording found for this item,” and goes on to suggest an alternative interaction: “If you want to record a new story to this item, please place it in the green sensor area.”
  6. Slip avoidance: if the user puts an already-recorded item onto the recording area by mistake, the system allows the user to cancel the action (and thus avoid overwriting the previous story) by simply taking the photo away before the voice countdown ends. After the user takes the photo away, the system reassures the user that the slip has been avoided: “Recording has been canceled. Your previous recording is safe.”
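A minimal sketch of how these prompts could be scripted with an offline text-to-speech engine (pyttsx3 is an assumption here; pre-recorded audio clips would work just as well):

```python
import pyttsx3  # offline text-to-speech; any TTS backend or recorded clips would do

PROMPTS = {
    "greeting": "Hello! Would you like to tell or listen to some stories today?",
    "record_start": "Recording starts in 3, 2, 1.",
    "record_saved": "Thank you! Your recording has been saved to the item!",
    "not_found": ("Sorry, no recording found for this item. If you want to record "
                  "a new story to this item, please place it in the green sensor area."),
    "record_canceled": "Recording has been canceled. Your previous recording is safe.",
}

engine = pyttsx3.init()
engine.setProperty("rate", 150)   # slow the speech down a little for elderly users

def speak(key):
    """Play one of the guidance prompts out loud."""
    engine.say(PROMPTS[key])
    engine.runAndWait()

speak("greeting")   # e.g. on startup
```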

User evaluation

After our trip to Fremont to attend the Alzheimer’s community lecture, we kept in touch via email with some of the people we talked to. On Thursday (08/15) we asked them for a cafe chat in Fremont. We couldn’t be more grateful that one user diagnosed with early-stage Alzheimer’s agreed to be the user for our final round of evaluation.

We followed an evaluation plan we had written in advance that documents in detail what to do for the entire evaluation process. At the beginning, we asked about the user’s personal habits of revisiting memories, specifically whether he keeps journal entries of some form and what means he uses to preserve the family tree and family history. We hoped to learn how important being able to record memories is to the Alzheimer’s community, and thereby how much the idea of our product is worth.

After getting this context, we designed two tasks for the evaluator to perform. The first task was to have him explore the product on his own, before we explained anything. It is crucial to ask the evaluator to “think aloud”, because we are really interested in what goes through his mind the first time he sees the product, or the first time after having forgotten about it. The think-aloud gives us insight into whether our product is user-friendly to the point of being basically self-explanatory. It was also a good chance to ask his opinion on whether icons are a better idea than text, how large they should be, and so on.

The second task was, after the evaluator had explored the product, to explain what he missed and ask him to record a camping experience under a designated photo of related content. This gives us insight into how easy it is to perform a task with our current design.

The feedback was overwhelmingly positive. The participant was fond of the idea of an audiobook from the beginning, and when he saw the product with everything built and ready, we could tell that he was very happy and impressed. For him, it is meaningful to have life experiences preserved in some form, textual or audio, because, according to him, every story is valuable and he wanted his story to be heard by his children.

What we noticed from the first task, and believe is significant, is that the user was very tentative. It took him approximately two minutes to become comfortable with the product. He kept flipping the pages because he felt safer with the physicality of the actual paper pages we had made. “I don’t want to break anything!” “Can I touch this area?” “What does the light do?” “Will anything fall off?” This makes us wonder whether we should improve the physicality of the sensing area so that it looks less fragile and less high-end, so that users, especially those without much experience with digital products, don’t have to worry about breaking anything. For the second task, after we had assured him that he would not break anything, the user seemed quite comfortable with everything. He commented that the way the “record” and “play” features work is “perfectly straightforward and intuitive”, and finished the task in under a minute without any difficulty.

In conclusion, we felt good about the usability of our design after the evaluation. We are happy about getting validation that our product is user-friendly, and the next goal is to make our product LOOK user-friendly too.

Conclusion

Keeping with the theme of the class’s final project (Authoring Tools), Memorology enables the elderly (especially those suffering from Alzheimer’s or other causes of memory loss) to author the stories of their lives and loved ones before their memories slip away.

“Simple” and “intuitive” have been recurring themes of our interface design for this product. We want to help the user record, reconstruct, and revisit their stories in the simplest way, even if they can no longer remember how to use the product and need guidance and assistance at every step along the way.

We aim to develop an authoring tool for the elderly to create for themselves and to share with their loved ones.

Throughout the whole process, we realized how important it is to actually go to the community we are designing for and to talk to people in the user group to find out their needs. Most, if not all, of the major decisions in this project (especially the decision to go physical instead of making a digital application) were inspired by the stories and experiences of users.

After having users try out our demo at the final showcase and looking back at the rounds of user evaluations, we identified some directions for next steps beyond the time frame of this class. Noticing how tentative the user was with the demo for fear of breaking it, we realized that the product should look and feel more sturdy and durable. Addressing users’ concerns about where the data is actually stored (currently on a laptop plugged into the album), we see a next step in syncing the local data with a remote database to allow access anywhere, anytime. In the long run, we could also implement story-editing and voice-saving features to improve the recording experience.

Appendix

Github link: here

Slides link: here

Final Video Link: here

Poster Link: here
