Pursuing Memory

What I Learned Trying the Narrative Clip

Ben Galbraith
Mar 25, 2014

My memory stinks. I get it from my mother.

A creative genius, Mom is a natural artist and storyteller with enough charisma to fill any room. She’s always dreaming up new plans, new projects, and new ideas and bursting with energy and dynamism.

But the flip side of this raw talent is a memory that can only be described as “fuzzy.” Mom’s always coining new nicknames for the people and things in her life, or re-inventing solutions to old problems because she can’t remember the last one.

I didn’t get all of her talent but I did inherit the fuzzy brain. It’s particularly problematic with names. You can be a close friend of mine, but if I haven’t interacted with you in a couple of weeks, please don’t be offended if I greet you with a hearty, “Hey… dude! It’s so great to see… you.”

I compensate for this with regular study sessions. I try to write down someone’s name whenever I meet them and then create a corresponding flashcard. It’s most effective when I can find a picture of the person online. When that doesn’t work, I settle for some sort of descriptive phrase, but that’s less than ideal because over time, my recollection of the face associated with that phrase can get… fuzzy.

Enter the Narrative Clip. From the moment I first learned about it on Kickstarter (where it was originally called the Memoto), I thought this sweet baby might be my secret weapon.

The Narrative Clip
(image courtesy of Engadget)

You clip it on and every 15 seconds, it takes a picture. “Brilliant!” I thought. “Now I can always have a picture of everyone I meet”—to say nothing of having my entire day photo-documented.

I’ve been experimenting with the Narrative Clip over the past couple of months. Has it helped me with my memory problem?

Nope.

For the first couple of days, wearing the Clip was pretty neat; I got lots of pictures like this:

Notice that she’s staring at the camera, not at me. The next few pictures are of her hand.

Then I wore it to work. That’s where the problems started.

While I had hoped the Clip would be a subtle presence on my outfit, it instead stuck out like a sore thumb. People would immediately point to it and ask about it, at which point I’d explain that it was taking pictures every 15 seconds and I was using it to keep track of what I did each day.

While a few people said, “Hey, that’s pretty cool,” the most common reaction was a somewhat uncertain, “Huh.” The worst was, “I think it’s illegal for you to be wearing that. Take it off.” And a few folks let me know in polite-but-clear terms that it offended them that I was taking their pictures.

Clearly, keeping this thing on my clothes all the time in a business context isn’t going to work.

Frankly, I had hoped the world was ready for the invasion of privacy these sorts of devices represent; after all, everyone carries a camera in their pocket now. But it seems there is a gulf between an explicit photo op and an always-on device like the Clip, and at present most of our society is not ready to cross it.

So for now, I’m back to phrases like, “Smiley friend of next-door neighbor; met at park” to describe many of my flashcards. But someday, I hope society is ready for devices like the Clip—and for those of us with fuzzy brains, that day can’t come soon enough.

Last week, Dion posted his thoughts on the importance of bringing open search to the app ecosystem. I couldn’t agree more; we need a standard way for apps to expose themselves to indexing, and for mobile web search engines to launch apps with the moral equivalent of anchor tags.

While some may see clean semantic markup as what makes the web “webby”, even on the web itself there’s strong demand for providing search engines with indexing metadata in a way quite distinct from how human-intended markup is represented. It’s time to give us an easy way to make a distinction, for both apps and the web.
