IDeATe Studio of the Future: Process

We were tasked with turning the IDeATe studios into something more akin to a true studio. IDeATe has many facets, but collaboration is core to its identity.

We took an extended tour of the spaces and talked at length about the pain points they saw. A challenge we faced right away was picking a problem with the right scope for the project's timeline. We only had 1.5 weeks, so choosing the right problem would be key to delivering something useful to IDeATe.

The IDeATe Physical Computing Space

One thing we noticed while going through the space was a heavy reliance on whiteboards. They are critical to ideation and to explaining ideas that are hard to visualize. We use them all the time in our design studios, and having them always accessible makes explaining any idea so much easier. The IDeATe rooms also have ample whiteboard space, but the Hunt Library controls access to markers. We thought of a whiteboard marker dispenser as one solution to this problem.

Mark-It Glamour Shots

This would make it easier to simply grab a marker and go.

At this point, we knew we could execute on the marker dispenser idea. We could build a working prototype quickly and shoot a compelling video. However, we felt we had not tackled the issues IDeATe students were facing at a fundamental enough level. We had scoped the project too small.


Earlier in the project, we had been throwing around ideas for helping IDeATe students use their process more effectively. We realized that design students use their process to advance their work, and we wanted to give IDeATe students the same ability.

However, we were struggling with how to retrieve the past states of physical prototypes. We thought of robots that would fetch prototypes and bring them to you, but it all seemed far-fetched.

We had also been throwing around ideas for using AR to solve the problem, but it seemed difficult given how AR actually interacts with the physical world. However, after a workshop on photogrammetry, we realized it would be completely possible to ‘save’ the state of a physical prototype in 3D and then later recall it with AR lenses.

I had a conversation with Manya about why her process was important, and I realized that the way she worked could be represented as a tree diagram. All of the issues we had been having with representing different temporal states could be solved by organizing ‘snapshots’ of key moments in a tree. Each branch could be a connected prototype, expanding outwards as you explore new directions and ending when you abandon a line of exploration.
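To make the tree idea concrete, here is a minimal sketch in Python of how snapshot branching could work; the `Snapshot` class, its fields, and the example labels are purely illustrative assumptions, not part of anything we built.

```python
class Snapshot:
    """One 'saved' state of a physical prototype (e.g. a photogrammetry scan).

    Hypothetical sketch: each snapshot points at the state it branched from,
    so every new direction becomes a child branch in the tree.
    """

    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent
        self.children = []  # branches opened from this state
        if parent is not None:
            parent.children.append(self)

    def path_from_root(self):
        """Walk back up the tree to recover the history of this branch."""
        node, path = self, []
        while node is not None:
            path.append(node.label)
            node = node.parent
        return list(reversed(path))


# Each new direction branches off an earlier snapshot:
root = Snapshot("first cardboard mockup")
handle = Snapshot("added handle", parent=root)
corners = Snapshot("rounded corners", parent=root)  # a second branch from the same state
handle_v2 = Snapshot("handle v2", parent=handle)

print(handle_v2.path_from_root())
# ['first cardboard mockup', 'added handle', 'handle v2']
```

Recalling a past state is then just a matter of selecting any node and walking its path, which is what made the tree feel like the right structure for temporal states.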

We then went through the interactions in more depth, storyboarding the details of each one. We continued the marker motif by having the user control the system with a pen, which opened up new interactions like annotating directly on work.

A prototype for the pen.

We quickly realized it would be faster and more convincing to simply use an existing pen, so we shot the video with a Wacom stylus.

For the design of the tree and icons, we decided to stick with something reminiscent of Microsoft's standard HoloLens UI.

In order to sell the concept, we knew it would be critical to show the 3D overlay that the HoloLens enables. Getting this to work with 3D tracking was extremely difficult. Here you can see the Mocha tracking + Cinema 4D workflow I used to get to the final animation.

When it came to putting the video together, we wanted to clearly explain our reasoning for the tree diagram. We created voiceover/interview segments that allowed us to narrate what was going on. We tested iterations that were more story-driven, but found the narrative far less clear.

(insert first video v1)

For the second iteration, we decided on a more rapid pace. We had received feedback that we needed to jump into the other shots much quicker; getting through the interview sections faster allowed us to show what the product did sooner.

Into the Future:

Currently we are working on and exploring the following for our concept:

  • Thumbnails on the tree — help people get a better sense of their progress
  • Accessing others' trees, almost as a tutorial
  • An interface that really leverages AR rather than a simple touch interface
  • Higher fidelity examples/photos of the 3D overlay