Carla Diana — bringing objects to life at the intersection of the physical + the digital
First published on open.bekk.com
Carla Diana has spent years exploring the intersection of the digital and the physical. Or, “bringing objects to life”, as she puts it. She’s not wrong. Just ask Simon the socially aware robot (or his cousin CURI!). I had a chat with her about our robotic future, what she’s learned from her many projects, and her love of R2-D2.
Carla Diana describes herself as a “designer, author and educator who explores the impact of future technologies through hands-on experiments in product design and tangible interaction.”
Naturally, I was curious about absolutely everything she’s ever done in her extraordinary and exciting career, but I managed to restrain myself to questions about robots, smart objects, and user interface design. Mostly.
How and why did you end up working with robotics and smart objects?
I always knew that I wanted to make physical things of a certain scale, not big things like architecture, but those products that we can interact with in a more intimate way in our everyday lives. I didn’t know that industrial design existed, so I studied Mechanical Engineering, which gave me a lot of exposure to and confidence with technology. I used to love to program a Commodore 64 when I was a kid, and had the coding bug instilled early, so when I combined a love of physical things with a love of programming, it pointed the way to the most interactive of things, robots.
These days I’m working with the University of Pennsylvania. I helped found the xLab, which is a collaboration between engineering and design professors, and I’m leading a mixed-reality project around connected children’s blocks that makes play in the physical world something that also influences what happens in the digital world. I’m looking forward to being able to share the project soon.
In the bio on your website, you mention “our robotic future”. Why is our future robotic?
There’s a confluence of factors happening at the same time: sensors have become more affordable, processors have become more powerful as they’ve become smaller and less expensive, and connectivity is increasing in environments around us. These factors mean that it will be cheap and easy to put smart technologies into physical things, and we already see evidence of entrepreneurs developing products in this arena. Some of them will not be so useful, but many will become part of our future, and they will essentially be little robots.
What will be the biggest difference between our robotic future and what we know today?
The biggest difference will be a responsive environment that gives us just the information we want, only at the times and in the places we want it. Rather than constantly being in front of computers and mobile devices that give us all the information about every aspect of life at once, we’ll have more specialized objects in the environment, like a kitchen counter that gives us recipes to follow and videos to watch while cooking.
The conversation about robots is often “when will the singularity happen” and “what will that lead to”. Which conversation about robots *should* we be having?
Rather than dwelling on the fear of losing control to robots that take over our lives, it’s much more meaningful to talk about what we want the robots to do. There are still so many untapped applications, and the bottom line is that robots will be programmed by human beings, so we’re the ones who will set the course for what they do and what they become.
Carla has an impressive array of projects on her resume. Here is a small selection, along with Carla’s thoughts about what she learned from them.
With Simon I was amazed at how small robotic gestures like the tilt of a head or the wink of an eye could convey so much emotion, so I knew that these were the kinds of things that I wanted to try to incorporate into my design work moving forward.
For the Neato project, I made sure the team focused on creating a clear and consistent personality that was expressed through sound, light and movement behaviors. I learned the value of considering the whole object holistically rather than thinking of designing one aspect or another separately.
I learned a good deal about how capacitive sensors work and behave, so that I could be confident when using them in other projects to follow. I also learned that if the people using your creation are having fun, then everyone around them is having fun, too, and there is joy in discovery. When people licked the ice cream for the first time, the experience made them laugh.
With the Bloom Wearable, our team conducted in-depth user research. We learned that while women may want technology and precise information in many aspects of their lives, pregnancy is a much different emotional landscape. We designed the system so that the technology played a role only where absolutely necessary. In this case it was in measuring contractions more than anything else.
The wearable medical patch was a collaboration between PARC and the Smart Interaction Lab. We started with a visit to PARC to learn about their history and their latest advances in printed electronics. We then brainstormed and emerged with key scenarios in which a product using printed electronics would play an important role. I learned that even highly complex technologically advanced products and systems can be explained through stories based on everyday life.
Enhanced objects and the Internet of Things
Are there any IoT examples out there that bridge the gap between the physical and the digital better than the rest?
The products that do important tasks in the physical world, such as locks and medication bottles, are the ones where bridging the gap and performing well in both physical and digital realms is key. The August lock, for example, has both an elegant and intuitive physical presence and a powerful digital one through the mobile interface.
Objects and products are becoming increasingly smarter, and will soon have “continuing conversations” with us. How do we best ensure that these “conversations” help us be more productive or happy, and not just overwhelm us with notifications and interruptions?
That’s a tricky question because most things are actually well designed on their own from an experience point of view. Where things get really hairy is when we have multiple devices made by multiple manufacturers using different protocols. This means that you get duplicate notifications and little consideration for the end user’s real overall experience. Over the next few years we will see a battle among companies that want to “own” the space, meaning that there will be even more variety, but ultimately the best experience will happen if they can agree on standards so that they all work together. It’s an ambitious vision, but it’s the thing that makes the most sense from a design point of view.
In one of your articles you express hope that we can design “something that people love and cherish, maybe even pass on to a future generation”. Have you seen any examples of this yet?
Unfortunately I see few examples of this in the Smart Object space. The closest is perhaps some of the classic audio equipment that Bang & Olufsen makes where the experience is so unique that it still delights after years of being out on the market. I also like the Withings Activité wearable tracker for its classic look and respect for Swiss heritage in watchmaking.
What are the most important things traditional web designers can learn from the world of robotics and enhanced objects, in order to keep up with tomorrow’s design landscape?
The key thing designers should be trying to master is the conversation between the interface and the person. It’s not a one-way street like a printed page. When looking at robotics, we can learn about the subtle ways that robots can communicate with gesture, lights and sounds, and pull some of those moments into our own work, remembering that small gestures can go a long way in providing feedback and helping the user feel understood.
In “Talking, Walking Objects”, you say that soon, sound, light and movement will be “equally important parts of the creative palette” and that enhanced objects might respond to “caresses, waves or verbal commands”. How can digital designers practice designing and working with these kinds of interfaces?
I’m a big believer in hands-on experimentation since nothing can really replicate the experience of hearing/seeing/feeling the interface in real time. A great way to start is to become comfortable with electronics prototyping platforms such as Arduino and Raspberry Pi. It’s worthwhile to buy a collection of sensors, lights and other components just for the chance to play with them and really understand the experience in a visceral way. When the opportunity comes up to use them in a project, the advance “research”, if you will, will pay off in know-how.
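The heart of this kind of sensor play is a simple loop: read a sensor, normalize the value, and map it to expressive output like light and sound. As a minimal sketch, here is that loop in plain Python with the hardware simulated; on a real Arduino or Raspberry Pi the `read_sensor` stub would be replaced by an actual analog read, and the specific ranges chosen here are illustrative assumptions, not a real device API:

```python
# Conceptual sketch: map a (simulated) sensor reading to light and sound
# feedback. The 10-bit input range and PWM/tone mappings are assumptions
# chosen to mirror common hobbyist hardware, not any specific product.

def read_sensor(raw: int) -> float:
    """Normalize a raw 10-bit reading (0-1023) to the range 0.0-1.0."""
    return max(0, min(raw, 1023)) / 1023

def feedback(level: float) -> dict:
    """Translate a normalized sensor level into LED brightness and a tone."""
    return {
        "led_brightness": round(level * 255),  # PWM duty cycle, 0-255
        "tone_hz": 220 + round(level * 660),   # pitch sweep, 220-880 Hz
    }

if __name__ == "__main__":
    # Simulate a hand approaching a proximity sensor: low, mid, high readings.
    for raw in (0, 512, 1023):
        print(raw, feedback(read_sensor(raw)))
```

Even a toy loop like this makes the point Diana raises: once the mapping is running, you can feel how a small change in the curve changes the "personality" of the response in a way no diagram conveys.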
In “Embracing the Wild World of Wearables”, you say that “As far as design methods go, deep research with real people is more important now than ever, and envisioning use over time through tools such as scenarios are critical.” Can you expand a bit on what you mean by this?
Since interface design is really 3D design that includes the 4th dimension of time, it’s impossible to capture the experience in one drawing or diagram, so it’s essential to envision scenarios of how the product will be used over time. This means in the course of one day as well as the course of the lifetime of the product. One method for envisioning is storyboard scenarios, where we as designers literally draw multiple frames of an unfolding story, including characters and some sense of overall context. Another great method is video “sketching” where you use video to simulate the experience of a product in real time before the details and functions have been designed.
In some of your writing you express a concern for the manipulative aspects of design. What can and should be done in order to strengthen designers’ knowledge of design ethics?
I think ethics is really important in the design profession. Part of our work always involves considering the human element of whatever we do. This includes the immediate experience of the product as well as the larger implications of having that product exist in the world. I think it’s something that’s discussed often in design school but less frequently in industry. It’s easier for independent designers to manage, since they often have more freedom in selecting clients, but ethics can nonetheless be more present in professional design dialogue.
In “Talking, Walking Objects”, you say that with an increase in sentient objects in our life, the failure or success of this influx will “depend on the designers’ ability to devise interactions that consider emotional value as important as any other product attribute”. It’s been about 3 years since you wrote this article. How do you feel we as designers are doing in this area?
I think it’s getting better every day. For example, when I look at the dialogs that are programmed into interfaces like Siri and the Amazon Echo, I see a lot more compassion on the part of the robot, adapting to errors in speech, for example, or using humor when a search perhaps doesn’t go smoothly.
What is your favorite robot ever made, real or fictional, and why?
I really love R2-D2, mostly because it’s one of the first robots I was exposed to as a child, but also because it uses simple sounds and movements in a very expressive way without having to be very literal in terms of looking like a human.