Foundations and Practice for Design and Software — Part 1

This is the first part of what will probably end up as two or three chapters about how we can look at design as it relates to the software age.

Part 1 comprises some thoughts on what software and culture could mean for design. It is neither exhaustive nor academic; these thoughts come from my own practice, and I hope others find them interesting or useful.

In Part 2 I’ll talk about some foundational properties of these software systems that designers can intentionally mould and manipulate to achieve the goals of their projects, as well as some core skills that we can use to do this.

Over roughly the last fifty years, computers have transformed from machines the size of buildings used for specialized mathematics into personal devices, first in our offices and homes, now in our pockets and on our bodies. Computing technology moved from specialized devices used for computing tasks into bits and pieces that exist inside of just about everything. These bits run the systems that make up our daily lives, economies, interactions, relationships, work, and more. Even if you don’t directly interact with the internet or “computers” on a daily basis, you exist within systems that they inhabit, influence, or control.

We live in a mixed reality, a hybrid physical/digital space. It is something that Kitchin and Dodge called “code/space” in their book of the same name (2011): places where the meaning, utility, and connection come from their activation through networks and computing.

Our design practices and outputs exist in this world. Design, the primary topic of this writing, is highly intertwined with technology. Design and art have always been technological crafts — paint has chemical properties, printmaking uses presses and inks, furniture has materials and manufacturing, and interaction has computing and software.

A Note About Interaction and Interface

Interface has been a badly abused word over the last 25 or so years. In interaction design, and computing in general, it’s often used as shorthand to mean User Interface or Graphical User Interface. I think it’s useful to consider it in a larger context. I usually hate dictionary definitions in articles, but here it is:

“A point where two systems, subjects, organizations, etc. meet and interact.”

In this case the definition is interesting for a few reasons. Interaction design as a practice is focused on how people interact with things to achieve goals and create meaning. We can call this “interface” if we’re able to break away from the idea that “interface” always means “(G)UI”.

Any technological intervention that facilitates interchange, communication, control, or agency between two or more actors is an interface.

Interfaces can be seamless, invisible, and automatic. Or they can be obvious, intrusive, and complex. They can be playful, pragmatic, utilitarian, and esoteric.

Interfaces are where the interaction happens. They are where people meet systems and software and something happens.

The Software Age

In Shaping Things (2005) Bruce Sterling talks about the “point of no return” for technological integration — the point at which the world would cease to function if the technology in question disappeared or stopped working. We are beyond that point with software, networks, and computing, way beyond it.

Software has had an immense impact on all aspects of our lives, and has influenced culture in ways that we’re only beginning to comprehend. Working in software design means creating things that exist in this space of influence and impact. On the “obviously high impact” end of the spectrum we design control systems for infrastructure, mass media creation and consumption, aircraft, weapons, financial markets… On the “we didn’t realize the impact until recently” side of this spectrum, we design social media platforms like Facebook and Twitter that influence election outcomes in ways no one predicted.

One of the interesting things about software for designers is its lack of specific form, or rather, that it can take on and inhabit so many forms simultaneously. It can be tangible as an embedded system in a connected object, or it can be made visible through a GUI. It can exist under the surface, in daemonic form, where it operates systems that make our cities run. It can be graphical, using the foundation of graphic design to give it shape; and it can be interactive, offering us levers and buttons to manipulate it. It has weight and impact, but is completely ephemeral. This poses both a challenge and an opportunity, but in order to really take advantage of it we need to think about what software is, and the impact that it has, so that we can make designs intentionally.

I’d like to highlight four technological and cultural concepts that seem especially interesting for thinking about design in this context:

  1. Digitization
  2. Interactivity
  3. Networks
  4. Surveillance


Digitization

All media is now digital, or becomes digital. Film, television, music, photography — even if they are created in analog media they are converted to digital in order to be displayed, broadcast, shared, manipulated, and sold.

Once media is converted into numeric data, digitized, it opens new opportunities for creators, allowing media to be programmed and automated; making it infinitely malleable and impermanent; eroding the concept of an “original”; and allowing for new types of distribution and collaboration.
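To make that concrete, here is a minimal sketch (plain Python, no real imaging library, and a deliberately tiny made-up “image”) of how digitized media is just numbers that a program can transform at will:

```python
# A tiny grayscale "image" as a grid of numbers (0 = black, 255 = white).
# Once a photograph is in this form, every manipulation is just arithmetic.
image = [
    [0, 64, 128],
    [64, 128, 192],
    [128, 192, 255],
]

def invert(img):
    """Produce a photographic negative -- trivial once the medium is numeric."""
    return [[255 - px for px in row] for row in img]

def brighten(img, amount):
    """Raise each pixel value, clamping at pure white (255)."""
    return [[min(255, px + amount) for px in row] for row in img]

# Each call produces a new, equally "original" copy of the media.
negative = invert(image)
print(negative[0])  # [255, 191, 127]
```

The same logic could be automated, scripted, or applied to millions of copies at once, which is exactly the malleability and impermanence described above.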

In his book The Language of New Media (2001), Lev Manovich proposes that the digitization of media not only changes the innate properties of the media, but also changes our cultural understanding and relationship to that media.

“Since new media is created on computers, distributed via computers, stored and archived on computers, the logic of a computer can be expected to have a significant influence on the traditional cultural logic of media … The result of this composite is the new computer culture: blend of human and computer meanings …”

(The Language of New Media p.63)

We use computing words and concepts to talk about how our brains work (“networks”), or how we exchange information (“download”). And conversely, we have computer-like expectations of non-computing systems and technology — we assume levels of intelligence, data access, and personalization from things that don’t have the infrastructure, and we aim to put that infrastructure into things that have no reason to be “smart.”


Interactivity

With the digital shift came a level of interactivity that we hadn’t experienced before. We can now manipulate, program, and interact with just about everything. Embedded computing in everyday devices gives them the ability to provide rich feedback to our actions; we engage in conversations with our environments and objects in real time. The speed at which things can change based on our behaviour has increased dramatically.

We have moved from a mode of consumption to one of participation. Participation can be either active or passive, sometimes so passive that it’s barely perceptible — but even then you participate by providing behavioural data, feedback, and input into a larger system that wrangles your data into something new that it reflects back to you.

From The Poetics of Interactivity by Margaret Morse (2003):

“Participation as an activity is not, however, dependent on technology; … Indeed, the capacity to involve the receiver/user in the process of, if not creation, at least second order selection and linking or assembling of elements displayed on-screen is precisely what differentiates interactive fiction and art from the passive readers and viewers of traditional cultural forms that espouse a one-sided notion of authorship. … However, the computer cannot be reduced to a medium of communication between human subjects. Its very capacity to give feedback and the immediacy of its response lends what is a computational tool the quality of person.”


Networks

The transition of “everything” into digital media means that it can all speak the same language. We have protocols for just about everything, and so much bandwidth that we don’t even know what to do with all of it.

We have devices that regulate the temperature of our buildings, monitor our safety, track activity, and communicate all of that data to each other and to our friends and family on our behalf.

We become nodes, participating in the network as one actor among many, having mediated conversations with each other, and the machines. We can speak to them directly or passively through our behaviour. They speak back to us, reflecting back a processed view of the world based on the collected behaviour of the crowd and various business rules and motives (both transparent and ulterior).
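As a toy sketch of that reflection (the item names, click data, and “sponsored boost” rule here are all invented for illustration), a feed ranker can fold the crowd’s passive behaviour together with a business rule the participant never sees:

```python
from collections import Counter

# Hypothetical behavioural trace: which items each participant clicked.
clicks = [
    ("alice", "story-a"), ("alice", "story-b"),
    ("bob", "story-b"), ("carol", "story-b"),
    ("carol", "story-c"),
]

# A business rule (transparent or ulterior): some items get a paid boost.
sponsored_boost = {"story-c": 3}

def rank_feed(clicks, boost):
    """Reflect the crowd's collected behaviour back as a ranked feed,
    nudged by business rules invisible to the participant."""
    scores = Counter(item for _, item in clicks)
    for item, extra in boost.items():
        scores[item] += extra
    return [item for item, _ in scores.most_common()]

print(rank_feed(clicks, sponsored_boost))
# ['story-c', 'story-b', 'story-a'] -- the boosted item outranks the crowd favourite
```

Without the boost, the crowd favourite (“story-b”) would lead; with it, the processed view quietly serves a different motive.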

Boris Anthony, in his essay “Puppy Slugs R’ Us” (2015), argues that we’re heading towards a world where the network you interact with is a model of yourself, one that can therefore give you the best curated view of all the information out there. He says:

“If you ask a person a question, assuming they understood your question, they will answer you based on their knowledge. More specifically, they will formulate an answer to your question out of their Memory, the Situation the question was asked in, and what they believe may be Contextually Relevant in that Situation. …
If someone has a record of everything you say and do on the Internet, they can create, using Artificial Neural Networks “AI” versions of you who, while keeping an eye on you, can also go and fetch information, products and services for you as you appear to need them, without your having to ask for them.
While it most likely isn’t quite the case yet, very soon, very possibly, when you talk to Google Now, Cortana, Siri or others… it won’t be some random generalized AI you’ll be talking to. No. It’ll be yourself.”

But first the networks need to know everything they can about you.


Surveillance

The proliferation of networks connecting interactive systems has led to a state of constant surveillance. Ethan Zuckerman writes in The Internet’s Original Sin (2014) about the origins of the pop-up ad, advertising tracking, and ultimately the normalization of surveillance as a business model. As he argues, the foundation and growth of the web as we know it rests on the ability to sell attention through highly targeted advertising. The advertising can be highly targeted because of the copious amounts of data collected about each person on the network — commercial surveillance at a level that we’ve never seen before.

Maciej Cegłowski writes about the impact of this cultural contract in his essay What Happens Next Will Amaze You (2015):

“In his excellent book on surveillance, Bruce Schneier has pointed out we would never agree to carry tracking devices and report all our most intimate conversations if the government made us do it.
But under such a scheme, we would enjoy more legal protections than we have now. By letting ourselves be tracked voluntarily, we forfeit all protection against how that information is used.
Those who control the data gain enormous power over those who don’t. The power is not overt, but implicit in the algorithms they write, the queries they run, and the kind of world they feel entitled to build.
In this world, privacy becomes a luxury good. Mark Zuckerberg buys the four houses around his house in Palo Alto, to keep hidden what the rest of us must share with him. It used to be celebrities and rich people who were the ones denied a private life, now it’s the other way around.”

Through both intentional and participatory interaction we leave a trace of data across the network, and software systems are designed specifically to take advantage of it. Companies use it for advertising, governments use it for “security”… We enter into an agreement with the services we use so that they can use us too.

Those of us designing the interfaces and behaviours of these systems need to be especially aware of our ethical responsibilities — how can we minimize harm, and make space for the vulnerable? How can we design interactions that help people understand and manage their digital relationships with organizations, systems, and each other?

For example, at EyeO in June 2017, Matt Mitchell, a security researcher and journalist, talked about how design can create digital “poor doors” and focus tracking and surveillance back on vulnerable communities. These systems track, segregate, and exclude people based on the data we collect about them, often replicating the discrimination and prejudice of the culture in which the systems were created.

The software age is a complex web to navigate. To make choices about what services and products we engage with, we all have to grapple with a multi-faceted network of economy, surveillance, interactivity, and digitization. From a participant perspective this can be a nightmare, exposing us to things like malware and to participation in networks that we don’t fully understand.

One of the fundamental roles of the designer in this world is to harness tools that allow us to build systems that people can participate in honestly, ethically, and knowingly. We can create interfaces and tools that people can use to wrangle this complexity into understanding, helping them make informed and intentional decisions about how they engage and what they engage with.

With these tools we can not only give people the power to make informed choices, but we can explore and expose new opportunities for people and organizations that can only exist through the careful application of design within complex networked systems.

Over the last few years I’ve thought a lot about designing for software, and all the things that software touches. I’ve written some small pieces about it, published a chapter in Designing For Emerging Technologies (O’Reilly), given a talk at Interaction’16 about it, and talked endlessly about this to my very patient friends and colleagues. But somehow I’ve never tried to summarize my thoughts and approach in writing. I’m trying it now, and hopefully this is interesting and relevant by the time I get around to publishing it.
