Things I secretly wish designers would (please) stop saying

Y. A.
20 min read · Oct 27, 2022


I know this is going to sound absurd, especially given the title, but, as I’ve gotten older, I’ve only grown more and more appreciative of viewpoint diversity. This is to say that I really do not like making demands of people — least of all demands about what they should (or should not) believe or say. With this in mind, I encourage any reader to come to their own conclusions at all times, as well as to take this for what it is: a mere opinion piece.

But putting the throat-clearing to one side, in the interest of truth, I have a confession to make: I sometimes feel that designers have a tendency to say things that can frustrate truth and sincerity, as well as self-effacing, honest, and good-faith discussion.

Actually, I sometimes wonder if designers are all too likely to talk about things in ways that are obscurantist in nature — in ways that overly complicate what are otherwise pretty simple matters. I often wonder if this compulsion comes from insecurity about the role of design in companies — is there, perhaps, a desire to overstate one’s importance? Could it be that it comes from a desire to compete with the clear and undeniable importance of, say, engineers? Should professionals (designers or otherwise) be protectionist at all costs? Or should they be sincere at all costs?

I do not know the answers to these questions, or if the premises on which they are built totally bear out. But, for my part, wherever reasonably possible, I will always vote for sincerity. With that said, I don’t write any of this to talk about semantics: I’m chiefly concerned with content — ideas. What I mean is that I don’t take issue with the sentences, words, or phrases themselves, but rather with the concepts represented within them. To that end, here’s that promised list.

1. “I’m a human-centered designer”

The trouble starts

I’ve spent something like seven or eight years working as a designer, and so I know this will not be taken well, but, given that I’ve established that I prefer sincerity over ego, I’ll just say it anyway: I’m still wondering what this phrase means, exactly.

What I know is that a great many people in the industry talk about the importance of being “human-centered” — I’ve heard it many, many times throughout my career: in job descriptions, meetings, interviews, bootcamp curricula, Medium posts, marketing material from agencies of all sorts, on designers’ personal sites, and so on. I also know that a nontrivial number of designers either describe themselves as “human-centered” — or want to. This is all fair enough.

But, given that I like to try and approach truth as best I can, I would really like to know what this phrase means. I have taken to Google about this many a time, but I’ll try again. The first result, from Harvard Business School, tells us:

Human-centered design is a problem-solving technique that puts real people at the center of the development process, enabling you to create products and services that resonate and are tailored to your audience’s needs.

Naturally, I have questions:

  • How does one “center humans,” exactly? What does “centering” entail? Specifically.
  • What is the opposite of “centering humans”? Does it involve “de-centering humans”? What does that entail?
  • “Human-centered,” but as opposed to what? Not-human? If you make clothing for cats, are you expected to be both “human-centered” and “cat-centered”?
  • Is it possible to, as the definition says, “create products and services that resonate and are tailored to your audience’s needs” without engaging in “human-centered design”?

These, I think, are the softball questions. Here are the harder ones:

  • Were the teens and twenty-somethings working on what would become Meta “human-centering” when they made their first product?
  • What about the twenty-somethings who came up with what would become Snap?
  • What about the ones who came up with what would become Instagram?
  • What about the ones who came up with what would become Twitter?
  • What about the ones who came up with what would become Slack?
  • What about Evan Williams, when he created the platform on which you are reading? Was he “human-centering”?
  • Were Bronze Age Egyptians “human-centering” when they invented humanity’s first spoons?
  • Were Bronze Age Canaanite peoples “human-centering” when they developed the world’s first alphabet?
  • What about our Paleolithic ancestors? Were they “human-centering” when they made humanity’s first musical instrument?
  • Or humanity’s first funerary rites?
  • Were our hominid (but non-human) ancestors a million years ago “human-centered designing” when they realized they could tame fire to make safer, tastier food?
  • Was “human-centered thinking,” itself, made with “human-centered” thinking?

To my knowledge, none of these innovations (or any of the others that have changed the world beyond recognition) were made with the ideology of “human-centering” that began to take shape in some obscure art and design circles circa the 1940s. This is okay, and does not necessarily disconfirm its claim to truth. At the same time, however, the existence of these examples does not, specifically, confirm it either.

My concern is that every innovation can be retconned into being an example of “human-centering” — provided one is motivated enough. The problem with this is that motivated reasoning tends to result in ideologies becoming self-confirming — and unfalsifiable. When this happens, the ideology becomes impossible to disprove, and therefore impossible to meaningfully test.

The unfalsifiable, of course, is a matter of faith. That said, there is nothing wrong with faith — it is simply not, fundamentally, a matter of truth. And, given that this opinion piece is concerned with approaching some notion of truth, I’m not sure that matters of faith are within its scope.

The trouble continues

Putting the question of the truth of this ideology to one side, I’d like to consider it from the perspective of linguistic utility.

To start: can a product be said to be “human-centered” if most people could pick it up and mostly figure out how to operate it in a reasonable amount of time? If so, can the phrase “this product is human-centered” be said to be one that roughly translates to something like: “this product is easy to use,” or “this product fills a market need”?

If so, can we just say that? That it’s easy to use? That there are customers willing to buy this product? What is the specific advantage to saying “human-centered” in this context? Isn’t the phrase “the product is easy to use” (ahem) easier to use than the phrase “the product is human-centered”? Isn’t the former both easier to say, and easier to understand?

If I had to venture a guess, most people off the street would probably understand the former more readily than the latter, as well. So why do we specifically use this phrase? What advantages does it confer over simpler language?

I know it’s a phrase “anyone who’s anyone” is using. And it’s true that most bootcamps will tell you to strive to embody Humanism-Centeredism-Designism Thought. And, also, if we’re honest, calling oneself a “human-centered designer” sounds, like, really cool. I get this, and I think it’s reasonable to respond to these pressures by conforming. But also: why do what you’re told? If conformity, in this case, confers neither a strategic advantage on the market nor a linguistic one, then what is the reason to conform at all? And, what’s more, if all designers are self-describing this way on their sites, is it really a good idea to do that, too? Does this really tell the recruiter or hiring manager anything, since everyone else makes this claim, too?

All the world’s most successful people did what they thought was right and reasonable in the moments that mattered. From what I’ve seen, there is no Ism or Thought you put in to get brilliance and wealth to come out. To me, it seems that the truth is obvious and boring: successful people became that way because most of them worked on their ideas every day and tried to do what they thought was right with the information that they had. If they could manage success by just being rational — using normal, human reasoning, no fancy ideology included — why couldn’t you?

2. “I use design thinking to…”

As with the above, I’m not sure what “design thinking” means, exactly, but everyone uses it. So I’m not sure that a recruiter or hiring manager would give anyone credit for saying it.

Nonetheless, there was a time when I felt there was something there to understand, mainly due to peer pressure (feel free to read about this, if you’re curious).

Because there was a time when I believed, I read a significant amount of material on this topic over my career, including texts considered seminal to the creation of “design thinking.” What I noticed was that each article or book that attempted to define what constitutes “design thinking” was different from the last.

I will come clean: despite reading many descriptions of this concept, I found none convincing. Respectfully, I do not know what advantage “design thinking” specifically confers over, you know, like, regular thinking. For some time, I thought I was the only one who felt this way — until recently. Dominic Francis, a designer and researcher, writes:

I’m VERY anti design thinking. I don’t really know what it is even though I’ve been working in the UX Research and Design space for over a decade…All too often it’s only something fanciful of Post It Notes on the walls, or new ideas that are either not new or just plain not feasible…There really does need to be more of a focus on good products, and not artsy terms such as Design Thinking or Business Design.

A researcher, Josephine Giamo, echoes Dominic’s sentiments, adding:

There are more than 40 definitions of Design Thinking out there, by last count.

This is the problem with obscurantism: truth becomes tenuous, and people spin their wheels defending or attacking a constantly moving target that has no rigor in its definition. For my part, I’d have to agree with Dominic’s (and Josephine’s) sentiments: we should “focus on good products,” as he says — not on semantic arguments about this or that concept or ideology.

Our species has always made things with the express intent of exchanging them for other things. That’s what products are: things that satisfy market demand — ideally, demand that actually exists, and that is consistent and strong enough to reliably build a living on (i.e., a business).

You’re not going to get rich quick, though: there is no science, framework, methodology, or ideology that is going to get you viable products to sell on the market in a reliable, repeatable manner. If there were, every country on earth would be drowning in entrepreneurs, and extremely resource-rich and famous companies would not eventually fail to innovate, time after time. Finding people who will buy your product(s) or service(s) — consistently — is a matter of chance and perseverance, which is why the rate of business failure is so high. This is the boring, painful truth: making stuff that people want to buy — especially consistently — is hard.

Nonetheless, there is a market for ideologies, methodologies, and frameworks that claim to have the secret sauce for overcoming the inherent difficulty of making successful products and businesses. Certainly, people will espouse all sorts of ideologies for various reasons — as we’ve seen throughout human history — and people will also pay to learn about those ideologies.

I understand the incentives to create Thinkings and Methodologies and Frameworks, and so on. Certainly, it’s a form of personal or company branding that can (1) elevate one’s status to “thought leader,” which can sometimes be a goal in itself, and (2) lead to more customer acquisition, especially if one sells consultancy services.

Most consultancies try to develop their own “playbooks” for this reason: they are attempting to sell an image of themselves as the place with all the answers, if you just follow these five simple steps (which, by the way, they just happen to sell!). In the words of a client I worked with at a design and development studio: she picked us because she felt we were “a Lamborghini.” And there’s nothing wrong with this — we all need ways to attract customers, and we’re all seeking differential advantages to that end. Many customers are attracted to appeals to authority, and they will either pay to bring those ideologies into their workplaces via talks or conferences, or pay to build products using those ideologies.

The fact that markets form to connect clerics to hopeful tithers has no bearing on whether the beliefs being sold are true, however. If it did, major questions of faith that our species has wrestled with for its entire existence would be unequivocally resolved by now.

3. “I’m a systems thinker”

The trouble starts again

This is another phrase that feels opaque, even after all these years. Typing “systems thinking” into Google returns a Wikipedia article:

Systems thinking is a way of making sense of the complexity of the world by looking at it in terms of wholes and relationships rather than by splitting it down into its parts.[1] It has been used as a way of exploring and developing effective action in complex contexts.[2]

To put this in the context of design, people are often said to be “systems thinkers,” for example, if they are able to notice that changing some component or product in one place will necessarily impact some other established component(s) or product(s), or other views in the experience. So, if some change happens in this part of the product, it might require some rejiggering in some other.

I guess I’m not sure how this can be said to be differentiated from just being informed. I’m not convinced this constitutes a new kind of “thought,” either; rather, it seems to be a function of understanding some basic concepts about how view technologies work, plus experience in reasoning through flows and component states in digital products — all of this powered by the basic cognitive ability and deduction our species has evolved to have for free. It just takes some experience to catch these edge cases, or to spot similarities that could be bridged and simplified across a digital product.

Design systems, themselves, are also not very complicated. It’s knowledge that is totally possible to acquire, so long as one has the desire and drive to put in the elbow grease needed to learn some lightweight technical stuff about contemporary codebases, markup, and a few Figma features.
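To make that concrete, here is a minimal sketch of the core idea behind most design systems: shared tokens consumed by many components. It also illustrates the “ripple” described above, where one change in one place propagates everywhere. (The token names and components are hypothetical, for illustration only.)

```typescript
// Hypothetical design tokens: one source of truth for visual decisions.
const tokens = {
  colorPrimary: "#2563eb",
  colorDanger: "#dc2626",
  radiusMedium: "6px",
  spacingSmall: "8px",
} as const;

type ButtonVariant = "primary" | "danger";

// Any component that reads a token inherits changes to it automatically.
function buttonStyles(variant: ButtonVariant): string {
  const background =
    variant === "primary" ? tokens.colorPrimary : tokens.colorDanger;
  return [
    `background: ${background}`,
    `border-radius: ${tokens.radiusMedium}`,
    `padding: ${tokens.spacingSmall}`,
  ].join("; ");
}

// Editing tokens.radiusMedium reshapes every consumer at once; this is
// the "ripple" a so-called systems thinker is credited with anticipating.
console.log(buttonStyles("primary"));
```

There is nothing exotic there: a lookup table and some functions that read from it. That is roughly the level of technical knowledge involved.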

And it continues

As you might have noticed, there’s also a pattern emerging here: there’s “human-centered thinking,” “design thinking,” “systems thinking,” and, I’m sure, more. Some of this is probably coincidence, but the pattern does feel worth noting nonetheless. It is one that concerns me. I don’t know that I agree that there are “forms of thought,” as if some “thought” were inherent to certain people, a function of some physiology of the brain, producing “archetypes” of thinkers. I sometimes suspect, however, that the belief that there are becomes a way to shut down conversation or disagreement: if there are “forms of thinking” people either have or just don’t, and these are not easy to define, then why wouldn’t I be incentivized to shut conversation down by saying “you’re just not enough of a systems thinker to understand”?

This starts to feel like it’s no longer about how reasonable I think someone’s assertions are (the quality of their arguments), but rather about how much of a “right brain thinker” or “systems thinker” or “human-centered thinker” or “design thinker” or whatever else someone fundamentally is (their character), with no way to disprove those assertions, because those concepts, themselves, have unclear definitions.

I don’t know how to acquire the traits those “thinker” archetypes are said to have. But I do know how to acquire experience, evidence, and good-quality explanations of my reasoning. I also know how to have reasonable, straightforward conversations that can reveal gaps in information or reasoning among all parties, and how to fill those gaps together to come to some understanding.

I don’t know that there’s much evidence to support that there are archetypes of “thinkers” at all. In my view, it’s all just basic, boring, human reasoning. I am not extremely interested in trademarking new forms of “thought,” or “thinking.” In my view, the engine that powers thinking is basic cognitive ability, and it’s inherent to our species. I know that that’s boring, and that there are no attendant, cool terms for it — but I like boring, personally.

4. “I’m an empathetic designer”

I know, I know — this is another common way a great number of designers describe themselves. I’m not sure I understand the criteria required to be “an empathetic designer,” but it does seem a lot of people meet those criteria, so I guess that is a good thing. What’s more, given the number of people who self-describe as “empathetic designers,” I’m not sure that this is something worth highlighting on one’s personal site, but that’s neither here nor there.

To be sure, I understand where the desire to make this claim comes from: most bootcamps tell aspiring designers that they “must have empathy for users,” and that that’s what design is all about (so-called “empathetic design”). You can hear it in some workplaces, too, with even people in leadership using those terms and asserting this as a requirement for our work. What’s more, lots of books, Medium posts — and other sources of “thought leadership” — all extol the importance of “having empathy for users.”

Personally, I believe empathy is a function of human reasoning. Most other mammals show little evidence of being able to predict the emotions of the mammals around them. But we can, because we have an advanced level of cognition that allows us to recognize that other beings are alive apart from us, and that we are one of many (“self-concept”). This is a level of cognition that is sometimes called “self-awareness,” and is sometimes tested via something called a “mirror test,” which only a handful of species outside our own have ever passed.

But, importantly, “empathy” is most often used in a way that indicates someone is having an emotional response — that someone is “empathizing” with you. This usually means that, not only is the individual able to intellectually reproduce why you’d feel the way you do, they are also personally affected by those feelings in some way. For example, an individual can be said to be experiencing empathy when they sincerely feel sadness after another individual confides in them about the recent death of a dear loved one. For the person to have that emotional response, they have to have a level of advanced cognition no other animal has: self-concept, advanced object permanence, the ability to parse advanced language, and so on.

I also believe none of this bears any relation to building good products. In my view, the ability to determine a market need requires perseverance and good reasoning skills. Jeff White, a designer, writes:

“You have the empathy of a tree stump.”
- My wife
Guess I chose the wrong line of work. 😳 […] But I’ve also had what some would consider a long and reasonably successful UX design career.
My point is: There’s a lot of jargon flying around out there. Don’t get too caught up in it.
Focus on [the] essentials to be a successful UX designer … [And b]e yourself and get good at that stuff in your own way. You don’t need to be the Dalai Lama to be a great UX designer.

I would have to agree. I understand that some may feel that “empathy” is important, especially since it’s an important step in some Humanism-Centeredism-Designism Thought. I get this, and I understand. But, in my view, good reasoning skills are all one needs. One needn’t feel anything if one isn’t inclined to. All one must do is think, deduce, reason, and make good choices based on this.

5. “I’m a UX designer, not a UI designer”

I know a lot of people believe that there is a choice to be made here between the two, but I don’t agree. I believe there are many reasons that can explain why so many believe this: many bootcamps operate on outdated understandings of product teams — in the past, it was common to have “UX” designers who output gray boxes, and then “UI” designers who would jazz those up. This hasn’t been the case on most product teams in a long time, but it could be one explanation for why so many people — especially those bootcamps’ students — misunderstand this. Quoting myself for expedience:

In reality, you do not get to choose whether you are a “UX” or “UI” designer in today’s market. This is an academic discussion about a market that no longer exists … The reality is that you are going to experience increased difficulty finding — and retaining — customers as a designer if you do not make professional looking work. When FAANG companies hire designers under titles like “UX designer,” or “product designer,” or “interaction designer,” these are all the same jobs … But there is an additional expectation — not always explicitly stated — that you will put in the effort to make the product or feature look professional. There will not be an attendant visual designer at your beck-and-call to jazz up your schematics. It’s just you.

Because technology has improved (e.g., the introduction of asynchronous requests via AJAX — some people define this as the point where the web became “web 2.0”), user expectations around what can be done — and how they can interact — with software in their hands have evolved. It’s not practical anymore to cleanly separate these functions, which is why most companies do not. …

What’s more, people hired under titles like “UI designer” or “visual designer” are unlikely to do product work at all — they might work on design systems, or they might work as illustrators making the iconography or rasterized product imagery, or they might design the product’s marketing or support sites, or other brand work. These are totally different jobs from the product designer’s. …

You, by contrast, are working on the product — your customers want you to make products that make sense, but they also expect you to bring excellence to it in other ways: namely, aesthetically. Therefore, if one wants to reliably meet market expectations, it would not be wise to try to “it’s-not-my-job” one’s way out of this. This is the nature of your job now and, to participate effectively in today’s market, one would do well to accept that.
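As an aside, the “asynchronous requests” point in the quote above is easy to demonstrate. The sketch below uses the browser’s standard fetch API to update one part of a page without a full reload, which is the kind of capability that raised user expectations around interactivity. (The endpoint and response shape are hypothetical, for illustration only.)

```typescript
// Minimal sketch of an asynchronous ("AJAX"-style) request using the
// browser's fetch API: update part of a page without a full reload.
// The endpoint and response shape below are hypothetical.
async function refreshNotificationCount(): Promise<void> {
  const response = await fetch("/api/notifications/count");
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const { count } = (await response.json()) as { count: number };

  // Only this one element changes; the rest of the page is untouched.
  const badge = document.querySelector("#notification-badge");
  if (badge) {
    badge.textContent = String(count);
  }
}

// Poll every 30 seconds; the page stays interactive the whole time.
setInterval(() => void refreshNotificationCount(), 30_000);
```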

Designers are expected to bring excellence — both in functionality and aesthetics. Generally, your customers care about both, but it’s up to you to decide if you do, too, and how you want to sell your services.

6. “You have to frame the conversation”

This is another phrase that feels opaque. It can sometimes be heard when fellow designers are trying to tell others how to talk about topics in presentations or critiques. There’s a claim that “framing the conversation” is important to “getting good feedback.”

I understand why someone would take this position — especially given that it’s so common — but I don’t share it. I have led and participated in countless presentations and critiques across what’s probably a dozen different companies, and my conclusion is that people are going to say whatever they say, and think whatever they think, no matter how you “frame” the topic under discussion. This is okay; we cannot control the thoughts and feelings of others.

If one should like to move on from one part of a conversation, one can simply say that and do so. Otherwise, I prefer plain speech that is easy to understand. I don’t want to sweat the small stuff of human language — it’s just a tool for communicating information, and I prefer to use it simply and sincerely. This means I do not like “frameworks” used to structure my sentences, and I don’t believe they materially offer any advantages. I don’t believe these “frameworks” have created any meaningful change in the quality of the responses I’ve heard or given in critiques or presentations in my seven-odd years, and I would need to see evidence to change my mind on this matter.

So my response to the title of this section (“you have to frame the conversation”) would be: “I don’t agree that I do.”

7. Referencing “The Data” when there is no data

This is pretty common, in my experience. To illustrate what I’m referring to, I’ll just share an anecdote:

Another designer and I couldn’t agree on whether to put some new view in the product behind an icon button in the top nav, or behind a second tab. I didn’t have strong feelings either way, though I had a preference for the button; I could see an argument for either direction, and I didn’t think it was a mission-critical decision. In my view, these kinds of things can be resolved by first doing what we think is right and, if we believe it’s worth watching for, running an A/B test to see how people respond to the two funnels against each other.
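For what it’s worth, the comparison I had in mind is simple to run once both variants are live. Below is a minimal sketch of one common way to compare two funnels, a two-proportion z-test; all of the counts are hypothetical, for illustration only.

```typescript
// Minimal sketch of a two-proportion z-test for an A/B comparison.
// All counts below are hypothetical, for illustration only.
interface Variant {
  exposed: number;   // users who saw this variant
  converted: number; // users who reached the new view
}

function zTest(a: Variant, b: Variant): number {
  const pA = a.converted / a.exposed;
  const pB = b.converted / b.exposed;
  // Pooled conversion rate under the null hypothesis of no difference.
  const pooled = (a.converted + b.converted) / (a.exposed + b.exposed);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / a.exposed + 1 / b.exposed)
  );
  return (pA - pB) / standardError;
}

const iconButton: Variant = { exposed: 5000, converted: 640 };
const secondTab: Variant = { exposed: 5000, converted: 710 };

// |z| > 1.96 corresponds to p < 0.05 (two-tailed): a conventional
// threshold for deciding the two funnels genuinely differ.
console.log(zTest(iconButton, secondTab).toFixed(2));
```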

The designer asserted that putting this new view behind a tab would be “the most data-centric way” to do this (the quote is nearly verbatim, according to my recollection). I don’t normally push back when people clearly feel strongly about matters in professional contexts, but I decided to try it this time: I asked what data the designer might have to support the claim that the tabs approach would be “more data-centric” than a button. From what I understand, neither of us had access to any experiments run on this feature, because the feature did not exist at that point. But my hope was that the designer had some information that I did not.

However, from what I could tell, this was not a reference to some experiment the designer had exclusive knowledge of. This was an appeal to authority. Saying “the data” or “data-centric” can create an air of authority around the one saying it — when used with this intent, it can result in conversations being shut down. “Data” is in the sentence, despite the evidence (i.e., the data) never being provided, so, surely, it must be right. Right?

This can be okay — we don’t need to scrutinize every claim everyone ever makes. But it should be understood that, if one is actively attempting to convince others of their position, one needs evidence. What’s more, one should be honest and only refer to evidence one actually has — rather than using techniques to shut down conversation and misrepresent one’s knowledge and evidence. Besides, data pulled from experiments can be read in many ways — in the words of an engineer I interviewed some years ago (he was working on machine learning tooling at a Boston company that competes with TensorFlow), if my memory serves:

You can use data to argue for anything, if you want it to. It can even be made adversarial.

The situation described earlier is just one anecdote, but I have witnessed this sort of thing before, professionally, as a bystander. For me, the lessons learned are:

  • If you have an assertion you want to make, make it,
  • If you have evidence for it, share it,
  • If you don’t, just say that.

We should all be willing to at least consider views from our colleagues, whether or not those views are well-supported by robust evidence. A famous adage, popular in economics programs, tells us:

Not everything that can be counted counts, and not everything that counts can be counted.

In the end, the decision makers make the call, but considering other perspectives represents neither harm nor foul. Attempting to shut other views out by claiming to have evidence one does not have (for example, via the appeal to authority that uttering the phrase “the data” provides) is probably not a good idea if one cares about having warm relationships at work.

This has gotten long, so I’ll end here. My main assertion is that simple speech is best, and so is seeking to simplify concepts — rather than the opposite, which would be to make them much more complex than they need to be. I understand the compulsion to add complexity to things, and to make them sound more fanciful than they are, but I would advise against all of that, and against taking the advice of anyone who suggests it. My advice: when possible, speak simply and sincerely.

Separately, I do feel some concern about a larger issue I believe exists in the design industry: the level of (what I believe are) specious concepts that take root in our work. My preference would be for warm but firm discussion at all times (at work, or otherwise) that is grounded in tangible, easy-to-follow explanations of ideas. The number of articles and programs focused on ideologies and frameworks concerns me — while I believe those more academic pursuits can be interesting, being pragmatic, getting results, and being competitive on the market should all be more important.
