Visual design — the designer’s Great Satan

Y. A.
21 min read · Dec 9, 2022


For my entire career in design, I have observed what appears to be an industry-wide low opinion of visual design (and its attendant visual designers) among product designers, or, as they were once called, UX designers. As the story goes, the UX design industry was once in a golden age: the focus was on making good interactions and doing good research, and all rejoiced. All was well.

But that all changed when the UI designers attacked. UX design became the dreaded UX/UI. This was verbal proof that visual designers had bastardized UX: they transformed UX into a shell of its former self, averting everyone’s eyes from the important things (e.g., interaction, research) and towards such frivolous matters as a product’s aesthetics. The market (i.e., employers and clients) became equally vapid, expecting time put into a product’s appearance where it previously had no such unreasonable expectations. After all, customers cared about the real things, the important things.

This terrible twisting of the profession by UI design is said to be the cause of many of the industry’s problems: worse software made across the industry via the phenomenon of “Dribbblization,” the reported decline of research in organizations, a saturation of hopeful talent at the top of the funnel (i.e., pre-career designers) rather than at the bottom (i.e., experienced designers), and so on.

But there’s another problem: I must confess that I don’t believe any of this has actually occurred.

Some historical context

I started as a designer around 2015. My first role was a “UX/UI design” role, though I didn’t know what that was at the time. I only learned what “UX/UI design” was months into that very job, when colleagues presented their thoughts on this career during our “lunch and learns,” usually after returning from design conferences on the topic. (Until that point, I’d simply thought it was another job I’d had! One can imagine my surprise to discover it was, in fact, a career.) Through this, I learned that what I was being paid to do had a formal name.

This is all to say that, in 2015, this conflation of “UX” and “UI” had already occurred, and it seems to have been fairly established (e.g., it was being discussed in contexts as large as conferences and workplaces). That was nearly ten years ago now. If there was ever a time before this concept existed, it was earlier than 2015, at the very least. To my surprise, chatting with older designers, I’ve heard some remark that they’d encountered that very term as early as 2008.

2008 was an interesting time. Many observe that this was when SaaS (software as a service) companies truly became viable and widespread, popping up in ever-greater numbers.

The reader may be inclined to notice that 2008 was also the time of something else of importance: the creation of iOS’s App Store. At this point, it became easier than ever to get software into the hands of consumers. Previously, to get customers using your software, you either needed them to visit an address on the web on their bulky computers, or to install your executable (which, as you all recall, often came with numerous installation issues). On consumer-grade OSes (e.g., Windows and macOS), there was no centralized package manager, so executables had to be found in the wild, with users visiting disparate addresses to obtain those apps. Malware abounded!

Then came the App Store, or the “there’s an app for that” age. Software companies could easily get their code running on people’s mobile devices, it would stay updated, and there was a trusted, centralized source where malware-free software could be found. From this came a boom of software and attendant companies cropping up: absolutely everyone was making an app for everything.

The large change that Apple introduced into the market in 2007 (the iPhone and its attendant OS) itself ushered in a new kind of culture around software and, with it, a change in consumer expectations.

An image of a Motorola Sidekick. This phone was super cool for its time.
A screenshot of the original iOS interface.
Left: the Motorola Sidekick, a certified middle school classic (2008). Right: a screenshot of the very first iPhone’s OS (2007).

Technology was changed forever. Consumers would accept phones like the one on the left, but not for much longer. Apple came in, cut all of the features above (e.g., physical keyboard, mechanical folding for convenience, overly complicated OS), and turned up to the market with a box and a screen. People absolutely lost their minds. And for good reason, as these changes were substantial: Apple had long been cultivating a lifestyle around bespoke hardware, but also software. After it unveiled its phone, it created something akin to a cult, as many quipped.

If you watch that unveiling, you get seven days before you are inducted into the cult of Apple, too. Them’s the rules.

When looking at the Motorola Sidekick above with perfect hindsight, the reader couldn’t be blamed for seeing it as almost a marker of a bygone era: a metaphor for what consumers used to accept, and how business was done. Messy software, buttons everywhere, a feature for everything you’d never want or ask for, and so on. When Apple cut all the fat and delivered only what the user wanted, and nothing more, what followed could almost be described as a market correction on feature inflation.

Certainly, with the proliferation of software enabled by iOS’s package manager, and with the differential advantage Apple had created for itself on the market with well-designed hardware and software, there was an increased consciousness around how software should look and behave, in line with evolving consumer expectations. It was around this point that companies like Apple began to formally hire people just to think about the appearance and behavior of software (and hardware), where this had historically been the domain of the people putting the hardware and software together (e.g., engineers).

Before this point in 2008, it seems that dedicated roles with this focus were not common. Most practicing designers at the time were likely web designers, if anything: perhaps they made templates on e-commerce platforms, built marketing sites for businesses, or worked in agencies making ads and micro-sites on the web. But that was the closest the industry had to people thinking about software’s external design as deeply as we do today.

A paradise lost

When I consider the narrative above (that conniving and vapid visual designers came in and took our golden age from us), it doesn’t match anything in my recollection.

I recall using computers (and the internet) as early as five years old. My parents didn’t understand the appeal; they were perplexed by its absolutely abysmal interface, and by my addiction to it nonetheless. They didn’t see what I saw. Certainly, I saw the same confusing interface that they did, but I could look past the annoyance of its slow, poorly performing, and generally frustrating exterior. I saw fun games I could play for hours on end, fun things to read, niche forums where people discussed things no one talked about around me, and so on. I could spend literally all day on the computer.

In retrospect, allowing children unadulterated access to the internet was most certainly a bad idea; we are feeling the knock-on effects of it to this very day. Nonetheless, I was part of a demographic (geeky, tech-curious children in their single digits) that can be said to have been relatively early adopters of the internet, and of personal computing more generally.

Putting cool math games to one side, my parents were completely right: computers really were absolutely horrible. They had a feature for everything you didn’t want, and nothing that you did. Nothing “just worked.” Errors happened and they weren’t human-readable. Everything was a hack on a hack. They were slow. They took thirty minutes (and, I recall, at times, hours) to boot up. Malware was rife — popups everywhere. A 200KB image would literally decimate your entire setup. Websites were spammy and full of e-garbage. Everything about it was objectively awful — the digital world was totally underdeveloped. Its visuals reflected that.

A screenshot of how awful the internet was. It’s got text everywhere and looks spammy.
Someone extend the digital world an IMF loan. Credit: “RIP GeoCities: what the internet looked like before the internet was cool.”

I’m not the only one who remembers this weird time in software land. As Emily Gosling writes in RIP GeoCities: what the internet looked like before the internet was cool:

Remember when the internet was just for geeks and gamers and the sort of people who fell in love with pneumatic virtual reality babes? Before it was all “millennials” and super-styled pictures of dinner, it was a bright, flashy cornucopia of fucked-up alien heads and planets and stuff like rooms. … [A] WTF level of weirdness…

Websites looked like the above. As for web apps, behold Neopets (circa 2000), popular among children at the time:

2000s software: not even once. Credit.

Clearly, the internet was a complete mess. But it wasn’t just the internet that was horrible: native software (the kind that ran on your machine, e.g., on Windows or macOS) was, of course, a train-wreck, too. Check out a popular text editor of its time, Microsoft Word 2000:

A screenshot of how overly complicated Microsoft Word was.
Average Ohio software. Credit.

Today, web-based text editors are a dime a dozen — they’re fast, have very few UI elements (given that almost none of the UI pictured above was necessary in the first place), and so on. Microsoft Word 2000, by today’s standards, is hilarious.

And, anyway, good luck trying to find basic settings on your Windows machine back then. There wasn’t a way to search your machine for settings (as you can today), nor a centralized place to control your device’s various settings, until much, much later. In fact, this came later for macOS, too. Early OSes were all about the desktop: it was where you’d launch other software, and where most of your computing would happen. Today, though, people just use desktops as dumping grounds for Screenshot 2022–12–09 at 12.37.34 AM and maxresdefault.jpg. For everything else, there’s Spotlight search.

But the absolute insanity didn’t end at software. Check out standard hardware for the time:

An image of a Gateway laptop from 1997. It sucks.
This machine comes with an extended warranty and an apology. Credit.

Today, lightweight machines with big trackpads and high-quality displays are features that are not negotiable. But times were different.

All of this to say: was there ever a golden age of software other than today? If there was a time, presumably before 2008 (or even 2015), when it was “just UX,” and things were better, why was software so bad? Why was internet adoption so low during that alleged golden age? I booted up a computer one day in 2000 and never logged off, and I tolerated way more then than I have the appetite for today. On every single metric you could possibly quantify, the digital experience of the time was objectively inferior to today’s: ironic, given that UI designers have reportedly ruined the industry of today.

But I don’t see a ruined industry today. I see one that makes leaps and bounds in improvements with every passing year. I remember lugging around a very heavy Gateway laptop in 2009, and having a meltdown at my recalcitrant printer the night before an essay was due (you guys remember printers? I no longer wish to). By comparison, the digital world I exist in almost all day today (terminally online) is vastly superior to the one I remember growing up in.

A screenshot of Facebook 2005. Hilariously bad.
A screenshot of today’s Facebook profile. It’s just better.
Left: Facebook profile, 2005. Right: Facebook profile, 2022.

For example, above, you can see two renditions of the same product: the Facebook profile. Apologies for the Windows context menu in one’s face but, putting that to one side, the right is very obviously better in every capacity: it’s easier to read, feels less spammy, is more focused on what users generally want in this view, has fewer extraneous features (why would someone ever want to “subscribe via SMS” to anyone, ever? Even in 2005?), and just looks more professional.

A screenshot of Amazon when it was somehow worse than it is today.
If you can believe it, Amazon was worse. Credit.

Can it really be argued that software of the past was better than today’s? Would anyone sincerely like to RETVRN to the GeoCities days? Or even to 2008, the year of the Motorola Sidekick?

More technical expertise, more fun

Though I hear frequently from designers (usually making it a point to explicitly refer to themselves as “UX designers,” rather than more common terms today, like “product designer”) that UI designers are the cause of seemingly everything that’s terrible about the tech industry, I just don’t know how that’s possible.

Cheese with three circles highlighting the phrase “UI designers cause cancer.”
Behold: the cheese of truth.

The claims, as mentioned above, are frequently as follows:

  • There was a moment when “UI design came in” and “changed” the industry to be “UX/UI,”
  • There was a time before UI designers “came in,” and it was glorious,
  • There was a time when research mattered more, and better products were made.

But I don’t see evidence for these claims at all. In fact, I only see evidence to the contrary:

  • I don’t recall a specific moment when there was some kind of mass influx of visual designers into the industry. In fact, I remember when many of today’s product designers, now upset at those who have visual skills, staked their own beginnings as “web designers,” themselves mostly just worse graphic designers making templates and marketing sites online,
  • I don’t recall a time when there was a golden age in digital products before this point. The internet (and software in general) was objectively horrible, mostly did stuff no one actually wanted (solved problems no one had), and often made things worse for people; its adoption was correspondingly low,
  • I don’t recall a time when research bore the fruits of its labor more than it does today: the age of infomercials and useless software is truly over. The competition is simply too stiff, and consumers are increasingly informed and discerning. Flop products don’t sell, and most businesses fail. Consumer tolerance is so low it’s basically catching drinks in a tavern in hell.

In fact, the internet was so bad that a writer at Newsweek (an incredibly popular publication in the States) predicted it would flop back in ’95:

What the Internet hucksters won’t tell you is that the Internet is one big ocean of unedited data, without any pretense of completeness. Lacking editors, reviewers or critics, the Internet has become a wasteland of unfiltered data. You don’t know what to ignore and what’s worth reading. Logged onto the World Wide Web, I hunt for the date of the Battle of Trafalgar. Hundreds of files show up, and it takes 15 minutes to unravel them — one’s a biography written by an eighth grader, the second is a computer game that doesn’t work and the third is an image of a London monument. None answers my question, and my search is periodically interrupted by messages like, “Too many connections, try again later.”

He had a point. The internet was really that bad.

Nonetheless, statements like “no online database will […] change the way government works” are objectively hilarious in an age of cold wars waged by authoritarians with creepy databases of dissidents and religious minorities, squabbling over free hearts and minds on the internet. This serves as a reminder that, no matter how confident someone’s opinion sounds, it can still be wrong.

Anyway, it seems obvious to me that if Apple released the iPhone 384986589268 with a regression making its OS look like the very first one, the public would laugh Apple all the way to a flatlined stock price. Consumers have higher standards today. For anyone considering building digital products as a business, the high barrier to entry caused by those standards is real, and it would show up in even surface-level market research.

A screenshot of the original iOS interface.
Be honest: you hate this. Credit.

This is all to say that most customers do care about aesthetics, at least in some capacity. The strength of this preference varies across products and markets, and aesthetics can, at times, become a differential market advantage.

Take aesthetics in personal fashion, for example: most consumers are very, very sensitive to them. There are brands that exist to sell non-aesthetic clothes, but these are not widespread. Most consumers are very particular, and enjoy trendy aesthetics.

Or consider the aesthetics of furniture and homes, where consumers prefer beautiful, professional-looking furniture to cheap, ugly furniture. Consumers also go out of their way to spend a ton of money with contractors to install prettier kitchen cabinets and stainless steel appliances, all because they look better.

Human beings do have aesthetic preferences; we are a species intelligent enough to develop these kinds of advanced preferences and tastes. When trying to sell something on a market, anyone researching the probability that their proposed product will succeed has to determine how sensitive their target market will be to unprofessional-looking work, and what they can get away with. In some markets, that margin of error is razor-thin (see: Apple’s market).

“Dribbblization of design,” and other things

Another thing I notice is that the view that visual design is the stuff of inferior, simpleminded, drooling Dribbblers is rife in design programs (e.g., bootcamps and universities). It goes without saying that these programs, themselves, are recent phenomena (I will make no commentary on the irony of this).

As some of you know, I’ve mentored probably around eighty people, all of them from these programs (with the exception of just one), and I have not seen a single student portfolio with acceptable product thinking, to say nothing of its visual design. Without exception, I’ve heard from them that what they are taught about product design is that they “do research” and “UX,” and that “UI” is a “separate job.”

Right, except that this is a severe distortion, and this is not how the market operates today. For expedience, I’ll quote myself in Things you might want to avoid as an early career designer:

In reality, you do not get to choose whether you are a “UX” or “UI” designer in today’s market. This is an academic discussion about a market that no longer exists — a luxury belief. The reality is that you are going to experience increased difficulty finding — and retaining — customers as a designer if you do not make professional looking work. When FAANG companies hire designers under titles like “UX designer,” or “product designer,” or “interaction designer,” these are all the same jobs: you are planning out how some software will work. But there is an additional expectation — not always explicitly stated — that you will put in the effort to make the product or feature look professional. There will not be an attendant visual designer at your beck-and-call to jazz up your schematics. It’s just you.

I have the uncomfortable job of telling these hopefuls that not only does their portfolio site (and its content) look unprofessional, causing opportunity loss for them on the market, but that their work also demonstrates no understanding of how businesses, products, services, and software fundamentally work. Unfortunately, I see portfolios from these programs filled to the brim with infantile, implausible software. To quote myself in You don’t need mentors to tweak your portfolio — you need to start over:

By “serious case studies,” I mean you need plausible case studies. Those prompts you got in your program about making a snack delivery service for your favorite brand of snacks, or a food delivery app for your favorite restaurant: these are not plausible. We already have a food delivery app: it’s called Uber Eats. You don’t need to revisit that. So, for example, these are prompts from one of those programs:

- Design a game preview app for an arcade.
You and I both know that no one will ever need or want this.

- Design a food delivery app for a bakery.
Uber Eats.

- Design a delivery tracking app for a sushi restaurant.
Uber Eats.

- Design a flower catalogue app for a florist.
Squarespace.

- Design an order tracking app for a trendy florist.
Shopify.

- Design a menu and payment app for a beachside snack shop.
Physical menu & Square.

Do not use these. These are easily solved with already available, off the shelf solutions. Try something realistic. You can add a feature into an existing product … Or you can improve the navigation of a certain, famous social media app. … Or you can also go big: could LinkedIn have a mentor and mentee matching service in their product? …

Regardless of your choice …, make sure it’s realistic. By this, I mean that it passes the “reasonable person test…”

Further, in Is “Dribbblization” really that bad, though?, I write:

For all the talk about how unimportant visual design is, particularly from product design programs, I would instead expect to see graduates coming out as incredible entrepreneurs, building unforgettable products and features that people want to bang their doors down to buy off them, even if they look unprofessional. But I don’t see that. I see people spending thousands to come out with not only poor visual design, but also implausible case studies — ones that show not even a cursory understanding of markets and products, and then talking down on people who have the interest, humility, and grit to put in the work to at least make something excellent from a visual perspective.

I would expect that, for all the time not invested in craft and visual excellence, designers would be highly technically inclined, or making incredible products. But I don’t see that. I find this to be a huge disservice to hopefuls, who frequently cannot see what I see in their uncompetitive work.

All right, so what’s really going on?

Some have asked me why I believe this amount of vitriol for visual designers exists. I think every designer’s answer will be different but, at times, I wonder if it isn’t the result of a few possibilities:

  • A lack of technical knowledge about and appreciation for graphics standards (and related technologies),
  • General human tribalist tendencies — “I’m smart, you’re not, I work on hard, valuable stuff, you work on the easy, frivolous stuff,”
  • Conformity, especially if one’s coming from a bootcamp (anyone who’s anyone feels pressure to agree with these mainstream views),
  • If I’m honest: resentment caused by jealousy, probably stemming from insecurity that others have put in the effort to create visual excellence and reap its benefits on the market while they haven’t and don’t, and,
  • Potentially, some anger towards engineers, who are valued more highly in the production of software.

Or, perhaps, it’s some combination of these things. My experience has been that designers routinely write off the difficulty required to create great visuals, and not just in terms of design, but in engineering. For example, from the same article above, I write:

Building beautiful products takes an enormous amount of skilled, technical execution from engineers. Consider that entire game engines are written and rewritten just to improve things like collision to create reasonable behaviors when objects collide; physics are replicated with painstaking dedication in game engines; algorithms are thought and rethought — and rethought again — just to create plausible visual randomness in assets; asset generation and management is enormously costly, and some people are hired just to make technical pipelines for artists to create convincing grass textures on the floors of games; writing shaders that are performant across all kinds of devices to access internet products (e.g., browsers) requires lots of dedicated, low level graphics programming, too; and so on. This is extremely admirable and takes such an incredible amount of technical experience — and raw grit and patience. […]

As organizations that maintain important developer tooling and services notice the way that aesthetics are pushed and consumer expectations evolve, they work hard to create standards that accommodate all these changes. This improves usability and performance across internet services more generally. This means that, as technologies that are used on the internet improve, the products and services that use those technologies also improve — the resultant effect is that more is possible with less (e.g., less strain on our devices and networks). As demand for more complex assets on the internet grows, technology standards we all rely on respond — and become more resilient. We all win in this equation.
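
To make the flavor of that engineering work concrete, here is a minimal sketch of my own (an illustration, not code from any real engine) of the simplest primitive behind the collision systems described in the quote: an axis-aligned bounding-box overlap test. Everything engines actually ship (broad-phase indexing, swept tests, contact resolution) is layered on top of checks this small.

```typescript
// A hypothetical, minimal AABB (axis-aligned bounding box) overlap test.
// Real engines add broad-phase spatial indexing, swept tests, and contact
// resolution on top of primitives like this one.

interface AABB {
  x: number; // left edge
  y: number; // top edge
  width: number;
  height: number;
}

// Two boxes overlap only if their ranges overlap on both axes.
function intersects(a: AABB, b: AABB): boolean {
  return (
    a.x < b.x + b.width &&
    b.x < a.x + a.width &&
    a.y < b.y + b.height &&
    b.y < a.y + a.height
  );
}

// Usage: a player box grazing a wall.
const player: AABB = { x: 0, y: 0, width: 10, height: 10 };
const wall: AABB = { x: 8, y: 4, width: 10, height: 10 };
console.log(intersects(player, wall)); // true
```

Even this trivial check hides judgment calls (inclusive vs. exclusive edges, floating-point tolerance), and it sits at the very bottom of the stack the quote describes.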

My experience is that the dismissal of visual excellence in software occurs, at least in part, because designers do not know enough about how graphics work to see their value: how important it is that the technical standards underpinning the internet (and native platforms) improve over time, how visual excellence pushes on those boundaries, and what benefits this confers on everyday people.

I also think that many designers do not appreciate how hard it is to make something look beautiful. I see this the most in the people I mentor: when we start our relationship, they frequently speak disdainfully about visual design, consistent with what they were taught. I would expect someone who speaks disdainfully of a topic to have excellence and experience in that area, but they never do. As mentioned, I’ve yet to see a single individual from a design program with acceptable visuals. I find this puzzling. As our relationship develops, they become less haughty and disdainful of visual excellence, and start putting in the actual effort to build up that skillset. They all indicate, unequivocally, that it is the absolute hardest one for them to acquire control over.

This is because visual design is hard. It is not a clear and direct process of reasoning, as interaction design is. Consider that founders routinely explain to venture capitalists in under an hour what their product is, with little time to sell them on it. It is not difficult to explain why some software would work, why there’s market demand for it, and why someone should invest in it. It is, however, very difficult to explain why something is beautiful to humans.

There is a way to build a very strong case for why a product will meet a market demand and become a viable business. One can source evidence in analytics (if an MVP is out), discuss market trends, or talk about market size and opportunity. There is no “evidence” to pull about why something is beautiful. There’s no reasoning about why this strange orange looks beautiful on this eggshell white. It just does. This is what makes visual design hard. It’s all about getting reps in, not following sound reasoning; it’s about developing an adult taste for strange food. It just takes time. This is why mentees demonstrate incredible progress, week over week, when it comes to making software and reasoning about it, but painfully slow progress in developing their taste.

Interestingly, I notice a real absence of retaliation among those skilled in visual work. Whenever I inquire about this topic, skilled visual designers do notice the phenomenon I’ve described in this opinion, but they seem to shrug, indicating that they feel these sentiments are just hot air and hysteria. In my discussions with them, I also commonly hear sentiments like “visual design for interfaces is easy,” which might catch less skilled designers off-guard.

But it’s hard to disagree: making visuals for interfaces is among the easiest kinds of visual design to do. It makes the amount of noise about how dangerous visual excellence is, how awful its effects are on the industry (and so on, ad nauseam), seem almost out of step with the dead-simple reality of making rectangles look good.

Certainly, interfaces are intended to cut all extraneous content and features from views and be as focused as possible, so visual design for these surfaces tends to be very easy: simple rows and headings, sometimes columns if the product calls for it. There’s no complex layout work (nothing overlaps, no fractional layouts); colors and type are usually conventional; there’s lots of white space that reuses the same spacing values over and over; and there’s no treating of imagery (e.g., via shaders), since content is dynamic and has to go untreated. Nonetheless, unskilled designers still mishandle this, which is all too easy to do.
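
To illustrate just how small that visual vocabulary tends to be, here is a minimal sketch (hypothetical tokens of my own, not any particular design system’s) of the kind of reused values most interface work runs on:

```typescript
// Hypothetical design tokens: a sketch of the small, repeated vocabulary
// that interface visual design typically operates within. Real design
// systems are larger, but the shape is the same: a short spacing scale,
// a small type ramp, and a conventional palette, reused everywhere.

const spacing = [0, 4, 8, 12, 16, 24, 32, 48] as const; // px steps

const typography = {
  heading: { size: 20, weight: 600, lineHeight: 28 },
  body:    { size: 14, weight: 400, lineHeight: 20 },
  caption: { size: 12, weight: 400, lineHeight: 16 },
} as const;

const color = {
  text:    "#1a1a1a",
  muted:   "#6b6b6b",
  surface: "#ffffff",
  accent:  "#2f6fed",
} as const;

// A settings row is just these tokens applied in order. A string stands
// in for a real UI component here to keep the sketch self-contained.
function settingsRow(label: string, value: string): string {
  return `[pad ${spacing[4]}px] ${label} @ ${typography.body.size}px: ${value}`;
}

console.log(settingsRow("Notifications", "On"));
```

Most screens in most products are permutations of a vocabulary about this small, which is exactly why these surfaces are easy relative to, say, game or editorial visuals.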

It is no wonder, then, that most skilled visual designers confess to finding those surfaces the least interesting to work on, from a visual perspective.

I’m not sure how to end this opinion, sorry

From what I can tell, every year is software’s best year. By every measure, digital (and even physical) interfaces significantly improve, year over year. I would need a lot of evidence to convince me of the opposite: that my youth filled with garbage software was better, actually — but I’m all ears.

For my part, my quality of life has improved substantially due to improvements in software. As is the case for other consumers, I have a low tolerance for nonsense from software. I want things to “just work,” to borrow Apple’s branding. I know you all do, too. Where I’m told that “the UX” of software has worsened as “the UI” has improved, I see the opposite: as digital products have matured, they have worked better, faster, and given me what I want more efficiently, all while leaving their GeoCities-looking past further and further in the distance. I think there is a real and pressing case to be made that improvements to visuals and improvements to functionality are, in fact, strongly correlated.

I’ve noticed this among the designers I most respect: they seem to be good at everything. They are skilled developers and visual designers, they make awesome products, and they seem able to solve every brain-busting interaction you throw at them. They can think of everything. They can solve anything. They just know everything. From what I’ve seen, it is possible to “do it all,” provided one has the discipline for it. Chatting with those designers, they say the same. It’s probably just the can-do attitude that does it. Rather than self-limit and believe that “no one can do it all,” that “companies expect too much,” and that they are “looking to decrease our wages by hiring one person for everything,” these designers seem to say “challenge accepted.” I’m happy with that.

For my part, if my very online childhood was the golden age of software, born of a glorious time when only “UX” design and “research” dominated, and if the outcome of that was the kind of software I grew up with rather than the software I have today, then I’m okay with that paradise being lost. Regardless, it makes me wonder whether “UX design” and “research” really return the results they’re said to, after all.
