AI visual design is already here — and it won’t hesitate to take over your petty design job

Rexroth K.Xu
6 min read · Nov 16, 2017


$25.3 billion in sales: Alibaba Group's biggest annual online shopping festival, the Singles' Day Event, has racked up more revenue than its American counterparts did last year. To put that number in perspective, it roughly doubles 2016's Black Friday and Cyber Monday combined.

The shopatravaganza started generating hype days before the actual 11.11 sale date. Taobao and Tmall, Alibaba's main e-commerce platforms, host millions of small online businesses and brand stores, and Alibaba designers have had to work around the clock to produce the banners, UIs, and other visual deliverables the two platforms demand. That is, probably until now.

Introducing Alibaba LuBan (鲁班): an AI platform named after a legendary ancient Chinese engineer, capable of generating approximately 8,000 different banner designs per second.

Different banners appearing on different instances of the Taobao app, at the same time

LuBan was already in beta testing during last year's Singles' Day Event, and it generated over 170 million banners during that time. No one ever suspected that all the colorful 2016 designs were done by AI instead of humans; in a sense, Alibaba quietly conducted a successful Turing test on millions of online shoppers.

But LuBan did not stop there. During this year's Singles' Day Event, LuBan designed a whopping 400 million banners for an even wider range of products. If we assume it takes a human designer 20 minutes to design a single banner, then 100 designers working non-stop would need roughly 150 years to produce the same amount.
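That figure is easy to sanity-check. A quick back-of-envelope calculation, taking the 20-minutes-per-banner assumption at face value:

```python
# Back-of-envelope check of the "150 years" claim.
banners = 400_000_000        # banners LuBan generated this year
minutes_per_banner = 20      # assumed human design time per banner
designers = 100              # hypothetical team size

total_minutes = banners * minutes_per_banner
minutes_each = total_minutes / designers
years = minutes_each / 60 / 24 / 365

print(f"{years:.0f} years of non-stop work per designer")  # ~152 years
```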

LuBan is powered by machine learning and is trained on millions of design samples.

According to Yue Cheng, the head of the Alibaba LuBan AI Design project, the machine learning pipeline LuBan employs consists of four major steps:

Step 1: To let the machine understand the components of a design.
The design team manually labels the original layers in the sourced design documents and summarizes different design styles for the machine to understand.
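As a rough illustration of what such labeled data might look like, each layer of a source banner could be tagged with a role and a bounding box. The field names below are hypothetical, not LuBan's actual schema:

```python
# Hypothetical labeled-layer record for one source banner design.
# Field names are illustrative only, not LuBan's real schema.
labeled_banner = {
    "style": "flat, high-contrast, festive",
    "layers": [
        {"id": 1, "role": "background", "bbox": [0, 0, 750, 390]},
        {"id": 2, "role": "product",    "bbox": [420, 40, 700, 350]},
        {"id": 3, "role": "copy",       "bbox": [40, 80, 380, 180]},
        {"id": 4, "role": "logo",       "bbox": [40, 20, 160, 60]},
        {"id": 5, "role": "decoration", "bbox": [0, 300, 750, 390]},
    ],
}
```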

Step 2: To establish a design element library.
After the machine learns the general design frameworks, it needs a huge amount of design data to train on. The team first builds a library of design elements, then lets the machine extract features from the raw images and cluster those features, while inspecting the overall quality of the library and screening for copyright issues.
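A minimal sketch of that extract-then-cluster idea, using an off-the-shelf pretrained CNN for features and k-means for grouping. LuBan's actual models are not public, so this is only an analogy, and the element images here are random stand-ins:

```python
# Sketch: extract visual features from element images with a pretrained CNN,
# then cluster them so visually similar elements land in the same group.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans

# Pretrained ResNet with its classification head removed -> 2048-d features.
backbone = models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(images):
    feats = []
    with torch.no_grad():
        for img in images:
            x = preprocess(img.convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)

# Stand-in element images; in practice these would be real cutouts
# (ribbons, product shots, backgrounds) collected into the library.
elements = {
    name: Image.fromarray(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
    for name in ["ribbon_01", "ribbon_02", "gift_box"]
}
features = extract_features(elements.values())
labels = KMeans(n_clusters=2, random_state=0).fit_predict(features)
print(dict(zip(elements, labels)))
```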

Step 3: To let the machine generate designs.
Similar to AlphaGo, the algorithm treats a virtual canvas like a Go board and places design elements onto it. The team employs reinforcement learning here: the machine first produces some random designs, receives meaningful feedback on them, and keeps iterating until it learns what kind of design is "good" or "preferred".
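The loop below is a toy stand-in for that explore-feedback-iterate cycle: plain random search over element placements on a small grid, scored by a hand-written reward. It is not LuBan's actual reinforcement learning setup, whose policy and reward signals are not public:

```python
# Toy stand-in for the "random design -> feedback -> iterate" loop.
# Plain random search with a hand-written reward, not LuBan's real RL.
import random

ELEMENTS = ["copy", "product", "background", "logo", "decoration"]
GRID = 8  # an 8x8 virtual canvas, loosely analogous to a Go board

def random_layout():
    return {e: (random.randrange(GRID), random.randrange(GRID)) for e in ELEMENTS}

def reward(layout):
    # Invented score: penalize elements stacked on the same cell,
    # mildly reward keeping the logo near the top-left corner.
    positions = list(layout.values())
    overlap_penalty = len(positions) - len(set(positions))
    logo_x, logo_y = layout["logo"]
    return -overlap_penalty - 0.1 * (logo_x + logo_y)

best, best_score = None, float("-inf")
for _ in range(1000):              # explore random designs, keep the best
    candidate = random_layout()
    score = reward(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(best_score, best)
```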

Step 4: To evaluate machine-generated designs.
Machine-generated designs are evaluated on both "aesthetics" and "commercial value". The aesthetic evaluation is done by professional agencies, while the commercial evaluation is reflected in indicators like click-through rate (CTR).
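As a simplified illustration of how the two signals could be combined into one ranking (the numbers and the 0.4/0.6 weighting are invented for this example):

```python
# Simplified illustration of ranking generated banners on aesthetics + CTR.
# The scores, traffic numbers, and 0.4/0.6 weighting are all invented.
banners = [
    {"id": "A", "aesthetic": 0.82, "impressions": 120_000, "clicks": 3_000},
    {"id": "B", "aesthetic": 0.74, "impressions": 115_000, "clicks": 4_140},
]

for b in banners:
    b["ctr"] = b["clicks"] / b["impressions"]
    # Normalize CTR against an assumed 5% ceiling so both terms are ~[0, 1].
    b["combined"] = 0.4 * b["aesthetic"] + 0.6 * (b["ctr"] / 0.05)

best = max(banners, key=lambda b: b["combined"])
print(f"banner {best['id']} wins: CTR={best['ctr']:.2%}, score={best['combined']:.2f}")
```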

The rule of thumb is: any design that can be handled with fixed patterns can probably be handled by AI too, and AI will do it better and faster than humans. For example, a banner can be dissected into five basic sub-elements: copy, product shot, background, logo, and decorative artifacts. Every banner can be seen as one combination of choices for these sub-elements, and designing one becomes a matter of arranging the elements so that the banner is visually appealing as a whole.
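To make the combinatorics concrete, filling the five slots from small pools of variants (the pools below are made up) is a single Cartesian product:

```python
# Enumerate candidate banners as combinations of the five sub-element slots.
# The variant pools are made up purely to show the combinatorics.
from itertools import product

copy_lines    = ["50% off today", "Free shipping"]
product_shots = ["shoe_front.png", "shoe_side.png"]
backgrounds   = ["red_gradient", "confetti"]
logos         = ["brand_white", "brand_black"]
decorations   = ["ribbon", "none"]

candidates = list(product(copy_lines, product_shots, backgrounds, logos, decorations))
print(len(candidates))  # 2 * 2 * 2 * 2 * 2 = 32 candidate banners to rank
```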

Not just banners: UI, and most likely UX design too

Some may think that, compared to junior-level design work such as banners, more complex tasks such as UI design will be harder for AI to handle. To some extent this is true: different digital products have their own unique purposes, and a general-purpose design AI is not here yet. However, the underlying idea is the same: all UIs can be seen as permutations of some basic sub-elements. Moreover, people are downloading new apps less and less; most app usage is concentrated in a handful of super-apps (Facebook, Twitter, Instagram, WhatsApp, Weibo, or WeChat), and even these apps are becoming more similar in terms of functionality, visual design, and underlying user experience framework. Since these frameworks are heavily tested and proven to work well, and people have been trained to use them, imitating them dramatically reduces the learning cost for users. A good strategy for a new app, therefore, is to replicate the existing frameworks and then beautify them with an appealing visual façade.

App UI designs are becoming more generic, UX-wise.

Suddenly, we don’t even need a general-purpose design AI anymore; we just need one that understands current design trends well enough to wipe out some junior-to-intermediate UX design jobs too.

Building on this thought, it is not too far-fetched to say that some junior-to-intermediate front-end development jobs are also threatened: the patterns in UX design have corresponding patterns in front-end construction, and those patterns can, in theory, also be learned by a well-trained AI.
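A toy illustration of that UX-pattern-to-front-end-pattern mapping: a flat component spec turned into markup by nothing more than template rules. The spec format and templates are invented for this example:

```python
# Toy spec-to-markup generator: each UX pattern maps to a front-end template.
# The spec format and templates are invented for illustration only.
TEMPLATES = {
    "navbar": '<nav class="navbar">{title}</nav>',
    "card":   '<div class="card"><h3>{title}</h3><p>{body}</p></div>',
    "button": '<button class="btn">{title}</button>',
}

def render(spec):
    # Unused keys in each component dict are simply ignored by str.format.
    return "\n".join(TEMPLATES[c["type"]].format(**c) for c in spec)

screen = [
    {"type": "navbar", "title": "Shop"},
    {"type": "card",   "title": "Sneaker X", "body": "50% off today only."},
    {"type": "button", "title": "Add to cart"},
]

print(render(screen))
```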

All is not lost

If we view the introduction of the first-generation iPhone as the beginning of the mobile revolution, the design field has enjoyed a burgeoning market for more than 10 years now. Today the field is livelier than ever: new visual styles are introduced every day on sites like Dribbble and Behance, thousands of unique apps sit in the app stores, and a plethora of new design software has appeared beyond the good old Adobe suite. Even before LuBan, we designers already had that one question in mind: what would the future of design be?

LuBan generating new banners based on selected design style

One thing is certain: gone are the days when a single designer was hired full-time just to design marketing posters. With the aid of modern design tools like Sketch, Figma, or Framer, and now new competition from design AIs like LuBan, it is no longer enough for us to have only one good trick up our sleeves. We must reinvent ourselves on a higher level: understand the business as a whole, empathize with all stakeholders from start to end, and keep our vision from tunneling onto the single design task at hand.

We have all heard terms like "full-stack designer" or "service designer", and some of us have probably aspired to become one, but we are often bogged down by the 20 artboards due before next Tuesday's milestone meeting. Don't get me wrong, those milestone meetings are important: AI can't yet (fully) process a client's verbal cues, and it can't yet (fully) pick up a subtle shift of sentiment in a client's voice, and that's where the human factor comes in. We can interpret human needs with human empathy, and at the same time we must strive for a more comprehensive understanding of, and deeper involvement with, the service, while continuing to sharpen our basic design skills.

It seems like an uncertain time to be a designer (or any profession, really), but it is also the most rewarding opportunity ever presented to us. After all, in an era when a company can expense a DesignBot-3000 to produce 100 app concepts in 1 second, it will still (hopefully) take a savvy human designer who deeply understands the business to choose which concept fits the project best.


Rexroth K.Xu

Staff UX Designer, Alibaba Group | Cainiao Smart Logistics