Plum Styler Journey

Birth of a 3D Inspiration Tool at the Service of Design

Xavier Ayme
Plum-living
4 min read · May 5, 2023


Plum was born in Paris in 2020 to allow everyone to live in a place they have created. Our mission is to inspire our clients, guide them, and offer them the best products to create the interior they dream of. On our website, you can find countless facades, accessories, and paints to personalize an Ikea kitchen or dressing room. But what people don’t always see at first glance is the cutting-edge tech that we built to better inspire and serve our clients.

This kitchen does not exist (yet!)

From a Client Need to a Tech Problem

Defining ourselves as an inspirational brand sets the bar high in terms of visual quality. We needed to display high-quality images in line with our brand positioning, to help interior designers and clients picture themselves in the interior they are designing.

We wanted our clients to be able to visualize their kitchen, bathroom, or dressing room if they combined one front color and shape with a specific handle. But with the number of colors, designs, and finishes Plum offers, multiplied by all the accessories, the possible combinations were extremely numerous 🤯, which made it quite difficult for our clients to picture the result.

So we started wondering: how could technology help our clients better visualize their design? We started dreaming of a 3D-based tool that could display a photorealistic visualization of the client’s inspiration (model, shape, color, and accessories). Embedding the inspiration into the client’s own environment was beyond our technical reach at the time, but we could at least place it into various preset layouts close to actual customer projects.

But before reinventing the wheel, let’s see what was already available on the market.

Assessing Two Main Technologies

We quickly identified two candidates: real-time 3D and precomputed 3D. Could either of them meet our needs?

Real-time 3D for web browser

Real-time 3D covers numerous libraries and techniques — you may have heard of WebGL, babylon.js, or three.js, for example. In practice, the client’s browser loads the 3D model, and the rendering is performed directly by the computer that displays it.

Note: If you are not familiar with the term “rendering”, the idea is to create a picture similar to what the human eye or a camera sees from the interaction between a 3D model (the real world) and billions of light rays emitted by light sources — like the sun or a lamp. Their reflection and diffusion on the 3D model and its surroundings are computed. The process is called “rendering”, and the output is a 2D image.
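To make the idea of rendering concrete, here is a deliberately tiny toy renderer — one sphere, one light, straight Lambert shading, no reflections. It is nowhere near the billions of rays of a production renderer, but it shows the principle: cast a ray from the camera through each pixel, intersect it with the 3D model, and turn the light interaction at each hit point into a pixel value.

```python
import math

def render_sphere(width=32, height=32):
    """Toy render: a unit sphere at the origin, seen from z = -3,
    lit by one directional light. Output is a 2D grid of brightness
    values -- the simplest possible 3D-model-to-image pipeline."""
    cam = (0.0, 0.0, -3.0)              # camera position
    light = (0.577, 0.577, -0.577)      # normalized light direction
    image = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Ray through this pixel on a virtual screen at z = 0
            px = (x + 0.5) / width * 2 - 1
            py = 1 - (y + 0.5) / height * 2
            d = (px, py, 3.0)
            n = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
            d = (d[0] / n, d[1] / n, d[2] / n)
            # Ray/sphere intersection: |cam + t*d|^2 = 1
            b = 2 * (cam[0]*d[0] + cam[1]*d[1] + cam[2]*d[2])
            c = cam[0]**2 + cam[1]**2 + cam[2]**2 - 1
            disc = b * b - 4 * c
            if disc >= 0:
                t = (-b - math.sqrt(disc)) / 2
                hit = tuple(cam[i] + t * d[i] for i in range(3))
                # Lambert shading: brightness = max(0, normal . light);
                # on a unit sphere at the origin, the normal is the hit point
                image[y][x] = max(0.0, sum(h * l for h, l in zip(hit, light)))
    return image
```

Real renderers trace many bounces per ray and integrate over thousands of light paths per pixel, which is exactly where the computation time goes.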

Real-time 3D offers total flexibility on the 3D model and the number of light sources interacting with it. However, for the experience to feel fluid, each render needs to happen in a fraction of a second, otherwise the user feels a “lag”. Even with the best laptop on the market, that time budget limits the degree of photorealism that can be achieved: the computer simply does not have enough time and computing power to simulate enough light rays and reflections. This is a trade-off gamers know well: remember what video games 🕹 looked like 10 years ago? And 20 years ago?

Put simply, and leaving aside the steady technical progress in this field, photorealism is a function of two things: the computation power available (a.k.a. the computer speed) and the time allowed for rendering.

Unfortunately, in 2023, despite the huge progress made every year by the manufacturers (cf. Moore’s Law), an average computer cannot achieve photorealism in near real-time. It may be the case in a couple of years, but it is not yet possible.

This was a dealbreaker for us. The other option we had in hand was precomputed 3D.

Precomputed 3D

With precomputed 3D, all images are precomputed on a rendering server and stored in the cloud. Coming back to my previous assumption of the computation power and the rendering time necessary to achieve photorealism:

  • we can use dedicated made-for-purpose servers (so computation power is increased significantly)
  • and we have plenty of time! (let’s say 24 hours, compared to a fraction of a second)

Under those favorable conditions, we can definitely achieve photorealism and render a picture that cannot be differentiated from a photograph.
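Taking the illustrative figures above, the gap between the two time budgets is easy to quantify: a real-time frame at 60 fps versus a 24-hour offline render.

```python
# Rendering time budget: one real-time frame at 60 fps
# vs. an offline server render (the illustrative 24-hour budget above).
realtime_budget_s = 1 / 60        # ~16 ms per frame
offline_budget_s = 24 * 3600      # 86,400 seconds

ratio = offline_budget_s / realtime_budget_s
print(f"An offline render can spend {ratio:,.0f}x more time per image")
```

That is roughly five million times more time per image on the same hardware — before even counting that the dedicated server is faster than a laptop to begin with.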

This plant is a precomputed 3D render. Would you have guessed?

How could we implement it? Whenever a client selects a configuration, the website fetches the corresponding picture and displays it on the client’s screen. Easy life, quite a simple web service to develop!
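A minimal sketch of what such a lookup could look like, assuming a CDN of precomputed renders and a deterministic configuration-to-image key — all names here (the CDN base URL, the option fields) are hypothetical illustrations, not Plum’s actual service:

```python
import hashlib

# Hypothetical CDN holding the precomputed renders (illustrative URL).
CDN_BASE = "https://cdn.example.com/renders"

def image_url(config: dict) -> str:
    """Map a client configuration to the URL of its precomputed image."""
    # Sort the keys so the same configuration always yields the same key,
    # whatever order the client picked the options in.
    canonical = "|".join(f"{k}={config[k]}" for k in sorted(config))
    key = hashlib.sha256(canonical.encode()).hexdigest()[:16]
    return f"{CDN_BASE}/{key}.jpg"
```

The website never renders anything itself: it just resolves the configuration to a key and serves a static image.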

But in our simple demo kitchen, for a single front, with 30 colors available, 4 designs and 5 finishes, a split between high and low blocks, and a selection of accessories like handles, taps, and plinths, this gives us about… 100,000 billion possible combinations, and therefore 100,000 billion images to generate and store.
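The exact breakdown behind that figure isn’t spelled out here, but the multiplication is easy to sketch: with 600 variants per front, a handful of independently styled units is already enough to reach that order of magnitude (the five-unit split below is a hypothetical illustration, not the real kitchen layout).

```python
# How option counts multiply (illustrative breakdown).
colors, designs, finishes = 30, 4, 5
per_front = colors * designs * finishes   # 600 variants per front

# If, hypothetically, 5 cabinet runs can each be styled independently:
combinations = per_front ** 5             # ~7.8e13, i.e. ~100,000 billion
print(f"{combinations:,}")                # 77,760,000,000,000
```

Every independently configurable element multiplies — not adds to — the total, which is why the count explodes so quickly.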

Our demo Kitchen and its possible variations

Is it feasible? Obviously not:

  • Each image has an average size of ~100 kB
  • 100 kB × 100,000 billion images adds up to 10,000 billion MB, i.e. 10 exabytes of data,
  • So basically, to store all the combination images from our simple kitchen configuration, we would need… a whole datacenter, and a few hundred million euros to render them.
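The arithmetic behind those bullets, spelled out:

```python
# Back-of-the-envelope storage check for the figures above.
images = 100_000e9           # ~100,000 billion images
size_kb = 100                # ~100 kB per image

total_kb = images * size_kb  # 1e16 kB
total_eb = total_kb / 1e15   # 1 EB = 10^18 bytes = 10^15 kB
print(total_eb)              # 10.0 exabytes
```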

This goes far beyond Plum’s purchasing power 💰😝.

In this series of articles, I’ll share our journey to create the Plum Styler — a look behind the scenes.
