Although defining the design space is a crucial step of the generative design process, it is only half of the problem description that the optimization algorithm needs to navigate the design space and find optimal solutions. The other half is the specification of objective and constraint functions that tell the algorithm how to evaluate each design. Later we will look in more detail at the difference between objectives and constraints and how they describe the goal of the optimization problem in different ways, but for now we will treat them both as measures of design performance, and discuss some general points about measuring our designs.
Why do we measure?
In short, we measure our designs for three reasons:
- We want to compare designs objectively (apples to apples)
- We want to evaluate designs based on non-intuitive measures
- We want to be able to explore more designs than is possible to visually inspect
The first two points are practical concerns, which may also apply to design practices outside of generative design. Even within a traditional design process, we might be interested in having strict numerical measures to make sure that we are evaluating various designs on a level playing field. For example, we might generate a basic budget for each design iteration to make sure we are not biased towards any particular design. We might also be interested in measuring things which are not easily perceived or are outside of our intuitive grasp. For example, we can use simulation tools to analyze the structural performance of a building or the total amount of daylight that enters it through the year, both things that are hard to tell simply by looking at a design.
The third point, however, is more specific to the generative design method. In fact, it could be considered one of the fundamental principles of generative design, which is that
we want to be able to learn from more designs than it is physically possible to generate or evaluate “by hand”.
With traditional parametric design, it is common practice for a designer to tweak all the different parameters of the model manually, and use their intuition to guide them to a favorable design. In this case there is no real need for specific measures of performance (although such measures might be used to help in the decision making process).
The generative design method, however, is concerned with using artificial intelligence to search through the space of possible designs both automatically and intelligently, with the hope of discovering design options that might escape the human designer’s intuition. Therefore we don’t want the computer to just start making a bunch of random designs — we want it to build some knowledge about how the design space works and where to find good designs. The problem, however, is that the computer has no intuition at all — it cannot reason about design the way we can. Therefore, the computer can only explore the design space based on strict numeric measures that can be deterministically computed from the model.
How do measures help us learn?
In principle, the search algorithm learns stochastically, by sampling various designs, measuring their individual performance, and then abstracting that knowledge to all other designs in the landscape. Let’s consider the box example again. Let’s say we wanted to find the box with the largest volume, and asked the search algorithm to find it for us. In the beginning, the algorithm has no concept of the design space other than the bounds of each dimension, and has no idea of how the individual parameters are related to our goals. So at first it samples various boxes from the design space randomly, keeping a record of the volumes of each one.
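This initial random-sampling phase is easy to sketch in code. The bounds and sample count below are illustrative assumptions, not values from the text:

```python
import random

# Hypothetical bounds for each box dimension (assumed for illustration)
BOUNDS = {"length": (1.0, 10.0), "width": (1.0, 10.0), "height": (1.0, 10.0)}

def sample_design():
    """Draw one random box from the design space."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}

def volume(box):
    """The measure: deterministically computed from the model."""
    return box["length"] * box["width"] * box["height"]

# Initial random sampling phase: keep a record of each design and its measure
random.seed(42)
records = [(design, volume(design)) for design in
           (sample_design() for _ in range(20))]
best_design, best_volume = max(records, key=lambda rec: rec[1])
```

At this stage the algorithm knows nothing beyond these recorded (design, measure) pairs; everything else it must infer from them.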
Assuming that the design space is continuous, the algorithm starts to build a mental model of how the goal we’re interested in relates to the position of the various options within the design space. Using this rough mental model, the algorithm can predict where better designs might occur, test those as well, and use the results to adjust its model. If our box model had two input parameters (length and width), we could visualize the algorithm’s ‘mental model’ as a two-dimensional surface connecting points whose x and y dimensions represent the input parameters, and whose z height represents the volume of the box.
We can then imagine the algorithm’s learning process as trying to fit this surface as closely as possible to the data it gets from designs. Although it is harder to visualize, this concept can be extended to high-dimensional spaces as well, and we can imagine that for each goal the algorithm is trying to fit some high-dimensional hyper-surface to the data it’s getting from each design it tests. We call this hypothetical surface a response surface, since it measures how the value of each of the design’s goals ‘responds’ to changes in the input parameters.
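As a rough sketch of what fitting a response surface means, the example below fits a small polynomial surface to sampled (length, width, volume) data by ordinary least squares. The basis functions and the fixed box height are assumptions made for this example; real optimizers use more sophisticated surrogate models:

```python
import random

def features(x, y):
    # A simple polynomial basis for the response surface (an illustrative choice)
    return [1.0, x, y, x * y]

def fit_surface(samples):
    """Least-squares fit of surface coefficients to (x, y, z) samples."""
    n = 4
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for x, y, z in samples:
        f = features(x, y)
        for i in range(n):
            b[i] += f[i] * z
            for j in range(n):
                A[i][j] += f[i] * f[j]
    # Solve the normal equations A c = b by Gaussian elimination with pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (b[i] - s) / A[i][i]
    return coeffs

def predict(coeffs, x, y):
    """Query the fitted response surface at a new design point."""
    return sum(c * f for c, f in zip(coeffs, features(x, y)))

# Box volume with an assumed fixed height of 2.0: the true response is z = 2*x*y
random.seed(0)
samples = []
for _ in range(30):
    x, y = random.uniform(1, 10), random.uniform(1, 10)
    samples.append((x, y, 2.0 * x * y))
coeffs = fit_surface(samples)
```

Because the true response happens to lie in the span of the basis here, the fit recovers it exactly; in general the surface is only an approximation that improves as more designs are sampled.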
How do we choose the measures?
As with the parametrization of the design space, there are no strict rules about which measures the designer should choose, or how they should be implemented. Therefore, the description of measures is part of the design problem, along with the definition of input parameters. If the parameters were a set of ‘knobs’ which could be tweaked by the algorithm, the measures are a set of ‘gauges’ that report to the algorithm how well each design is performing.
While there are no clear rules for how measures should be chosen, they should capture as completely as possible everything that is important to us about a design problem, since they will be the only thing guiding the algorithm through its search. At this point you might spot a problem: if the design measures need to encode everything that is important about the design problem, but they also need to be numerical and deterministically computed from the model, doesn’t this assume that everything that is important to the design problem can be numerically computed?
Indeed, this is a big concern with generative design, since clearly not everything that is of value in a design can be reduced to a series of numbers, especially subjective values such as beauty, style, or individual preference. If we try to categorize what can be measured about an architectural design, we might come up with three categories:
- Simple values such as floor area or height which can easily be quantified but are also easy to understand intuitively.
- Complex values that can theoretically be quantified but are difficult to intuit or compute. This may include simulation-based measures such as structural simulation or fluid dynamics, or occupant-level agent behavior such as crowding or routing. These are things that even experts can’t easily gauge just by looking at a design, so including them in the set of measures is very likely to generate some unexpected discoveries.
- Non-measurable values such as personal preference.
In principle, generative design can only deal with the first two categories, and we may never be able to teach a computer to judge designs based on their beauty or style. Most designers, however, would agree that there are sufficient opportunities in the first two to forgive this limitation.
The complex measures found in the second category above are often called “computer simulations”, since they use computation to simulate the effects of complex forces occurring in the physical world. These simulation techniques can generally be broken down into two categories based on how they are calculated in the computer:
Static methods are those which can be calculated deterministically in a single step. These are typically faster to calculate, and less computationally expensive, so they are great for generative design applications where you often need to evaluate hundreds if not thousands of individual designs before finding a set of optimal solutions. Since they rely on basic calculations, they can often be computed directly with basic formulas or are available through simple plugins for Grasshopper, and can thus be easily integrated within our automated generative design workflow.
Finite element analysis (FEA) describes the object through a set of small, discrete elements, and then uses those components to compute some measure on the object. This method is typical in structural analysis for calculating how forces flow through an object and identifying areas of high stress concentration. FEA-based structural analysis can be computed using 1-d (line), 2-d (surface mesh), or 3-d (volume mesh) finite elements depending on the type of simulation.
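As a minimal illustration of the finite element idea, the sketch below models an axial bar fixed at one end and pulled at the other using 1-d line elements: each element contributes a small stiffness block to a global matrix, which is then solved for the nodal displacements. All material and load values are illustrative:

```python
def bar_fea(n_elems, length, E, A, tip_force):
    """1-d finite element model of an axial bar fixed at one end.

    The bar is split into n_elems line elements; each contributes a 2x2
    stiffness block k * [[1, -1], [-1, 1]] to the global stiffness matrix,
    with k = E*A/Le for element length Le.
    """
    n_nodes = n_elems + 1
    Le = length / n_elems
    k = E * A / Le
    # Assemble the global stiffness matrix from the element blocks
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for e in range(n_elems):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    F = [0.0] * n_nodes
    F[-1] = tip_force
    # Apply the fixed support at node 0 by removing its row and column
    K = [row[1:] for row in K[1:]]
    F = F[1:]
    # Solve K u = F (simple Gaussian elimination; K is small and tridiagonal)
    n = len(F)
    for col in range(n):
        for r in range(col + 1, n):
            if K[r][col]:
                m = K[r][col] / K[col][col]
                for c in range(col, n):
                    K[r][c] -= m * K[col][c]
                F[r] -= m * F[col]
    u = [0.0] * n
    for i in reversed(range(n)):
        s = sum(K[i][j] * u[j] for j in range(i + 1, n))
        u[i] = (F[i] - s) / K[i][i]
    return [0.0] + u  # nodal displacements, fixed node first
```

For a uniform bar this reproduces the closed-form tip displacement FL/EA, which makes the assembly easy to check before trusting it on geometries with no closed-form answer.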
Element-based static analysis can also be used to calculate environmental measures, such as shading and daylighting. In this case, the elements are typically 2-d surface mesh faces which compute how much light hits them over the course of some period of time.
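A toy version of such an element-based daylight measure might sum, for each mesh-face normal, the cosine of the sun's incidence angle over a set of sampled sun positions. The vectors below and the convention that y points north are assumptions for the example, and a real tool would also shadow-test each face:

```python
import math

def face_irradiance(normals, sun_dirs):
    """Sum, per mesh face, how much direct light hits it across sun positions.

    normals: one unit normal per mesh face; sun_dirs: unit vectors pointing
    *toward* the sun for each sampled time step. Occlusion is ignored in
    this sketch; real daylight tools also test each face for shadowing.
    """
    totals = []
    for n in normals:
        total = 0.0
        for s in sun_dirs:
            # Cosine of the incidence angle; faces turned away receive nothing
            total += max(0.0, n[0] * s[0] + n[1] * s[1] + n[2] * s[2])
        totals.append(total)
    return totals

# Two faces: a horizontal roof and a north-facing wall (assuming +y = north),
# with the sun due south at 45 degrees elevation for a single time step
up = (0.0, 0.0, 1.0)
north = (0.0, 1.0, 0.0)
sun = (0.0, -math.cos(math.radians(45)), math.sin(math.radians(45)))
totals = face_irradiance([up, north], [sun])
```

Summing over sun positions sampled across a whole year turns this per-face quantity into the kind of annual daylight measure described above.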
Ray-based methods calculate measures based on rays projected from sources. One example of a ray-based method is view analysis, which is typically calculated using an isovist, the region of space directly visible from a given vantage point.
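A minimal 2-d isovist sketch casts rays from a viewpoint against walls stored as line segments and measures the area of the resulting visibility polygon. The room geometry here is an assumed example:

```python
import math

def ray_segment_hit(origin, direction, p, q):
    """Distance along the ray to segment p-q, or None if the ray misses it."""
    ox, oy = origin
    dx, dy = direction
    ex, ey = q[0] - p[0], q[1] - p[1]
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None  # ray is parallel to the segment
    t = ((p[0] - ox) * ey - (p[1] - oy) * ex) / denom  # distance along the ray
    u = ((p[0] - ox) * dy - (p[1] - oy) * dx) / denom  # position along the segment
    return t if t > 0 and 0 <= u <= 1 else None

def isovist_area(origin, walls, n_rays=360):
    """Approximate the visible area around a point by casting rays."""
    pts = []
    for i in range(n_rays):
        a = 2 * math.pi * i / n_rays
        d = (math.cos(a), math.sin(a))
        hits = [h for w in walls for h in [ray_segment_hit(origin, d, *w)] if h]
        r = min(hits)  # nearest wall blocks the view along this ray
        pts.append((origin[0] + r * d[0], origin[1] + r * d[1]))
    # Shoelace formula on the visibility polygon
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

# A 10 x 10 square room: the isovist from its center covers the whole room
room = [((0, 0), (10, 0)), ((10, 0), (10, 10)),
        ((10, 10), (0, 10)), ((0, 10), (0, 0))]
area = isovist_area((5.0, 5.0), room)
```

Adding interior walls to the segment list carves shadows out of the polygon, which is how the measure starts to distinguish open layouts from enclosed ones.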
Graph-based methods use graphs or network structures to calculate measures about a space. Graphs can be used to calculate things such as routing, travel distances, adjacency, or clustering. Graph analysis is a dominant tool in a set of techniques for spatial analysis called Space Syntax.
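A sketch of a graph-based travel-distance measure, using Dijkstra's algorithm on a hypothetical room-adjacency graph (the plan and edge weights are invented for illustration):

```python
import heapq

# A hypothetical floor plan as a weighted adjacency graph: nodes are rooms,
# edge weights are walking distances between connected rooms (illustrative)
plan = {
    "entry":    {"lobby": 4},
    "lobby":    {"entry": 4, "corridor": 6, "cafe": 5},
    "corridor": {"lobby": 6, "office": 3, "stairs": 2},
    "cafe":     {"lobby": 5},
    "office":   {"corridor": 3},
    "stairs":   {"corridor": 2},
}

def travel_distances(graph, start):
    """Dijkstra's shortest-path distances from one room to all others."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return dist
```

Aggregates of these distances (say, the mean walk from the entry to every room) make simple scalar measures that an optimizer can act on.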
Dynamic methods are used for measures that cannot be calculated all at once, but rely on many steps of sequential calculation to compute. They typically work by creating an environment of ‘agents’ and then simulating how they interact with each other and their environment until they reach an equilibrium (also called convergence).
Because they require a sequence of steps, these measures typically take longer to calculate, and are more computationally expensive. Due to their complexity, such simulations also usually require specialized standalone software, and can thus be difficult to incorporate into an automated generative design workflow. For these reasons they are usually difficult to use with generative design, although there has been some interesting recent research in using machine learning to model the results of complex dynamic simulations so that they can be run faster within a generative design framework.
Physics-based solvers calculate the equilibrium states for a series of elements with dynamic properties relative to forces operating within their environment. These methods are typically used for techniques of ‘form finding’ or ‘relaxation’, where a form is modeled as a series of elastic members which are subject to internal and external forces, and then the form is allowed to move until the forces in all members are equalized. This tends to result in a form which is more efficient at distributing structural loads.
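The relaxation loop can be sketched as a simple dynamic-relaxation solver: a cable of spring-connected nodes hung between two fixed supports is stepped forward under a vertical load with damped velocities until the forces settle. All stiffness, load, and step values below are illustrative:

```python
def relax_cable(n_nodes, span, rest_len, stiffness, load,
                steps=5000, dt=0.01, damping=0.98):
    """Dynamic relaxation of a cable hung between two fixed supports.

    Nodes are connected by elastic springs and pulled down by a vertical
    load; positions are updated with damped velocities until the member
    forces settle into equilibrium (a catenary-like sag).
    """
    xs = [span * i / (n_nodes - 1) for i in range(n_nodes)]
    ys = [0.0] * n_nodes
    vx = [0.0] * n_nodes
    vy = [0.0] * n_nodes
    for _ in range(steps):
        fx = [0.0] * n_nodes
        fy = [-load] * n_nodes  # vertical load on every node
        for i in range(n_nodes - 1):
            dx, dy = xs[i + 1] - xs[i], ys[i + 1] - ys[i]
            L = (dx * dx + dy * dy) ** 0.5
            f = stiffness * (L - rest_len)  # spring force along the member
            fx[i] += f * dx / L
            fy[i] += f * dy / L
            fx[i + 1] -= f * dx / L
            fy[i + 1] -= f * dy / L
        for i in range(1, n_nodes - 1):  # the two end nodes stay fixed
            vx[i] = (vx[i] + fx[i] * dt) * damping
            vy[i] = (vy[i] + fy[i] * dt) * damping
            xs[i] += vx[i] * dt
            ys[i] += vy[i] * dt
    return xs, ys
```

Running something like `relax_cable(11, 10.0, 1.0, 100.0, 1.0)` yields a symmetric sagging curve; inverted, such relaxed forms are the classic ‘form finding’ result, efficient at carrying load in pure tension or compression.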
Computational fluid dynamics (CFD) calculates the movement of fluids through a 3d environment. This type of analysis is very difficult and computationally expensive, since fluid dynamics are governed by complex dynamic behaviors, and must be calculated step by step over some period of time until convergence. The properties of the fluid can be calibrated so the same method can be used to study air flow, water currents, and the dissipation of heat.
Crowd simulation computes the behavior of human agents in space over a period of time. Similar to other dynamic methods, it works by populating the space with autonomous agents who are programmed with behaviors that mimic how humans perceive and navigate within a space. Such simulations can be extremely useful for architectural and urban design applications, particularly for studying large public spaces such as shopping malls, airports, and public squares.
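In sketch form, the core loop of such a simulation is just agents repeatedly choosing a direction from simple rules. Everything below (speeds, gaps, behaviors) is a toy assumption, standing in for the much richer perception and navigation models in specialized crowd-simulation software:

```python
import math

def simulate_crowd(starts, exit_pos, speed=0.1, min_gap=0.5,
                   exit_radius=0.5, max_steps=2000):
    """Toy agent-based crowd sketch (all parameters are illustrative).

    Agents walk toward an exit while steering away from neighbours that get
    too close; returns the number of steps until the space is evacuated.
    """
    positions = [list(p) for p in starts]
    done = [False] * len(positions)
    for step in range(max_steps):
        for i, p in enumerate(positions):
            if done[i]:
                continue
            dx, dy = exit_pos[0] - p[0], exit_pos[1] - p[1]
            dist = math.hypot(dx, dy)
            if dist < exit_radius:
                done[i] = True  # agent has reached the exit
                continue
            mx, my = dx / dist, dy / dist  # desired direction: straight to exit
            for j, q in enumerate(positions):
                if j != i and not done[j]:
                    rx, ry = p[0] - q[0], p[1] - q[1]
                    r = math.hypot(rx, ry)
                    if 0 < r < min_gap:  # steer away from close neighbours
                        mx += rx / r
                        my += ry / r
            m = math.hypot(mx, my) or 1.0
            p[0] += speed * mx / m
            p[1] += speed * my / m
        if all(done):
            return step + 1  # steps taken until the space is evacuated
    return max_steps
```

The returned evacuation time is the kind of single scalar that such a dynamic simulation ultimately feeds back to the optimizer, however complex the behavior that produced it.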
These simulation methods provide ways of measuring how a design performs according to complex real-world conditions. Not only can these measures be used to understand individual designs, but they can also be used to guide an optimization algorithm in automatically searching a space of possible designs in order to find the best performing options.