Physical Design + Machine Learning

How could the world of physical design benefit from techniques introduced by machine learning?

Wes Thomas
Backspace
5 min read · Nov 7, 2018

The proliferation of machine learning tools has given businesses and hobbyists from a wide array of industries a means to put their data to work. Dusty old database tables that once provided surface-level insights via algorithms only as smart as your best data scientist can now be restructured into brain-like prediction and decision-making models.

An informal industry-wide agreement to share high-quality open source tools and educational material, combined with readily accessible proprietary cloud platforms, has driven the growth of interest and action around machine learning. The greatest beneficiaries are those who directly work with or collect large amounts of data that fit existing models and training sets, or those who seek insights that can be uncovered through more unstructured approaches.

Often they operate in information-centric industries — B2B startups, big-name consumer tech companies, finance, telecom, the scientific community. But what about professionals who work in the physical design world, or in environments that are heavily reliant on technology but haven’t yet developed basic sensibilities around harnessing data with contemporary methods — designers, architects, engineers, artists?

Models from the Thingiverse Thingi10K 3D dataset, used by researchers in a variety of machine learning and computer vision applications.

If architects and other physical designers could restructure the information generated by their work, or incorporate external neural models into their creative process, what kind of questions would they want answered? What kind of predictions would influence creative decision making and design iteration? Where along the spectrum of translucent collaborator to opaque black box would artists and engineers prefer to situate the technology?

Generative Techniques

For quite a while now, designers have used automation and generative techniques to augment the design process. Clever algorithms can open up a world of iterative possibility and can drive what would otherwise be labor-intensive manual processes: form finding, parameterizing physical constraints, determining buildability, and creating construction specifications.

In generative design, the author’s own creativity still plays an important role, and a designer can engage directly in the design of such algorithms. But often the focus is on constrained generation from scratch. Inputs like human needs, material constraints, budget, and aesthetic parameters aren’t necessarily passing through a tangled machine brain, but through one-off algorithms that seek to automate tiresome processes within a design space. You can learn from and be surprised by your generative algorithm, but it is often not learning over time — its intelligence more or less flatlines.
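To make that distinction concrete, here is a bare-bones sketch of such a one-off routine: a random search over a hypothetical parametric space, scored by a hand-written fitness heuristic. The parameter names and the heuristic are invented for illustration; the point is that the rule never gets better, no matter how many candidates it evaluates.

```python
# A one-off generative routine: random search plus a fixed, hand-written
# fitness heuristic. All names and constraints here are hypothetical.
import random

random.seed(42)

def generate_candidate():
    """Sample one design from a simple parametric space."""
    return {
        "height": random.uniform(3.0, 30.0),        # meters
        "footprint": random.uniform(50.0, 500.0),   # square meters
        "glazing": random.uniform(0.1, 0.9),        # facade glazing ratio
    }

def fitness(design):
    """Hand-tuned rule: reward floor area, penalize excessive glazing."""
    floors = design["height"] / 3.0
    area = design["footprint"] * floors
    glare_penalty = max(0.0, design["glazing"] - 0.6) * area
    return area - glare_penalty

# Evaluate a batch of candidates; the scoring logic itself never changes.
candidates = [generate_candidate() for _ in range(10_000)]
best = max(candidates, key=fitness)
print({k: round(v, 1) for k, v in best.items()})
```

However many times this runs, the scoring logic only improves when a human rewrites it, which is exactly the ceiling the next section pushes against.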

A few examples from the Princeton ModelNet, a categorized collection of 127K+ everyday objects.

Machine Learning Powered Form Finding

One distinction between classic generative physical design and design driven by machine learning could be design output that gets smarter over time with better data, as opposed to design output that improves only through incrementally better design heuristics. Data representing past design solutions could feed a neural net that in turn improves future work. In the realm of physical optimization, for example, deep learning methods can support performance criteria in a design process.
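As one hypothetical sketch of that idea, the snippet below trains a small neural network as a surrogate performance model on synthetic stand-in data (real inputs might be spans, column spacings, or glazing ratios pulled from past projects, with measured performance as the target), then uses it to rank fresh candidates without running a full simulation.

```python
# A learned surrogate: a small neural net maps design parameters to a
# performance score, trained on (synthetic) records of past designs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in history: 500 past designs with three normalized parameters
# (imagine span, column spacing, glazing ratio) and a measured score.
X_past = rng.uniform(0.0, 1.0, size=(500, 3))
y_past = 1.0 - ((X_past - 0.5) ** 2).sum(axis=1)  # placeholder for real measurements

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_past, y_past)

# Generate new candidates and let the learned model rank them.
candidates = rng.uniform(0.0, 1.0, size=(1000, 3))
scores = surrogate.predict(candidates)
top_five = candidates[np.argsort(scores)[::-1][:5]]
print(np.round(top_five, 2))
```

Unlike the fixed heuristic above, this model improves every time new project data is folded into its training set.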

With their foundation in statistics, machine learning techniques are well suited to answering questions like “which design best meets these concrete criteria, given this neural model and set of inputs?” What about questions that are less objective and more focused on aesthetics? When targeting form exploration, neural nets can blend examples in interesting ways, pointing the designer in a direction they may not have considered.
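A toy version of that blending idea, assuming flattened design vectors rather than real meshes: train a small autoencoder on example forms, then interpolate between the latent codes of two designs to produce hybrids neither author drew. The data here is random noise purely to keep the sketch self-contained.

```python
# Latent-space blending with a tiny autoencoder. The "designs" are random
# 64-dimensional vectors standing in for flattened forms (e.g., voxel grids).
import torch
import torch.nn as nn

torch.manual_seed(0)
designs = torch.rand(200, 64)  # hypothetical training set of 200 forms

class AutoEncoder(nn.Module):
    def __init__(self, dim=64, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(500):  # brief reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(designs), designs)
    loss.backward()
    opt.step()

# Walk between two designs in latent space to generate hybrid forms.
with torch.no_grad():
    za, zb = model.encoder(designs[0]), model.encoder(designs[1])
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        hybrid = model.decoder((1 - t) * za + t * zb)
        print(f"t={t:.2f}  blended vector mean={hybrid.mean().item():.3f}")
```

With a real mesh or voxel dataset in place of the noise, the decoded interpolations become the directions the designer may not have considered.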

When considering both performance-based and aesthetic advances in machine learning, we can piece together one version of the future where our design tools and solution spaces grow with us, creating a feedback cycle centered around the restructuring of the data behind design. Formally codifying design knowledge for use with neural networks could enable next-generation tools that change with every input, every past success and failure.

Will future architects and product designers mentor or collaborate with their modeling software, shaping it over time into a fractal of their creative sensibilities? Will the Shanghai office wake up to slightly better neural models while Los Angeles trains them through the night? How could rigid heuristics and one-off generative algorithms transform into a self-organizing background intelligence?

Ultimately, the integration of machine learning techniques into the physical design professions will come from the curiosity of designers, combined with fruitful collaboration with computer scientists and other researchers. Certain design professions are already amassing relatively well-structured, pre-classified design data — think BIM for architects, and engineering-focused suites like SolidWorks or CATIA. It’s not too early to start imagining what the future has in store for the creative mobilization of today’s design data.

Wes Thomas is a creative technologist and architectural designer at Sosolimited.
