Bridging Data Science and Architectural Practice

KPF Urban Interface
14 min read · Jan 18, 2022


Emergent applications of cloud computing in architectural practice and academia

Rhino Compute Workflows

David Andres Leon — Head of Computational Design IAAC

Brandon Pachuca — Urban Data Scientist + Web Developer

Architecture and computer science are becoming increasingly intertwined. The data science industry has found a promising niche, leveraging big data, machine learning, and web development to create viable products that automate workflows and bring new insights to market. Architecture can benefit from these trends, adapting product and web development to support advanced simulation and visualization practices.

At first, this phenomenon invited a few daring architects to step into the realm of computer programming, shifting the architecture industry’s long-established curriculum toward a hybrid profile of a designer with programming skills: the so-called computational designer. At the heart of this esoteric trade is an attempt to shift the focus away from merely designing buildings toward designing the tools used for their conception.

Computer scientists stepped into specialized departments of architectural firms to develop automations that enhance the design practice, creating tools to analyze building performance and craft user-friendly interfaces. These allow designers to make better-informed decisions in the creative process and unveil aspects of design that are often hidden behind two-dimensional plans.

Among other software, McNeel’s Rhinoceros has become a key player in this interdisciplinary venture. First, through the Grasshopper plugin, the software allows easy access to programming through a visual interface, lowering the entry barrier for non-experts and providing relatively simple access to a vast ecosystem of plugins created by a thriving community of developers. The plugins, most of them free to use, encompass a wide range of know-how from different trades, formerly siloed in various other software.

Second, and perhaps more importantly, by providing well-documented access to its development tools, McNeel has made the software highly customizable: not only by exposing most of its functionality through the cross-platform SDK RhinoCommon, but also by enabling innovative workflows like Rhino.Inside, which allows Rhino to be embedded in other software.

One of the most interesting recent developments in the Rhino ecosystem is the Rhino Compute project. Built on top of Rhino 7, it exposes most of the software’s SDK functionality through a REST API, allowing Rhino to run as a service for geometry calculations, either locally on a client computer or on a cloud-based Windows server. Simply put, Rhino Compute allows other applications to harness the power of Rhino and Grasshopper through a programming interface, without needing Rhino installed.
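To make the REST idea concrete, here is a minimal sketch of calling such a service with only the Python standard library. The host, port, and endpoint path are assumptions for a hypothetical local instance; real Compute endpoints mirror the RhinoCommon namespaces:

```python
import json
import urllib.request

# Hypothetical local Rhino Compute instance; adjust host and port to your setup.
COMPUTE_URL = "http://localhost:8081/"

def build_request(endpoint, args):
    """Build an HTTP POST for a Rhino Compute endpoint.

    Compute endpoints accept a JSON array of arguments and return a
    JSON-encoded result. The endpoint path used below is illustrative.
    """
    return urllib.request.Request(
        COMPUTE_URL + endpoint,
        data=json.dumps(args).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With a Compute instance running, urllib.request.urlopen(req) would
# return the JSON-encoded result of the geometry calculation.
req = build_request("rhino/geometry/mesh/createfrombrep", [{"example": "brep"}])
```

In practice most developers use McNeel’s client libraries rather than raw HTTP, but the request shape above is what travels over the wire.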

Rhino Compute as a Cloud web service

Rhino Compute helps remove a barrier to entry by abstracting complex workflows away from typical architectural designers, while increasing extensibility for computational designers. Used as a geometry-solver back end, it enables user-friendly front-end applications accessible via a web browser, letting architects execute complex parametric workflows with minimal technical requirements.

Moreover, through the recent inclusion of the Hops component in Grasshopper for Rhino 7, Rhino Compute can serve as a link between everything the Grasshopper plugin ecosystem has to offer and any other platform able to speak the language of Rhino geometry, whether through RhinoCommon, through the multi-language, Rhino-independent open-source library rhino3dm (built on openNURBS), or through a CPython Flask web server.

Such is the case with the workflow McNeel described on their forum a few months ago, showcasing the connection between Hops and CPython through the popular Flask framework as middleware: an HTTP server transfers data back and forth between the two platforms, both locally and remotely.

This workflow has suddenly opened the door for the powerful CPython tools widely used in data science, machine learning, and network analysis to be used alongside Grasshopper: libraries such as NumPy, SciPy, TensorFlow, Keras, scikit-learn, OpenCV, and NetworkX, which formerly could only be reached from Grasshopper through complex workarounds.

In this article we would like to showcase a few of the workflows this new technology can offer design practice, with two case studies: one from the architectural practice Kohn Pedersen Fox (KPF), and the other from an academic perspective, as taught at the Institute for Advanced Architecture of Catalonia (IAAC). Both showcase early adoption of this technology and how it is helping to bridge the gap between emergent technologies and architectural practice.

Compute Workflows

The following workflows enable geometry calculation and solving on either a local or cloud-based Windows Rhino Compute instance. Each workflow leverages a different stack of technologies, which can often be interchanged or even combined depending on the desired outcome. For instance, a designer might need to run a computationally expensive workflow, such as a complex facade calculation. Traditionally, if the designer did not know how to use Grasshopper, they would contact a computational specialist in the office to write the workflow, copy the script to their computer, and make sure their machine had all the required plugins.

Comparison between a Grasshopper script and a Rhino Plugin interface using Rhino Compute

With Rhino Compute, this can now be solved by the specialist writing the workflow and adding it to an office web service available to anyone in the office. Hardware options include bolstering the office’s on-premises server or leveraging a cloud server with additional memory or processing power, without the need to purchase high-end computers for each individual designer, effectively offloading both the administrative management and the hardware requirements of running geometry workloads.

Types of users profiles for the workflow diagram
Rhino Compute workflows

The introduction of cloud services enables an entire office to solve calculations, with web services both acting as an authentication layer to ensure privacy and performing any additional data transformations, such as converting geometry to a WebGL-friendly format or pushing results into a machine learning data pipeline. The many possible combinations of workflows allow immense flexibility based on available skill sets and cloud platforms.

We will briefly discuss each workflow and provide links to get started.

A + B. Web Server→ Rhino Compute

The specialist team can write geometry-based functionality with rhino3dm.py and solve calculations using compute.rhino3d.py. A Flask web server makes that functionality available via a web API endpoint, centralizing logic and scripts on the server, while Rhino Compute can be hosted either locally on a user’s computer (localhost) or on a cloud-based Windows virtual machine. An example of the local workflow can be found on the McNeel Hops developer page. Alternatively, Node.js can serve as the web server, paired with rhino3dm.js and compute.rhino3d.js. Integrating WebGL libraries like Three.js enables visualization of geometry and solutions from Rhino Compute, with the Node.js server acting as a communication layer between the client’s web browser and Rhino Compute.

C. Hops → Rhino Compute

Hops enables a specialist to call remote Grasshopper scripts and external functions from within the Grasshopper environment, giving access to CPython and Python 3 libraries such as scikit-learn for machine learning and NetworkX for graph-based solving. Rhino Compute can be accessed either locally or remotely through Hops, based on the URL parameter of the Hops component.

D. Jupyter Notebook → Rhino Compute

The data science field is growing at an exponential rate as datasets on the urban context continue to emerge. Jupyter notebooks are a popular medium for sharing and discussing data science workflows, from extracting walking distances to gathering lot-level information in New York City. Spatial data can now be moved between a Jupyter notebook (Python 3) and Rhino using rhino3dm.py, inviting data scientists to interact with Rhino like any other Python library.

E + F. Rhino Compute as a Service

Since Rhino Compute can be used as a back-end web service to solve geometry calculations, the complexity can be abstracted away from the designer through an intuitive interface, either on the web or in a Rhino plugin. This lets a user input a series of settings and then run a Grasshopper script in the cloud, removing local hardware limitations and allowing specialist teams to manage scripts and updates effectively. There is no more need to pass along a Grasshopper script when you can deliver the workflow directly to the user through an interface, letting the designer focus on design decisions rather than on whether they have the correct plugins installed or sufficient hardware. Grasshopper scripts can be combined with machine learning and front-end web visualization to continuously add functionality.

Case Study #1: Rhino Compute in Practice

Collaboration using Rhino Compute

KPF leveraged Rhino Compute as far back as fall 2019 on a competition project in Vancouver. The design team needed a way to iterate quickly through design options and present to both internal and external teams. Each team, however, valued a different metric, so the design team needed to test options against a variety of analyses: views, thermal comfort, shadows, and more. The KPFui team created a Rhino plugin connecting the Windows cloud server running Rhino Compute with Scout, KPF’s front-end web interface. The workflow delivered faster insight for design teams and let a single KPF computational designer support more projects across the global firm.

Computational efficiency

Centralization

KPFui has many Grasshopper scripts, which in a typical workflow would be sent to a designer to execute on their local computer. This required the designer to understand Grasshopper and navigate a Grasshopper canvas for settings and outputs, which proved too difficult for tool adoption throughout the office; in the end, few teams actually used the tools provided to them. Instead, the Rhino Compute plugin enabled the Grasshopper script to run on the cloud server with its settings exposed to the designer through the plugin interface. This lowered the designer’s barrier to entry for executing complex Grasshopper scripts, creating an iterative workflow for changing design options and testing them against various metrics.

Bentall Centre’s Computational Design model in KPF’s Scout web platform

In 2019, KPF leveraged Rhino Compute to support Hudson Pacific Properties’ repositioning of Bentall Centre in downtown Vancouver. The project introduced the opportunity to rethink one of Vancouver’s most central urban precincts as the city’s most exciting destination, and as a place that is more than the sum of its individual parts. The site had unused buildable area to be placed in relation to four existing towers built in the late 1960s and early 1970s. It was important that this area be located thoughtfully: it should not negatively impact sun and view lines from the existing Bentall towers, and, urbanistically, it should not take over existing public plazas. The site contained a complex set of constraints, such as transit lines, sloped grading, and protected sight-line requirements from the city. The design team needed to weigh tower options for unobstructed views from each configuration and for direct sunlight on the ground plane in order to best locate a sun-filled plaza. KPF leveraged the Rhino plugin built on Rhino Compute, which enabled the design team to iterate quickly through design options without needing a Grasshopper expert. The Grasshopper scripts were stored on the cloud server and solved on the Rhino Compute Windows server, enabling the architects to glean insights rapidly during the design process, increasing their ability to make informed design decisions and present them to the client for approval.

Unobstructed View Bentall Centre analysis to best locate additional FAR on site
direct sunlight analysis on the site
Early draft of KPFui’s Rhino plugin leveraging Rhino Compute in 2019

Case Study #2: Rhino Compute in Academia

IAAC introduced Rhino Compute to the students of the Master in Advanced Computation for Architecture & Design within the Digital Tools for Cloud-based Data Management seminar of the BIM and Smart Construction module in early 2021. During the course, students learned how to leverage their Grasshopper skills by turning their definitions into functions accessible in the browser through Rhino Compute. Later, within the Geometrical Optimization course of the Artificial Intelligence in Architecture module, students were instructed in the use of Hops as a means to interoperate between their machine learning models and Grasshopper. This allowed them to accelerate their workflows by transforming geometry to data and vice versa on their local machines, eventually allowing for user input in the browser through Compute.

Design Space Exploration with Variational Autoencoders is a thesis project developed within the context of the MaCAD program. The project focuses on the application of machine learning to create a diverse design catalogue accounting for structural performance and the creation of a web-based user interface that enables users to explore these options in early stages of the design process.

The main software environment is Rhinoceros and Grasshopper, with the Karamba plug-in for structural analysis and a Grasshopper Python script to automate dataset generation. A machine learning model was then trained in a Google Colaboratory notebook using the TensorFlow and Keras libraries. The Hops plug-in for Grasshopper linked the Python machine learning libraries with Grasshopper for geometry recreation. Finally, the whole workflow was brought online with Node.js and Rhino Compute through an AppServer, which enabled hosting the Grasshopper script in a web application.

Computational workflow

The workflow starts by creating a parametric model for a desired geometry, which is then analyzed for a specific performance criterion, for instance structural or environmental. A dataset of samples is created by randomly generating parameters and running the analysis for each resulting geometry. Using this dataset, a Variational Autoencoder (VAE) model is trained. The decoder recreates geometries from the latent space, and a performance map enables exploring the generated samples. By bringing the inputs and results to a web interface, the outcome is a user-friendly platform for interactive design space exploration.

Dataset generation

As a case study, a canopy truss structure composed of a column and a roof was investigated. The column was created with variable radii at different heights, and the roof with variable control points and lateral shift. A dataset was then created by randomly generating the parameters that describe the geometry and running a structural simulation for each sample. The calculations were performed with the Karamba plug-in for Grasshopper. Each sample had the same number of elements, and for each, the cross-section optimizer found the optimal cross section for every member. The results, such as the overall mass of the structure, maximum displacement, and elastic energy, were saved in the dataset together with the parameters describing each sample’s geometry.
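A stripped-down sketch of that sampling loop in plain Python might look as follows; the parameter names and ranges are illustrative, and the `analyze` function is a stand-in for the actual Karamba solve:

```python
import csv
import random

# Parameter ranges for the truss geometry (illustrative values).
RANGES = {
    "column_radius": (0.5, 2.0),
    "roof_shift": (0.0, 4.0),
    "control_height": (2.0, 8.0),
}

def analyze(params):
    """Stand-in for the Karamba structural solve.

    Returns mass, max displacement, and elastic energy; in the real
    pipeline these come from running the Grasshopper/Karamba model.
    """
    mass = 100 * params["column_radius"] + 10 * params["control_height"]
    displacement = params["roof_shift"] / (1 + params["column_radius"])
    energy = 0.5 * mass * displacement
    return {"mass": mass, "displacement": displacement, "energy": energy}

def generate_dataset(n_samples, seed=42):
    """Randomly sample parameters and record geometry + performance per sample."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n_samples):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
        rows.append({**params, **analyze(params)})
    return rows

# Save parameters and results side by side, as described for the truss study.
dataset = generate_dataset(200)
with open("truss_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(dataset[0].keys()))
    writer.writeheader()
    writer.writerows(dataset)
```

The resulting table, with parameters and performance metrics per row, is exactly the shape of data a VAE training notebook expects to consume.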

Exploration of the Latent Space

The collected data was then fed into a conditional Variational Autoencoder, which learns a probability distribution over a latent space, steering the generation of samples toward a specific area of interest; in this case study, structural performance. After training, geometries are reconstructed by linking the Python model to Grasshopper using Hops. The Python file containing the model runs inside Grasshopper through Hops and is operated through slider inputs, enabling interactive design space exploration within that environment. The outputs of the Hops component are the parameters generated by the decoder, which are used to recreate geometries with the same Grasshopper definition that generated the dataset originally.
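The mechanics of that exploration step can be pictured with a toy, pure-Python sketch: interpolate between two latent vectors and decode each sample back into shape parameters. The two-dimensional latent space and the linear decoder below are stand-ins for the trained VAE decoder, purely for illustration:

```python
def decode(z):
    """Stand-in decoder mapping a 2D latent vector to 3 shape parameters.

    In the actual project this role is played by the trained VAE decoder.
    """
    z1, z2 = z
    return [5.0 + 2.0 * z1,   # e.g. column radius
            1.0 + 0.5 * z2,   # e.g. roof lateral shift
            3.0 + z1 - z2]    # e.g. control point height

def lerp(a, b, t):
    """Linearly interpolate between two latent vectors."""
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def explore(z_start, z_end, steps=5):
    """Walk a straight line through the latent space, decoding each sample."""
    return [decode(lerp(z_start, z_end, i / (steps - 1))) for i in range(steps)]

# Each entry is a parameter set the original Grasshopper definition can rebuild.
params = explore([-1.0, 0.0], [1.0, 1.0])
```

In the real workflow, the sliders on the Hops component play the role of `z_start`/`z_end`, and the decoded parameters drive the Grasshopper definition directly.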

Web interface for Design Exploration

In order to extend this tool beyond the scope of Rhino and into the browser, the whole workflow is deployed as a web app, letting users interact with the samples in the design space through a user-friendly browser interface.

Outlook

As the case studies show, the introduction of these technologies helps bridge design and computation, allowing for compelling possibilities.

To begin with, Rhino Compute enables design projects to have accessible interfaces that empower users to easily navigate the design space, explore parametric variations and reveal layers of relevant data that otherwise would be challenging, if not impossible, to display in traditional formats. Not only does this render a more comprehensive visualization of the many dimensions that comprise a design project, but it encourages a more participatory design process. Furthermore, it releases this experience from any particular software knowledge by bringing it to what arguably is nowadays the most ubiquitous platform — the web browser.

In parallel, these technologies also enable commonly used design (CAD) software to interact with other platforms, which can be of great interest for informing the design process. Whether by lowering the entry barrier for designers to use advanced computational workflows such as machine learning, or by providing them with ready-made components created by expert developers, they help expand the vision of the designer and aid in making better-informed decisions throughout the design process.

If we accept that including these technologies in design practice can only enhance the designer’s view, and that such an expansion of the architectural trade has a positive impact on the resulting projects, then architectural practice can greatly benefit from investing in employees with these skill sets. The same can be said for academia: teaching these technologies endows future professionals with the skills currently required to tackle such computational challenges.

However, implementing such technologies comes with a cost that design practices often think twice about: long-established design habits must be rethought to fit the new workflows, and it can take considerable development time for these tools to become widespread and useful. As with any emerging technology, depending on the scale of a project, or the number of projects that can benefit, the payoff may not be appealing enough to interrupt the tightly orchestrated design logistics that architecture requires.

Whether that is the case or not, what is true is that computational workflows are evolving rapidly and increasingly being introduced into architectural practice, which invites us to turn our attention back to the fundamental question: What is the role of the designer?

How can the scope of the architect continue to evolve, while allowing other specialties to develop tools that aid the design process? Or should design be pushed to break free from the tools that currently frame it, by endowing designers with the skills required for the creation of its own toolset, in order to allow for more innovative processes?

What is certain is that we are currently in the midst of an enticing conversation, one in which creativity and technology entwine in order to expand our vision of how we conceive the cities and communities of the future.

Acknowledgements

  • Design Space Exploration with Variational Autoencoders is a thesis project by Aleksandra Jastrzebska developed within the context of the Master for Advanced Computation for Architecture and Design and under the supervision of David Andres Leon.
  • Steve Baer, Luis Fraguada, Will Pearson, and the McNeel team for their support in the development of these case studies and the revision of this article.
  • Darina Zlateva — Director at KPF for all the support in beta testing and giving valuable feedback.
  • Kohn Pedersen Fox Associates for the continued support to research new technologies and workflows to improve the design practice.


KPF Urban Interface

We use data and computation to investigate the foundations of city building and address rising urban challenges.