# EMERGENT FEATURES: Architectural Geometry
Part Three of My Series on Architectural Form in the Age of Artificial Intelligence
The principles of architectural form described previously (in Parts 1 and 2 of this series) originate in the idea that meaning is determined contextually rather than by intrinsic features. In the philosophy of science, this approach is discussed under the terms “ontic” and “epistemic” structural realism. Structural realism, although the idea had existed for years, first gained popularity in a philosophical debate about quantum-theoretic effects in physics. J. H. Poincaré mentions this idea in his 1905 publication “Science and Hypothesis”, which is quoted at the beginning of Part 1. There he posits that we cannot access the intrinsic meaning of objects directly. Hence, we are forced to substitute these objects with images, which can be interpreted as descriptions of the objects. The performance indicators and symbolic interpretations discussed in the previous parts are examples of such descriptions. Poincaré goes on to explain the core idea of structural realism, writing that “the true relations between real objects are the only reality we can attain, and the sole condition is that the same relations shall exist between these objects as between the images we are forced to put in their place”. This means that reality is accessible only through the topological order of descriptors. While this rules out all currently existing intrinsic approaches in computer-aided architectural design, it also presents a way out of the functionalist paradigm.

By using statistical methods and emerging descriptors, semantic labels can be approximated. A famous implementation of this idea is the n-gram model in natural language processing, which allows for the automatic translation or correction of text phrases. Another example is image recognition, where convolution kernels are currently the most successful technique for detecting and recognising objects in pictures. Both implementations focus on merely counting local neighbourhoods and their relations instead of trying to describe the object globally. Moving away from global functions for calculating meaningful descriptions and instead computing them statistically also defuses the functionalist drift of computational methods, especially when the statistics are based not solely on single object compositions but on large, comprehensive datasets of composed objects.
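To make the idea of counting local neighbourhoods concrete, here is a minimal sketch of bigram counting in Python. The toy corpus, the `ngram_counts` helper, and the frequency printout are purely illustrative assumptions, not a reference to any particular NLP library; the point is only that the description emerges from the frequencies of local relations rather than from any intrinsic definition of the tokens.

```python
from collections import Counter

def ngram_counts(tokens, n=2):
    """Count every contiguous n-token neighbourhood in a sequence.

    Meaning is approximated from the frequency of local relations alone,
    never from an intrinsic definition of any single token.
    """
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Toy corpus: the model only "knows" which tokens tend to neighbour each other.
corpus = "form follows form follows function follows form".split()
bigrams = ngram_counts(corpus, n=2)

# Relative frequencies of neighbourhoods act as the statistical descriptor.
total = sum(bigrams.values())
for pair, count in bigrams.most_common():
    print(pair, round(count / total, 2))
```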

It would seem that J. H. Poincaré anticipated the effectiveness of modern machine learning applications over one hundred years ago. However, working with object topologies and deriving semantics statistically is, of course, computationally very expensive. The technical requirements regarding data storage and processing power have only been met recently, long after Poincaré’s death. In this very short time, powerful GPUs and appropriate software frameworks have already made it possible to outperform all former intrinsic approaches in almost every domain. Over the last few years, this strategy has become increasingly popular in the field of computational geometry.
Theoretically, deriving meaning from context instead of trying to establish intrinsic semantics provides an alternative to the functionalist tradition within architecture. By focusing on frequent neighbourhoods, this strategy is potentially a way to holistically approximate features and culturally constructed meanings of form. If the assumptions of structural realism also hold for architecture, a topological approach to calculating emerging descriptors could provide a generalised approach to architecture retrieval. Thus, I propose a new paradigm to challenge the credo of functionalism: form follows form.
With this credo, the whole functionalist structure of parametric thinking in computer-aided architectural design can be challenged. It would be possible to create retrieval systems without the need to predefine performance simulations or catalogues of semantic tags. Such a system could thus react to the conceptual drift within architecture and serve as a substitute for a dynamic typology. It could then also drive further research in the field of evidence-based design in architecture, which would necessarily challenge the heavily performance-oriented functionalism of parametric design. Without the bottleneck of intrinsic performance optimisation, computer-aided architectural design and planning methods in general could move on from the functionalist tradition. However, in order to do so, it is necessary to find a way of processing architecture-specific geometry according to the paradigm of structural realism. This means that emerging clusters of frequent neighbourhoods have to be constructable when machine learning techniques are applied to the dataset. Additionally, the geometric data themselves have to be processed in line with the way experts in the architecture domain have cultivated their description. Thus, certain architecture-specific geometric properties have to be handled natively by such a method. Furthermore, the differences between general geometric objects and architecture-specific geometric objects have to be carefully considered.
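As a rough illustration of what such a retrieval system might look like, the sketch below clusters models purely by their frequent local neighbourhoods and answers a query with other members of the same emergent cluster. The histogram array, its dimensions, the number of clusters, and the use of scikit-learn’s KMeans are all placeholder assumptions for the sake of the example, not the implementation described in this series.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: one row per building model, each row a histogram of
# local face-neighbourhood descriptors (e.g. quantised dihedral angles and
# edge-length ratios) rather than any predefined performance indicator.
rng = np.random.default_rng(0)
neighbourhood_histograms = rng.random((200, 32))  # 200 models, 32 descriptor bins

# Emerging clusters of frequent neighbourhoods stand in for semantic tags:
# models grouped here share contextual structure, not an assigned function.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(neighbourhood_histograms)

# A retrieval query simply returns other models from the query's emergent cluster.
query_index = 0
matches = np.flatnonzero(cluster_labels == cluster_labels[query_index])
print("models retrieved for the query:", matches[:10])
```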
This positions the description of geometry as a central issue for the task of non-functionalist architecture retrieval. The current problem with this approach is the focus on non-architecture-related geometry provided by other fields. This leads to descriptors that do not include the information necessary for subsequent architecture-specific processing. When this information is missing, architectonic phenomena, which are largely constituted by exactly this information, cannot be correctly derived from the geometric data. This establishes a close relationship between potential semantics and the underlying encoding or description. Even the best statistical methods cannot generate predictions on the basis of incomplete input spaces: this always leads to under-determination. Since the description is topologically oriented, the neighbourhood types of the described geometries have to be chosen with respect to an architectonic reception. Nevertheless, currently available methods do not address this issue and focus on more general descriptions instead. In particular, the key audiences in the gaming and animation industries are mainly invested in descriptors optimised for more organic geometries, such as animals and plants, or more complex geometric composites, such as means of transport or weaponry. Architectural geometry is far less complex than these. Due to the underlying production processes, architecture-specific geometries are composed of fewer edges and larger planar faces. In addition to the smaller face count of such polygon meshes, a significant number of the angles between these faces are multiples of ninety degrees, and many length ratios are in accordance with certain design rules. Taking these effects into account, the available datasets can hypothetically be compressed, and the statistical methods can be extended to work even with small datasets, where the number of objects is in the low tens of thousands.
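A hedged sketch of how one such architecture-specific property could enter a descriptor: the helper below measures how far the angle between two adjacent face normals deviates from the nearest multiple of ninety degrees and bins those deviations into a small histogram. The example normals, the bin edges, and the `right_angle_deviation` helper itself are illustrative assumptions rather than part of any existing method.

```python
import numpy as np

def right_angle_deviation(normal_a, normal_b):
    """Angle between two adjacent face normals (degrees) and its deviation
    from the nearest multiple of ninety degrees."""
    cos = np.clip(np.dot(normal_a, normal_b)
                  / (np.linalg.norm(normal_a) * np.linalg.norm(normal_b)), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos))
    return angle, abs(angle - 90.0 * round(angle / 90.0))

# Hypothetical adjacent-face normals from a small building mesh: most pairs
# meet at right angles, so their deviations concentrate near zero.
normal_pairs = [
    (np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])),   # floor / wall
    (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),   # wall / wall
    (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.3, 1.0])),   # flat roof / slight pitch
]
deviations = [right_angle_deviation(a, b)[1] for a, b in normal_pairs]

# Binning the deviations yields a compact descriptor that exploits the
# regularity of built geometry instead of ignoring it.
histogram, _ = np.histogram(deviations, bins=[0, 1, 5, 15, 45, 90])
print(histogram)
```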
This article was originally published as part of my PhD thesis which is openly accessible here.
