WELCOME TO RV PERSPECTIVE

This is the third weekly-and-more update and comment on important things happening in architecture, technology, and sustainability by John Manoochehri of Resource Vision, and guests. Focus is international, with examples and links to work in the US, Sweden and the Nordic region.

RV Perspective is divided into the Short View and Long View sections, and will appear on Monday each week, with occasional extra editions. It will take between 5 and 15 minutes to read, depending on how interested you are and how fast you read.

Link / send this to other interested folks, and scroll below for more on how to interact: challenge the content, suggest other stuff, share your work, propose collaboration. Thanks! NB — highlight some text and a comment function pops up.

THE SHORT VIEW

HEADLINE NEWS

TEMPERATURE & TIME

In the last month, the news on climate change has … changed. Once again, the news has been the highest temperatures ever recorded. But recent findings suggest we may have only decades before irreversible effects are upon us, including chaotic weather and several feet of sea-level rise. This would mean “loss of all coastal cities, most of the world’s large cities and all their history”.

Are all urban developers, architects, politicians, citizens, running in panic in the streets? Nope. In fact, almost everyone tracking commercial real estate trends, in particular those from the investment world, sees change seeded with lots of financial opportunity—but no new mega risks (such as, well, no cities in 50 years).

So the big news is … no news? One cannot help thinking that this disconnect of market ‘insight’ from facts and timely action will certainly mean more use of computation in urban design in the near future. Because: computers don’t wait for decades — they don’t even wait for milliseconds. And they are ready to take in information and churn out analysis. We might not like to give over city planning to the geeks and machines — but if we don’t plan to act, who or what else is willing to track the actual speed of climate science?

(Update: Remember Sidewalk Labs from RV Perspective #1? They are hiring, and they are hungry for big new projects. Here’s betting designing rapid urban climate resilience is one of them.)

THINGS TO WATCH

AUTODESK RECAP

The world is going nuts for 3D output: printers, movies, VR/AR. But what about 3D input? Where does that come from? Conventionally it comes from fully computer-generated models, or laborious real-world sampling. But that is changing — because capturing 3D images only seems easy; in fact it is hard.

Capturing a scene in 3D requires more than a ‘pointcloud’, i.e. a representation of what is in front of the camera in terms of direction and distance from the lens. The pointcloud, when captured, outputs only a single surface, rather than separate objects, and no complete shape or context.

Instead, real 3D scanning involves inferring separate objects from limited pointcloud information, ultimately trying to spatially understand the geometry of objects and the space between them. It does this by cross-referencing overlapping pointclouds captured from different perspectives, and using algorithms to detect forms from the regularities revealed. This is what Autodesk ReCap does better than any other tool. Scanning a face or object? Kid’s stuff. Scanning a factory? Now we’re talking.

WATTY.SE

So why is there so little innovation in the energy-saving space — if it’s supposedly so important to businesses, consumers, and producers?

One reason is that information is very poor. Even if people know how much they consume, no-one really knows how they consume energy, because individual devices don’t tell you.

Swedish startup Watty brings big data guns to this problem. It is using algorithms, developed through compiling and studying real-world energy consumption datasets, to decompose the household-level energy consumption data into device-level consumption signatures. Energy companies, device companies, cities, and consumers can all use this data — which probably explains why Watty keeps getting more money from investors.
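
Watty’s actual algorithms are proprietary, so the sketch below is only illustrative of the core idea: decomposing an aggregate household load into device-level signatures. The device names and wattages are invented, and a real disaggregator would learn temporal signatures from data rather than match single steady-state numbers.

```python
# Illustrative only: not Watty's method. Invented steady-state
# draws in watts for a few hypothetical household devices.
DEVICE_SIGNATURES = {
    "fridge": 150,
    "kettle": 2000,
    "tv": 120,
}

def disaggregate(aggregate_watts, signatures=DEVICE_SIGNATURES, tolerance=30):
    """Greedily attribute an aggregate meter reading to devices whose
    signatures fit, largest draw first. Returns the inferred 'on'
    devices and the unexplained residual load."""
    remaining = aggregate_watts
    on_devices = []
    for device, draw in sorted(signatures.items(), key=lambda kv: -kv[1]):
        if draw <= remaining + tolerance:
            on_devices.append(device)
            remaining -= draw
    return on_devices, remaining

devices, residual = disaggregate(2270)
```

Even this toy version shows why the output is valuable: a single meter number becomes a list of devices, which is exactly the information energy companies, device makers and consumers currently lack.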

THE SHORT EXPLAINER

ACTIVITY-BASED OFFICE DESIGN

One trend gathering momentum in office design is activity based working. There’s no fixed definition of this, nor any agreed classics of the genre.

But the general idea is that designing an office with a focus on the actual activities that people engage in while working, and in order to work, is more promising than relying on some legacy logic of how people should work and baking that into a design.

This is not supposed to be the same idea as the Silicon Valley cliché of the playful, highly-serviced office, which is used to recruit and retain highly pressured, young tech talent. Nor is it the same idea as the co-working space, which promotes flexibility and informal connection.

But when you actually see these new types of office concept — it can be very hard to tell them apart. Clusters of desks rather than cubicle rows. Large cafe areas for informal working and meeting. Diverse spaces and shapes. Beanbags for private laptop sessions. Lots of beanbags. Always more beanbags.

The one thing everyone agrees on though: how we work…isn’t working.

THE LONG VIEW

COMMENTARY

AI GOES URBAN

In December, Google and NASA announced that some form of quantum computation works: 100m times faster than normal processing. In March, Google’s deep-learning computer AlphaGo thrashed the world’s best Go player, Lee Sedol, 4–1. In the same month, Google announced a partnership with the US Department of Transport to compute the hell out of urban transport data, and provide information-based solutions to congestion and more.

The first announced application is a directions app that gives you travel time to your destination including the time it would take you to park your car at the time when you travel — which amounts to a real-world disincentive to travel by car in many cases. What comes next should be obvious: the city, in every conceivable dimension, will be computed, increasingly by artificial intelligence, using machines for which scale and complexity are no big deal.

This will likely bring untold benefits to society, but people are already scared of it. They shouldn’t be. Computers are gentle giants. And they need humans to tell them what to do. The real question is: do we know what to tell them?

The computation community — commercial and research — has come to terms in the past decade with a particular insight: computers and humans have very different kinds of ‘intelligence’. Humans discover patterns, reveal implications, create meaning. Computers, generally, recognise patterns (that have already been discovered), process implications (that have already been revealed), and copy and transfer meaning (they are great at storing and moving data).

It is true that most powerful computers today are learning to learn. They use sequential information assessment models, so-called artificial neural nets, to generate a set of potentially interesting might-be-patterns; and use that semi-formed information to focus on things that might turn out to be interesting, which is so-called deep-learning.

But the world is vast and mysterious in ways that are literally invisible to humans, blessed as they are with complexity-blasting cognition.

AlphaGo didn’t beat Lee Sedol by brute-forcing an analysis of all possible moves and choosing the statistically best move — which is how the old chess computers used to beat Grand Masters. It didn’t do that because it couldn’t do that: the number of Go moves is vastly greater…than the number of atoms in the universe.

Instead, AlphaGo studied as many previously played games of Go as it could, and from that, using its deep learning, derived some pretty good ‘computational intuitions’ of how the game is played well. AlphaGo’s ‘brain’ is not an infernal machine cranking out inhuman solutions: it’s more like a seminar reviewing all greatest moves by great Go players of ages past. The ghost in the machine turns out to be actual ghosts of our culture: past souls and their experiences.

So if the next big computational challenge is the city, how will it work out in practice, now that we know quantum computing and AI are lining up to have a … go?

Consider this: the universe has 10⁸⁰ atoms, while the Go board has 2 x 10¹⁷⁰ permissible arrangements. So even though the Go board is only a 19 x 19 two-dimensional grid, using only black and white stones that stay fixed in position — that’s enough to massively overwhelm the capacity of the world’s smartest, most powerful, computer.
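
The arithmetic behind that comparison is easy to check with exact integer arithmetic (the atom count is a common order-of-magnitude estimate, and the Go figure is rounded):

```python
# Checking the text's comparison with Python's arbitrary-precision ints.
atoms_in_universe = 10**80       # common order-of-magnitude estimate
go_positions = 2 * 10**170       # legal 19x19 Go positions, rounded

# The board's state space exceeds the atom count by a factor of 2 * 10**90.
ratio = go_positions // atoms_in_universe
```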

And then, consider this: how many permissible arrangements … does a city have? A grid of essentially infinite points, in three dimensions, hosting objects of essentially infinite diversity, which can move and transform in essentially infinite ways.

So as big computation lines up to ‘help’ design the cities of the future, humanity has a handy message for all that hungry silicon: you better get your big boy pants on — because this ain’t no game of Go.

And thus we shouldn’t be at all worried that computers will try to make cities that don’t work for humans: they will learn what humans know about good city design in the same way as they learnt what humans know about how to win at Go. They will need it all the more because of the infinitely greater complexity of the city.

But what we should be worried about is that humans don’t know very much about good city design: we don’t have that much to teach the computers.

There are good reasons why the science of good city design is not developed. Partly, cities of the sort we live in are a few hundred years old at most. And cities didn’t, until the digital era, give us much data with which to test ideas and make theories — the sort of stuff we need to teach computers.

But there are also bad reasons for the lack of teachable city knowledge. Two of them are these:

  1. Cities were, and are, designed by powerful people who don’t care about science.
  2. Those people don’t really want to know if cities work for the folk who live there.

The computers are coming, and they are unstoppable. This is clear and people shouldn’t fight it. Because even a silicon army of quantum computers can’t boil the ocean of complexity that is good — even bad — urban design. Instead of ruthless overlords, these machines are more akin to model students: hungry for knowledge and narrowly diligent to a fault.

But before they start their engines — before we start their engines — we need to work out what to teach them.

THINGS TO STUDY

MATERIALS SCIENCE

Perhaps the most pernicious scientific confusion is to assume that knowing all phenomena are made of a finite set of constituents is some insight into the phenomena, rather than insight into the constituents. As it turns out, in almost every case, knowledge of the constituents doesn’t give us a solid grasp of phenomena. Chemistry can’t just be shaken out of physics knowledge, even when we understand the standard model of atomic and subatomic particles. Biology doesn’t shake out of chemistry, even if we understand the periodic table of elements.

This confusion, almost a reflex, leading to disciplinary and categorical reductionism in science — this phenomenon is ours, and we’ll provide the theory thank you very much — leaves so much knowledge on the table. Specifically, there are countless forms of knowledge and application that ought to be their own full disciplines, but instead are mutant hybrids struggling for visibility amid reductionism. One of these is materials science.

In short, there’s no magic, but materials science is not the same as physics, chemistry or engineering.

It’s just true, and yet not well known, that modern bullet-proof vests work as much because of what we must call materials science (including weave, compressibility, energy dissipation) as because of any physics-based, chemical or engineered properties of the materials.

This hidden knowledge is not even very hidden. For example, a block of a substance, and a fine powder of that substance, are different in ways that are profound from many perspectives including solubility, flammability, diffusion, reactiveness, and more. And yet this difference is not found in the pages of physics, chemistry or engineering text books. There is no ‘theory of powder’ in any of these books.

There’s not really much theory at all. Materials science is amazing these days because of all sciences, it has as much potential to improve, even save, the human race, as any of the classical science disciplines. And yet it sometimes feels like a pioneering series of experiments, like the early days of industrial science, when it seems like seeing what happens is just as important as working out how it happens. Reading materials science journals feels a bit like reading a magazine — the themes and concepts are often so very different from each other.

Materials scientists won’t readily agree. But if you happen to ask them for, say, the theory of how crystal structure relates to electric conductivity, or the theory of foam, or powder, or fluid diffusion, or energy dissipation, or compressibility, or even friction; or if you ask them the theory of how those theories relate to each other; or you ask what characteristics and configurations of materials are yet to be discovered — you’ll be looking, with them, out towards the edge of human knowledge. Even though you are talking about the stuff that permeates every moment of our lives — i.e. stuff.

Professionals will mumble importantly that materials science is founded on understanding some combination of structure, processing, inherent properties and contextual performance. But even if that description were the exhaustive list of features that make up material design, it still wouldn’t be a theory — just a list and descriptions. A list and description of the elements is not the periodic table; when Einstein wrote his theory of the photoelectric effect, or when Newton was presenting calculus, they were explaining things, not merely describing them.

So, pick up literally anything, and imagine manipulating and configuring the constituent substances. Not by changing the chemistry or the physics. Not by adding different substances or engineered details and functions: just configuring the same materials differently. And you are likely engaging in frontier science, right in your hands, right now. Not everything interesting in today’s world involves a computer.

SOFTWARE INDUSTRY

According to Dan Lyons, the Upton Sinclair of the modern software industry, tech companies don’t cherish their employees — they corral them into ‘digital sweatshops’ and bleed them for low-quality work. It’s soul destroying, apparently. But even if our view of software includes its worst, most industrial aspects — the ‘content factories’, ‘sales pits’, and other sausage-factory features — we ought to recognise this really isn’t an actual sausage factory. Or any kind of truly industrial employment. It’s not physically brutal, it’s not generally dangerous, and in most cases there is more than a veneer of concern for mental and social wellbeing. RSI is not losing your hand in a lathe; demoralising corporate dogma is not the same as bonded labour or totalitarian party-loyalty demands.

In fact, the best of software worklife, and there seems to be a lot of that, is well-paid and pampered, as well as intellectual, somewhat creative and influential. So say the folks at Google where CEO approval runs at 98%. But Lyons is raising the curtain on the most important industry of our time — and for many reasons, is worth studying.

The worst aspect of the software industry is the same problem for all modern industries: productionism. More must be made, more must be sold, market share must grow, profit must grow, for ever, without fail, faster and faster, bigger and bigger. The horrors Dan Lyons speaks of at HubSpot are not an inevitable outgrowth of tech culture. They are a natural result of the economic culture in which all businesses currently operate. It’s not a question of the tech leopard being unable to change its spots: the reality is, for all its scale, the software industry is just one of the spots on the economic leopard, which is resistant to change.

Where productionism itself comes from, why it exists, is a profound question, and the material prospects of the human race hang on it. One idea is that productionism is simply part of the application of logic to human material affairs. This doesn’t mean that productionism is necessarily logical — rather, it is merely logic-like: with inputs and outputs bookending intended, supposedly valuable transformations. And that is certainly a dominant feature of the software industry. Software is some kind of exemplification of logic, in fact: entirely logic-like, in some very real sense, even when what is produced — and why, how, for what purpose — seems itself entirely illogical.

But if logic, and its logic-like, mindless manifestation, productionism, is natural to software — how can it be that software is so closely associated with creativity? If anything, this balance and tension, between coded productionism and unfettered creativity, is akin to the craft and design disciplines, where established principles and highly structured tools are daily in tension with undefined goals and unstructured development processes.

Creativity is surely good in itself — but not all products of it are good. And it won’t do to dismiss productionism as a quirky dark side of input-output logic or a necessary side-effect of creativity. At least one other driver is — money. Obviously.

Software money is not like old-timey investment in industries where all money is slow money, and where profits measure an industry’s success and indeed its worth to society. Instead, a lot of the software industry is just a kind of weights machine — which fast money uses to bulk itself up. A lot of focus in the software industry is not the product that the consumer experiences — but the meta-dynamics of the company, above all the stock that investors buy and sell.

But software is having to grow up: it is literally joining the real world, vertical by vertical. Studying the character and soul of software in itself is interesting enough — but watching the economics, the soul and the power of so many sectors crack, soften, melt, reform and reform again as software transforms them is fascinating. It’s not wrong to say that technology is swallowing the world, but when the likes of Uber and Theranos are confronted by the friction of changing transport and health, and many sectors remain open for change — food, real estate, most of manufacturing — we see the story and its lessons are just beginning. It would be popcorn time, were it not that everyone on the planet — as consumer, designer, policy-maker, businessperson, thinker — is at the centre of many of these transformations.

THE LONG EXPLAINER

PARAMETRICS

The world’s industrial sectors are being consumed in sequence by technology and software. One of the anomalies in this process is how far people — even in technology — believe that the real estate and construction sector has progressed down this path. The standard desktop of the modern architect seems to imply that the journey is advanced: complex looking interfaces with high-powered desktop machines, generating technical drawings, all day long.

The reality of the situation is much different. Modern architectural software is not in fact very … computational. Sure, the positioning and design of an object in 2D or 3D digital space takes some computing power, and the detailed representation of that object — its rendering — takes quite a lot. But these are computationally intensive only in the visualisation, not in the design.

What might a more computationally intensive design workflow look like? Basically anything that involves the design decisions and actions being computed, rather than just the visualisation requirements. There is in fact some blurring of this distinction already because some of the features of the design tools are already computing design ‘choices’, albeit the very smallest of them. For example, if one is drawing or scaling a rectangle — one of the most foundational design steps in architecture software and building design — and one constrains the rectangle proportionally, the computer is doing more, at that moment, than merely visualising the rectangle. It is also computing the breadth and length with respect to each other, in order to display the correct proportion of the rectangle at any scale.

Similarly, in another almost equally foundational step, when one extrudes a rectangle into a cuboid merely by pulling on the surface of the rectangle, the computer is required to assume that the pulling gesture implies the intention not just to draw new surfaces up from the lines of the rectangle, but to compute those surfaces according to specific constraints: the bottom of each new surface must be constrained to the length of the line from which it is extruded, while the sides of the surfaces must be the same as each other, and as long as the arbitrary choice of the designer. These constraints are a description of how to draw a rectangle. This is so simple that it goes essentially unnoticed by the designer performing the gesture: but these steps are computational, not merely representational.
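
The two gestures described above can be reduced to a toy constraint sketch. This is illustrative only, not how any actual CAD kernel is implemented: proportional scaling solves for one dimension from the other, and extrusion derives the new faces from the base rectangle’s edges plus the designer’s chosen depth.

```python
def scale_proportionally(width, height, new_width):
    """Proportional constraint: the computer solves for height
    from the new width via the fixed width/height ratio."""
    ratio = height / width
    return new_width, new_width * ratio

def extrude(width, height, depth):
    """Extrusion constraint: each side face is tied to the edge it
    rises from, plus the arbitrary depth chosen by the designer."""
    return {
        "base": (width, height),
        "side_faces": [(width, depth), (height, depth)] * 2,
    }

w, h = scale_proportionally(4.0, 3.0, 8.0)  # height follows: 6.0
solid = extrude(4.0, 3.0, 5.0)              # four derived side faces
```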

To go further into how computation is seriously active in architectural design right now, it’s useful to understand the difference between form-giving and form-finding in architectural design. Form-giving is the process of taking some space in hand, and giving it a form of your choice: it’s the creative act of shaping space.

By contrast, form-finding is the more analytic act of determining — finding — which shapes or spaces satisfy a particular constraint or consideration. For example, if I give a building entryway a shape representing the petals of a flower, I have engaged in form-giving. Conversely, if I want to find a spatial format for the entryway that minimises wind coming inside the building, I need to analyse phenomena including prevailing wind, and limit the worst options for the proposed design.

It would seem natural, because of its inherently higher analytical burden, that form-finding would be the place that computation in architecture would take root. As it turns out, it is in fact speculative form-giving that has been the breakaway leader in computational design.

The way in which this has taken place is that, in a simple version, one or more aspects of one or more of the architectural elements is identified with a quantitative variable: through changes in that variable, the architectural element changes. Those changes can be sophisticated or simple, and can be related to any other aspect of the design model or approach: but the point is that this variable is now a parameter of the design. Computationally, this means it is an ‘input’ to an algorithmic transformation. This simple conceptual step — identifying a quantified but variable part of the design as an input to a transformation process that changes the current model into its next form — is the basis of computational, and thus so-called parametric, design.

The way in which this is used for form-giving is conceptually simple: variables, or parameters, are set up, and then changed with free creativity, to create shapes that would otherwise be much harder to draw, or even imagine. If one has a series of lines that connect at right angles, and extend for the same height and width — a conventional fenestration profile, for example — this kind of computational design could easily, through direct manipulation of parameters shared across all windows (such as angle, height, width), radically transform the facade. Adding an additional parameter to the transformation — for example, changing each column of windows more or less than the previous — makes for highly dynamic forms.
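
A minimal sketch of that facade transformation, with invented parameter names, might look like this: every window is generated from shared parameters, and one extra per-column parameter turns a uniform grid into a dynamic profile.

```python
def facade(columns, rows, width, height, column_delta=0.0):
    """Generate a grid of windows from shared parameters.
    column_delta (hypothetical name) varies the window height per
    column, producing the 'dynamic form' described in the text."""
    windows = []
    for c in range(columns):
        for r in range(rows):
            windows.append({
                "col": c, "row": r,
                "width": width,
                "height": height + c * column_delta,
            })
    return windows

uniform = facade(5, 4, 1.2, 1.5)                     # conventional grid
dynamic = facade(5, 4, 1.2, 1.5, column_delta=0.3)   # sloping profile
```

The point is not the code itself but the workflow: changing `column_delta` once redraws all twenty windows, which is exactly what a manual drawing process cannot do.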

This is computational design, and it is very strong in contemporary architecture under the parametric design, or parametrics, tag. But the idea of parametric / computational design can be approached from a form-finding perspective also.

In this case, rather than take a notional set of parameters, and set them in highly speculative ways, very specific parameters are sought and these are manipulated or constrained in very specific ways. For example, one parameter for a building might be ambient temperature; and it might be desirable to keep the ambient temperature at any time of day around 22˚ Celsius. This parameter can therefore be kept fixed, and the computational model can ‘find’ forms that deliver this desired result. And this approach will, by necessity, choose to focus on some secondary parameters rather than others. For example, the percentage of the facade given over to windows (glazing ratio parameter) would likely be a critical factor in determining the ambient temperature at any one time. But the placement of the doors probably would not be a relevant parameter.
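
A toy form-finding loop, with an entirely invented thermal model (no real simulation), shows the shape of the idea: the target parameter is fixed at 22˚C, and the search ‘finds’ a glazing ratio that satisfies it.

```python
TARGET_TEMP = 22.0  # the fixed target parameter, in Celsius

def ambient_temp(glazing_ratio, outdoor_temp=10.0, solar_gain=30.0):
    """Invented linear model for illustration only: more glazing
    admits more solar gain. A real workflow would call an energy
    simulation here."""
    return outdoor_temp + glazing_ratio * solar_gain

def find_glazing_ratio(target=TARGET_TEMP, step=0.01):
    """Sweep candidate glazing ratios from 0.0 to 1.0 and return
    the one whose predicted temperature is closest to the target."""
    best, best_err = None, float("inf")
    for i in range(101):
        ratio = i * step
        err = abs(ambient_temp(ratio) - target)
        if err < best_err:
            best, best_err = ratio, err
    return best

ratio = find_glazing_ratio()  # about 0.4 under this toy model
```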

Once they understand it, technologists and scientists are usually shocked that computers are by and large not being used to solve real-world problems in architecture, because they are predominantly used for form-giving as opposed to form-finding.

Where the opportunity lies, however, is for a new generation of designers to define the problems, evolve technical approaches to solving them early in the design phase, and include computers to the limit of their capability in solving them. This leads to the possibility that architects will need to be as focussed on the computably-optimisable problems of society, environment, and businesses, and connecting these to form-finding tools, as to classic form-giving creativity and stylish presentation. That might seem very new and avant-garde: until one realises it is just what has happened in every discipline that has embraced technology and computation, so far.

RESOURCE VISION

This is a time of change. The opportunity has never been greater to create spaces, lifestyles and systems, at all scales, which enable quality living, for more people, with fewer resources.

Resource Vision is an architecture and urban studio reinventing structures, experiences and productivity, in an era of transformation, led by John Manoochehri.

We base our work on tools and knowledge developed through practice, research, and teaching. Work happens in four categories:

Design | Tech & Lifestyle | Learning | Engagement

Check out the website for skills available — mostly around the issues mentioned above — in particular see the Work and Collaborators pages for lists of projects and partners.

Get in touch at twitter.com/resourcevision or hello@resourcevision.se for more, including comments and requests for content in future editions of RV Perspective.

--


Resource Vision
RV Perspective
