Should We Automate the Planning System?
Automated planning could create new opportunities to help us build better cities, but it introduces thorny issues of bias and accountability.
These days it’s very popular to say that the planning system is ‘broken.’ Everyone is fed up with it, from the communities violently displaced by dodgy “regeneration” schemes, to the overstretched planners who spend all day answering phone calls about mundane application queries, to the developers who spend thousands of hours and hundreds of thousands of pounds arguing over every tiny detail of their plans. The system is incredibly complex, often contradictory, and always critically under-resourced.
The English planning system is based on relentless negotiation and renegotiation at every stage of a project. This creates uncertainty for developers, heightening risks in the already risky business of building speculatively. Their response is usually to throw as much money and as many resources at the problem as possible, sometimes even paying the salary of a planning officer to ensure their application is expedited. Since local authorities typically have a fraction of the resources, skills and funding of the developers they are supposed to be regulating, developers often get what they want, at the expense of everyone else.
This is one of the reasons people get so frustrated with planning: often it feels like no matter what the local community says or does, all of the decisions have been predetermined behind closed doors. One of the most popular proposed ‘solutions’ to these issues is to automate the planning system. By ‘updating’ the planning system with increased data-collection and surveillance of cities, proponents believe local plans will become more astute, and planning decisions will be made in a more objective and transparent way.
This automated system is likely to manifest in two ways. The first is the introduction of algorithmically-generated decisions on planning applications. Some local authorities have already begun experiments to automatically screen householder development applications for compliance with planning regulations before they are assessed by a planning officer. Others, such as Milton Keynes, hope to entirely automate decisions on permitted development applications by the end of the year. While this will save planners a lot of time and effort on mundane, routine tasks, it remains to be seen whether more complex, large-scale planning applications will also be assessed by algorithms in the future.
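A screening pass of this kind is, at heart, a set of codified rules. As a rough sketch of the idea (the thresholds and field names below are invented for illustration, not drawn from any real authority's permitted-development rules), a householder extension check might look like:

```python
from dataclasses import dataclass

# Hypothetical rule thresholds -- illustrative only, not real
# permitted-development limits.
MAX_EXTENSION_DEPTH_M = 6.0
MAX_EAVES_HEIGHT_M = 3.0

@dataclass
class HouseholderApplication:
    extension_depth_m: float
    eaves_height_m: float
    in_conservation_area: bool

def screen(app: HouseholderApplication) -> list[str]:
    """Return a list of compliance flags for a planning officer to review."""
    flags = []
    if app.extension_depth_m > MAX_EXTENSION_DEPTH_M:
        flags.append("extension depth exceeds limit")
    if app.eaves_height_m > MAX_EAVES_HEIGHT_M:
        flags.append("eaves height exceeds limit")
    if app.in_conservation_area:
        flags.append("conservation area: manual assessment required")
    return flags

# An application inside the limits produces no flags; one outside them
# is routed to an officer with the reasons attached.
print(screen(HouseholderApplication(4.0, 2.8, False)))  # []
print(screen(HouseholderApplication(7.5, 2.8, True)))
```

The point of such a pass is triage, not judgement: clear cases are waved through or flagged with reasons, while anything that trips a rule still lands on a human officer's desk.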
A second way to automate the planning system is to create complex integrated digital models of the built environment, in what is becoming known as City Information Modelling (CIM). Similarly to Building Information Modelling (BIM), where a single model seeks to encompass all of the information required for a construction project, CIM is an attempt to quantify all of the qualities of a city and represent them in a digital model. This model could then use ‘machine learning’ to decide where new development should go, distribute public resources, and assess planning applications. In order for the CIM to reflect reality and accurately predict future behaviour, data must be continuously gathered from the city and fed to the simulation. This requires comprehensive surveillance of as many aspects of the city (and its citizens) as possible.
Putting aside the obvious privacy issues inherent in this proposal, CIM has the potential to improve the planning system in lots of different ways. Firstly, supporters argue that the only way the built environment can keep pace with the increasing rate of change in our society is by continuously assessing data and updating plans in response. This reactive plan-making might address some of the issues caused by ‘plan lag’, where start-ups disrupt familiar patterns in unpredictable ways at a rate faster than local plans can moderate.
Secondly, as the algorithm will need quantifiable metrics to be able to weigh the merits of a proposal, CIM might lead to more regulation of building standards. This could help reduce some of the uncertainty and negotiability of the current system, leading to better outcomes for both communities and developers. Finally, if everyone were working from the same model — planners, developers, and communities alike — some of the current issues with transparency and back-door dealing might be solved. CIM has the potential to act as a great communication tool: communities would be able to see development proposals in their 3D context, and clash-detection software could visualise where planning applications deviate from agreed standards. Perhaps then knee-jerk NIMBY-ism might be tempered by a more informed discourse about the predicted impacts of different proposals.
On the other hand, introducing AI decision-making programs to public institutions also introduces a whole host of new ethical issues. One particularly concerning problem is that neural networks tend to operate as “black boxes.” It’s very difficult to follow the chain of logic they have used to make decisions, which means that explaining why certain decisions have been made becomes nearly impossible. One group looking at this problem of ‘explainability’ is the AI Now Institute, who have recently released an Algorithmic Accountability Policy Toolkit to help public bodies understand the potential issues with using algorithms in decision-making. They argue that spurious correlations can cause algorithms to make odd decisions, noting that, for example, “a model that explains that it denied someone a loan because they were born on a Tuesday is not very useful.”
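The loan example can be reproduced in miniature. In the entirely synthetic sketch below, approval is really driven by income, but the sample happens to over-represent Tuesday-born applicants among the denied — a sampling artefact, not a causal link. A naive model that scores features by predictive power alone will rate the irrelevant one highly:

```python
import random

random.seed(0)

# Entirely synthetic data: each record is (born_on_tuesday, income, approved).
# Approval is determined purely by income, but the sample is constructed so
# that denied applicants happen to skew Tuesday-born.
data = []
for _ in range(200):
    income = random.uniform(10, 100)
    approved = income > 40
    born_tuesday = (not approved and random.random() < 0.8) or \
                   (approved and random.random() < 0.1)
    data.append((born_tuesday, income, approved))

def accuracy_of_rule(predict):
    """Fraction of records where a one-feature rule predicts the outcome."""
    return sum(predict(b, i) == a for b, i, a in data) / len(data)

# The spurious rule scores well despite having no causal basis at all.
acc_tuesday = accuracy_of_rule(lambda b, i: not b)   # "deny if Tuesday-born"
acc_income = accuracy_of_rule(lambda b, i: i > 40)   # the true mechanism

print(f"'not born on Tuesday' rule accuracy: {acc_tuesday:.2f}")
print(f"income-threshold rule accuracy:      {acc_income:.2f}")
```

A model trained on such data can quite honestly report that being born on a Tuesday predicts denial, which is exactly the kind of ‘explanation’ the toolkit warns is useless for accountability.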
In the case of automatic planning, this may mean that the CIM mysteriously decides to approve a 50-storey tower next to your living room window because each day an average of 17 dog-walkers pass by on their way to the park. Needless to say, this is not the path to democratic accountability and harmonious community consultation. There is also a growing concern that algorithmically-generated decisions tend to amplify biases inherent in datasets, a point made terrifyingly clear in Virginia Eubanks’ book Automating Inequality. For planning, this could result in aggressive social cleansing, as the algorithm seeks to rectify “underperforming” areas of the city.
Automating the planning system is therefore a complicated task. The current government’s war on local authorities means that planning departments have already been stripped back to their bare bones, and automating the entire system may sound like a tempting option. Though there is a lot of potential to improve the efficiency and legibility of planning decisions, automation also raises difficult questions about how decision-making should be carried out in the 21st century. As a society, we need to decide how much licence to give to AI decision-making programs, and how they can be held accountable for their recommendations. Perhaps, then, the key to this project is not building a perfect CIM, but designing new methods of communication and avenues of redress between people and algorithms.