The Challenges to Inclusive, Open, and Smart Cities: Speed, Opacity, and Outsourcing
By Jean-Noé Landry and Suthee Sangiambut (OpenNorth)
With over 50% of the human population now living in urban areas, it is no wonder that we are constantly re-moulding and re-imagining our cities. No longer is it enough to be a big metropolis. Cities must be connected, self-aware, intelligent entities; they must be smart.
Over recent years, the term ‘smart’ has come to imply a certain idealised view of the city, where anything and everything can be monitored through sensors (the Internet of Things) and the city itself is governed by software that automates much of decision making and everyday service provision. This vision is manifested in a variety of products from the private sector, often branded as smart city solutions. A 2012 McKinsey article claims they will result in “50 percent reduction over a decade in energy consumption, a 20 percent decrease in traffic, an 80 percent improvement in water usage, a 20 percent reduction in crime rates”. Such development, however beneficial, is not without risks. We argue that untempered enthusiasm for smartness can undermine our efforts to create transparent and accountable cities through open data and open government.
Canada has its own brand-new case study of smart optimism in Toronto’s Sidewalk Labs, an Alphabet company that will implement, in collaboration with Waterfront Toronto, a massive urban regeneration and smart city project for Toronto’s Eastern Waterfront. This has already raised concerns among local commentators, some of whom see it as an exercise in technological solutionism, while others note that the approach disregards urban planning knowledge and even politics in favour of algorithmic governance informed by market demand. Concerns have been raised over equity, privacy, and government capacity, along with calls for more debate on these topics.
Certainly, a collective pause with more time for debate would be a step in the right direction. But with a topic as elusive as smart cities, what are the core issues at stake? Any technology or solution can exacerbate problems of social inequity or data privacy. The difference is that smart cities, however they are defined, will drastically increase the speed and opacity of government decision making, while control over the tools and processes used to make those decisions will remain outside government. We see three problematic factors: speed, opacity, and outsourcing. These characteristics of smart cities problematize the relationship between citizen and government, particularly the tension between our desire for government to deliver public services efficiently and effectively on the one hand, and our desire for government to be transparent, accountable, and engaged with the public on the other.
Speed, or velocity, corresponds to the rate of data collection, processing, and flow through smart city solutions, including sentiment analysis of social media data, Internet of Things (IoT) sensor data, and crowdsourced citizen contributions (such as 311 reports). It also corresponds to the rate at which decisions are made. Smart cities that automate traffic regulation, or the dispatching of police through predictive crime analytics, dramatically compress the time from data collection to analysis to decision, leaving little room for human intervention.
Speed also poses problems for citizen engagement in the city. The open data and open government movements, which governments around the world have committed to, require the provision of open data, and smart cities call ideals such as open by default into question. Open by default, a principle of the Open Data Charter whereby data and processes within government are made transparent to the public, is seeing increasing adoption and is framed within the context of government transparency and accountability. Politically, the idea that governments should be open by default is a good one: it improves citizen trust and engagement with government, and reduces the potential for corruption. Yet it will be difficult to achieve in a system that draws upon multiple, potentially real-time, sources of data to make decisions, because open by default remains a product of the paradigmatic perception that government is ‘slow’. Smart cities are all about speed, and this may force tradeoffs with transparency and citizen engagement. Should we really be expected to monitor government transparency in real time? What political recourse do we have to prevent undesirable decisions from being made?
Intelligent systems, decision support, predictive analytics: these systems rely on models (run by algorithms) to process data and supply the user (such as a city official) with analytical outputs or a set of recommended actions. Opacity in the internal workings of software solutions hides the true source of power behind decision making from citizens and government officials alike. Without a dedicated team of computer scientists, geographers, software engineers, mathematicians, and more on staff (a far-fetched expectation of any government), a city cannot hope to truly understand its own decision making and resource allocation when these are done through black-boxed solutions.
As with the issue of speed, opacity has additional effects on citizen engagement. Open governments ideally collaborate with civil society and citizens with the help of open data. We know that publishing open data already presents challenges of data literacy and the digital divide: to engage with government via open data, citizens need to be able to understand and analyse data for themselves. The smart city exacerbates this problem of literacy, because AI and algorithms cannot be interrogated or challenged by anyone (even government) if they are made legally and technically opaque.
Take IBM’s Intelligent Operations Center, a smart city solution that integrates multiple data sources (including government data, social media, and citizen reporting) into a single interface. When much of the data processing and analysis is done by the software, where is the point of entry for citizens to provide feedback to a city official? Which social network analysis (SNA) methods were used to analyse the social media data? Neither citizen nor city official will share an understanding of how a decision was made, yet we place our trust in such solutions because of their provenance. The issue is further complicated by the introduction of non-linear systems, such as neural networks, which can produce unanticipated results from input data. While IT solutions may decrease the workload required to process data, public servants still need to be aware of issues, ranging from data processing to data privacy, in order to monitor system output. Spatial aggregation techniques have been demonstrated to be fallible as privacy protection, with researchers having shown that medical patients can be identified from aggregate data. There are also tradeoffs between aggregating to protect privacy and preserving spatial distributions for data analysis. It is a city official’s duty to understand their own decision making, communicate it to citizens, and be accountable for it; the excuse that the technology may be too complicated (but its results should still be trusted) will not be accepted in an era of open government.
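The small-cell problem behind such re-identification can be illustrated with a minimal sketch (the data, column names, and threshold below are hypothetical, not drawn from the research cited above): when an “aggregate” cell covers only one or two individuals, the published count effectively describes those people.

```python
from collections import Counter

# Hypothetical aggregated release: counts per (postal prefix, age band).
# If only one resident of a neighbourhood falls in an age band, the
# "aggregate" row describes that person exactly.
records = [
    ("H2X", "20-29"), ("H2X", "20-29"), ("H2X", "20-29"),
    ("H2X", "60-69"),                     # a single person in this cell
    ("H3A", "30-39"), ("H3A", "30-39"),
]

cell_counts = Counter(records)

def risky_cells(counts, k=3):
    """Return the cells whose count falls below a k-anonymity threshold."""
    return {cell: n for cell, n in counts.items() if n < k}

print(risky_cells(cell_counts))
# Flags the ("H2X", "60-69") and ("H3A", "30-39") cells as below the threshold.
```

Raising k protects privacy but discards more cells, which is exactly the tradeoff with preserving spatial distributions noted above.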
Smart cities are complicated by the fact that most of these solutions, and technological development in general, are situated in the private sector. While programmers are plentiful in the labour market, information systems are becoming increasingly complex and out of reach for governments to build or maintain. This brings the threat that government will become hostage to its own outsourced IT systems, with an even greater dependency than the monopoly concerns raised in the 1998 United States v. Microsoft antitrust case.
Government data collection and data management are certainly not perfect. In a world of data-driven decision making, it becomes ever more important for citizens to be represented in data. Those not represented (in a census or survey, for example) become invisible to the state and are thus disadvantaged by the resource allocation decisions of city officials. This inequity in data representation has been observed in the City of New York, where it has been termed ‘data poverty’.
While smart systems, such as sensor networks, can potentially close these gaps, when such systems are opaque and outside government control it will become increasingly difficult to detect areas experiencing data poverty. Government capacity to control, monitor, and re-shape the tools at its disposal is therefore incredibly important to maintain.
All three of the problem areas described above fall under one overarching theme: a loss of government control over its own data and decision making. Cities across the world will soon face the fast-approaching conundrum of meeting our service and efficiency needs (delivered through technology) while creating transparent and accountable government (delivered through the same technology).
Already, a list of critical questions is being developed by prominent Toronto-based open government advocate Bianca Wylie, with contributions from others including Dr. Pamela Robinson (Ryerson University) and Dr. Renee Sieber (McGill University). One additional tool could be a litmus test, or checklist, that public servants perform themselves, tailored to each public servant’s responsibilities. We suggest a few questions by theme:
Data and decision making
- Do I know where data that supports my decision making comes from (and can I identify sources of data in aggregated datasets)?
- Can I access and transform the ‘raw’ data myself?
- Do I know how it has been processed and validated?
- Can I intervene in automated analysis or decisions in a timely manner?
- When I engage with the public, can I describe my decision making process accurately and completely, and demonstrate an evidence-based process?
ATIP and open data
- How do I ensure that data delivered through ATIP requests and open data is not personally identifiable?
- How do I maintain data integrity when delivering open data, while ensuring privacy?
- How can we perform a data inventory of smart city data to prioritize open data release?
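A public servant could operationalize the identifiability question above with a routine small-cell suppression step before release. A minimal sketch, using hypothetical 311-style counts and a hypothetical threshold K (the specific fields and value are illustrative, not a prescribed standard):

```python
# Before publishing aggregate open data, suppress any cell whose count
# falls below K, trading some detail for privacy protection.
K = 5

raw_counts = {
    ("Ward 1", "noise"): 42,
    ("Ward 1", "graffiti"): 3,   # too few reports: publishing risks identifying reporters
    ("Ward 2", "noise"): 17,
}

# None marks a suppressed cell in the published release, so data users can
# distinguish "withheld for privacy" from "zero reports".
published = {
    cell: (n if n >= K else None)
    for cell, n in raw_counts.items()
}
```

Marking suppressed cells explicitly, rather than dropping them, helps maintain data integrity for analysts while still protecting privacy.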
The Future of Open Smart Cities
Those who attempt to rationalize the smart city trend as having bottom-up and ‘humanist’ potential make the mistake of viewing smart cities, AI, and algorithms simply as a problem of applying technology. Even Robert Hollands’ argument that smart cities need to “create a real shift in the balance of power between the use of information technology by business, government, communities and ordinary people who live in cities…”, and that “the ‘real’ smart city might use IT to enhance democratic debates about the kind of city it wants to be”, misses a crucial point: if governments outsource their capacity (and responsibility) to shape IT solutions, losing control over process, the desired outcomes of those solutions are neither guaranteed nor sustainable.
Multi-stakeholder engagement and data sharing models that enable us to identify shared interests, come to common understandings of values (including privacy), and negotiate the terms of collaboration could be under threat in a smart city that disaggregates collectives and communities into individualized demand. The issues outlined above will become even more pressing with the recent announcement of Infrastructure Canada’s Smart Cities Challenge, a major $75 million investment by the Government of Canada in smart city solutions. Now that considerable investment and momentum are building behind smart cities, our big challenge will be to implement open government principles within them. How can we move from merely applying technology to creating inclusive, open, and smart cities?
Follow us on Twitter OpenNorth | NordOuvert