Call for AI startups for safer cities

Stonly Baptiste
Published in Urban Us
Aug 5, 2019 · 11 min read

Many of our startups use artificial intelligence (AI) to fight climate change by improving city life, from solutions that promote electrification to ones that reduce the negative consequences of urban density, like traffic and public-health strain. We’re increasingly seeing opportunities to invest in AI startups for public safety, but these bring unique challenges, including data bias, privacy concerns, potential political abuse(11), and more. We’d love to see more opportunities to invest in public safety and other city sectors(1) with startups(2) building AI-based(3) solutions, and we think we can help overcome many of the challenges.

Public safety is an increasingly complex sector as urban populations grow while struggling with broader issues like crime, environmental resilience, and national security. The public safety sector includes emergency response teams like fire departments and law enforcement, who need more help than ever leveraging tech to keep us, and themselves, safe.

Consumer tech and internet platforms are playing an increasing role in public safety. Nest and Ring (Amazon) increasingly help gather neighborhood crime data, and Amazon and Facebook are amassing massive datasets useful for identifying people. While these and other large firms are shoring up public safety solutions through partnerships(12) and acquisitions(4), many new firms are being built with improving public safety as a first principle. One Concern* uses AI to help first responders prioritize maps to minimize loss of life after a natural disaster. Mark43* helps police departments automate the organization of data within and between jurisdictions to make officers more efficient (and less stressed). Haas Alert* reroutes traffic to clear the way for emergency vehicles. 3AM Innovation* helps plan and track firefighters to optimize personnel deployment and minimize lost lives. While most investment activity in AI is geared toward healthcare, finance, and cross-industry platforms(13), our portfolio, including the four startups just mentioned, represents applied uses of AI to improve city life and sustainability across sectors like mobility, construction, and public safety.

We are increasingly seeing solutions focused on surveillance and incident or threat reporting. These approaches have already generated concerns, from privacy and mass surveillance to racial bias and profiling. The introduction of any new technology to the urban fabric should be debated; historically, tech has shaped our lives in cities for better and for worse. And any new solution carries the potential to create negative externalities, like emissions (cars) and inequity (again, cars)(14).

AI startups for public safety can potentially help avoid repeating the mistakes of the past by, for example, tackling discrimination head on or providing new types of oversight for misconduct from public safety personnel.

AI-based solutions could prove even more impactful for cities than automotive technology was at the turn of the 20th century (though AI is already changing the automotive industry(5)).

But “if deployed with great care, greater reliance on AI may well result in a reduction in discrimination overall, since AI programs are inherently more easily audited than humans,” notes Stanford’s AI100 report(6).

“It’s a valid concern that AI may become overbearing or pervasive, but the opposite is also possible and AI may enable policing to become more targeted and used only when needed. We’re hopeful that AI may also help remove some of the bias inherent in human decision-making.” — Stanford’s AI100 report

The number of AI startups developing city-focused solutions is encouraging; it signals a new wave of upgrades with the potential to improve quality of life, our relationship with the environment, and public safety in cities. The opportunity is too large to ignore and spans economic, social, and sustainability needs. So we’re inviting more dialogue around ethics as we seek to better support founders building the next wave of city upgrades.

Our investment process encourages dissent and debate, and no deals generate more discussion than AI startups for public safety. This dialogue has advanced our understanding of the challenges in building ethical AI, and it benefits from our diverse backgrounds and lived experiences. We also benefit from outside resources who speak for other stakeholders, including our startups’ customers themselves, who freely share their concerns and aspirations. Our growing network of thousands of beta customers across sectors, including citizens and consumers, business leaders, and city workers, has been reliably ready to test new solutions and provide feedback. Some of our companies have gone on to create stakeholder-led ethics boards to help monitor and shape the algorithms behind their solutions. We embrace the impetus to question approaches and risks, and the motivation of disparate stakeholders to help us find and support the most ethical yet game-changing solutions.

Our team has navigated issues around surveillance and monitoring technologies before, such as how to avoid gathering personally identifiable data when building classification models, or how to keep private data out of hosted datasets. We’ve also helped create data-gathering approaches and tools that combat the “lack of data” challenge while increasing privacy (small datasets force simpler, higher-bias models to avoid overfitting to the data(8)). In one instance, after replacing cameras with noise and seismic sensors to protect privacy, we were still able to generate highly accurate probability scores. Our startups have worked with our dedicated engineering and UX design experts to shape new interfaces and algorithms, source bias-reduced datasets, and flag and reduce bias-inducing correlations. We also like helping founders think through platform security, as most solutions either are hosted on platforms constantly targeted for vulnerabilities or live on IoT devices where security is generally an afterthought(10).
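To make the sensor-swap idea above concrete, here is a minimal sketch (in Python, with scikit-learn) of a classifier that emits incident probability scores from non-identifying noise and seismic features instead of camera footage. Every feature name, value, and threshold is invented for illustration; this is not the actual model from the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical illustration: classify "incident likely" from non-identifying
# noise and seismic sensor readings, so no personally identifiable data
# (faces, plates, voices) is ever collected or stored.
rng = np.random.default_rng(0)
n = 500
# Synthetic features: [peak decibel level, vibration amplitude, duration (s)]
quiet = rng.normal([55, 0.2, 1.0], [5, 0.1, 0.5], size=(n, 3))
incident = rng.normal([85, 0.8, 4.0], [5, 0.1, 0.5], size=(n, 3))
X = np.vstack([quiet, incident])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The output is a probability score, not an identification of any person.
score = model.predict_proba(X_test[:1])[0, 1]
print(f"incident probability: {score:.2f}")
```

The design point is the feature set: because the inputs carry no identifying information, the model can only ever score events, never people.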

Your ability to build solutions that take ethics into account is only part of the battle; knowing what those ethical guidelines should be is another. In addition to the technical elements of AI development, some of the deepest insights we can offer come from our understanding of how policy shapes which models and features you can and cannot use, no matter how accurate. To keep best practices, and the best people to work with, in mind, our portfolio founders also help each other by sharing AI-centric best practices, tools, and relationships such as potential hires(7).

Privacy and data security are at the forefront of concerns in our increasingly internet-connected and highly surveilled cities. And although AI has been vilified in many instances, we believe it is less prone to self-interested bias than humans(9), and there’s much more reason to be excited about AI than scared of it. We’re excited about the potential to make cities safer and more sustainable, so we steadfastly seek out the best founders to work with. We wrote this to share our learnings from multiple decades of working on tech for better cities and from our research over the last six years on AI and the changing political, regulatory, and social urban landscape, and as a rallying cry for founders to build more solutions for safer and more sustainable cities. If you’re building an AI solution for cities, or specifically an AI startup for safer cities, please consider working with us. We want to help you build not just ethical but also adaptable, generous, or even benevolent AI for safer cities.

See below for reference articles, deeper comments cited throughout this article, and for more related reading(15).

Thank you for reviewing and nudging previous drafts of this along: Leah Edwards, Yi Han, Tamar Lucien, Caitlin Roberson and the Urban Us Team (Shaun, Liz, Mark and Anthony).

* — One Concern, Mark43, Haas Alert and 3AM Innovation are all Urban Us portfolio companies. You can read recent news on One Concern here: https://www.bizjournals.com/sanfrancisco/news/2018/03/01/disaster-response-startup-one-concern-cities.html

And you can see more of our portfolio at https://urban.us/portfolio/

1 — Why city sectors? Cities are already at the forefront of introducing new AI solutions to our lives, including self-driving vehicles, AI-assisted surveillance, and AI-enhanced robotics for construction and infrastructure maintenance. Self-driving technology, in particular, sits at a unique nexus of mobility history, safety, and tech. Mobility has been shaping cities since before the automobile was introduced, bringing increased economic opportunity but also negative consequences like urban sprawl and increased emissions. More recently, with the very concept of what a driver is being redefined, we’re seeing new questions arise about sprawl, safety, and emissions (https://fortune.com/2016/02/10/google-nhtsa-driver/). Cities are an increasingly large portion of the global economy and population, which introduces challenges for security and for the resilience of digital and physical infrastructure. We want good density to lower individual carbon footprints, to lower the environmental impact of providing utilities, and to increase resilience against environmental disasters. But with density come safety and crime challenges. From retail and traffic to borders, we’re already being heavily surveilled and tracked; AI can help us do better. We share some of our learning about building and investing in urban technology for fighting climate change by upgrading cities in our Urbantech Startup Playbook and our new Urbantech Investor Playbook.

2 — Why startups? Startups offer us a chance to build and scale new AI-powered solutions that are designed for fairness and can make cities safer for all. We help founders who are bringing new solutions and technology to cities by funding them and making sure they have the resources to embed ethics and scale into everything they build. With emerging technologies like AI, which hold broad implications for safety and privacy in cities, we have a unique opportunity, and an obligation, to get things right this time. Before we get there, though, there are a lot of questions to answer about AI’s readiness as a deployable toolset for building solutions, how ethics and privacy are factored in, consumer acceptance, and the evolving regulatory environment.

3 — Why AI (and what is it)? AI, to be clear, is not a panacea, nor is it, as the name might suggest, true intelligence in the human sense. It’s also not Skynet. AI is a continuum of technologies that have evolved over decades, from basic algorithms to more advanced machine learning like neural nets and reinforcement learning. Today’s production AI tools will seem as dull as flip phones in a few decades, and even at the edges of AI research, the tools don’t propose to give meaning to pixels and other data points beyond what we train them to do. We have a responsibility to shape the evolving AI toolsets toward ethical usage and uses, such as improving public safety with less bias. For example, much of “dumb” surveillance tech has been used to solve crime, almost always with human bias. One use of AI might be to prevent crime based on surveilled behavior (movements) rather than physical features. Most (justified) fears of surveillance relate to facial recognition, but might we use AI to avoid the need to recognize faces in the first place?

There are also a number of challenges to AI development itself, such as data quality and inherently biased training data. AI is a work in progress at best and a Pandora’s box at worst: it’s a useless tool without large, useful datasets and a weapon in the wrong hands, while holding the promise of being our greatest hope for better ways to live, get around, operate, and build better cities.
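To illustrate the behavior-over-physical-features idea from note 3, here is a hedged sketch of anomaly detection on anonymized movement features alone, with no faces or other identifying attributes involved. The features and numbers are entirely hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical sketch: flag anomalous movement patterns using only
# trajectory features (average speed, direction changes, dwell time),
# never faces or other physical features.
rng = np.random.default_rng(1)

# Synthetic baseline: [avg speed (m/s), direction changes/min, dwell (s)]
normal_traffic = rng.normal([1.4, 2.0, 10.0], [0.3, 1.0, 5.0], size=(300, 3))
detector = IsolationForest(random_state=1).fit(normal_traffic)

# score_samples returns lower values for more anomalous observations.
typical = np.array([[1.5, 2.0, 12.0]])   # ordinary pedestrian movement
erratic = np.array([[4.0, 15.0, 120.0]])  # fast, erratic, long-dwelling
print(detector.score_samples(typical), detector.score_samples(erratic))
```

A system like this could surface unusual activity for human review without ever attempting to identify who is moving, which is the point of the "avoid recognizing faces in the first place" question.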

4 — There’s an overwhelming amount of activity in AI investments, acquisitions, and policy-shaping. Microsoft recently announced a billion-dollar investment in the AI research lab OpenAI to advance the development of generalized AI, with broad potential implications for society and climate change (https://openai.com/blog/microsoft/). Google, one of the most active acquirers of startups, has continued to position itself as a “machine learning first” company (https://www.wired.com/2016/06/how-google-is-remaking-itself-as-a-machine-learning-first-company/). Even non-tech companies are starting to embrace AI as part of their core strategy (https://www.economist.com/special-report/2018/03/28/non-tech-businesses-are-beginning-to-use-artificial-intelligence-at-scale). Some AI startups have struggled to scale beyond soft landings, such as Apple’s acquisition of Drive.ai (https://www.axios.com/apple-buy-driveai-753da17d-60fe-44f9-84ff-1d2d82cd0b81.html). And some rollups may invite debate, such as Taser’s recent acquisitions in an effort to boost its ability to analyze vast police video data (https://www.geekwire.com/2017/two-taser-acquisitions-create-new-axon-ai-group-boosting-effort-analyze-vast-police-video-data/).

We’re also seeing more policy debate around AI. Large tech firms like Amazon, with more than 100 million Alexa devices gathering data, are prompting questions about how to handle user privacy, and their position that the ethical use of their tech is entirely up to society is being questioned.

5 — We don’t have to wait for fully autonomous cars to see the positive impact of AI on mobility. AI can start improving safety now by tackling the challenge of reducing the millions of crashes and resulting injuries and deaths. For example, automatic emergency braking (AEB) already reduces rear-end crashes by 50% and crashes with injuries by 56% (https://www.wsj.com/articles/self-driving-cars-have-a-problem-safer-human-driven-ones-11560571203). Might better AI-assisted driving, roads, and signals get that to an 80% reduction, or even 99%, without AI fully taking the wheel? But if we do need fully autonomous vehicles to reach 99%, the implications for public safety include questions like “who is responsible when a self-driven car kills someone in an accident?”

6 — The Stanford AI100 report, which we have written about before, “acknowledging the central role cities have played throughout most of human experience,” narrowed its focus to “large urban areas where most people live” and explored other interesting areas for AI at the nexus of cities. AI technologies could help address the needs of low-resource communities; for example, predictive models are being used to help government agencies address issues such as preventing lead poisoning in at-risk children and distributing food efficiently. Public safety and security agencies and professionals might improve public trust by using AI. Well-deployed AI prediction tools have the potential to provide new kinds of transparency about data and inferences, and may be applied to detect, remove, or reduce human bias rather than reinforce it. (https://ai100.stanford.edu/sites/g/files/sbiybj9861/f/ai100report10032016fnl_singles.pdf)

7 — Some frameworks and tools recently shared:

8 — Dealing with lack of data in machine learning https://medium.com/predict/dealing-with-the-lack-of-data-in-machine-learning-725f2abd2b92

9 — Notes on AI bias — Ben Evans (a16z) https://www.ben-evans.com/benedictevans/2019/4/15/notes-on-ai-bias

10 — “At present, computers are inherently insecure, and this makes them a poor platform for deploying important, high-stakes machine learning systems.” https://www.eff.org/deeplinks/2018/02/malicious-use-artificial-intelligence-forecasting-prevention-and-mitigation

The worm bricking IoT devices https://www.zdnet.com/article/new-silex-malware-is-bricking-iot-devices-has-scary-plans/

11 — In Hong Kong Protests, Faces Become Weapons https://www.nytimes.com/2019/07/26/technology/hong-kong-protests-facial-recognition-surveillance.html

Hong Kong Authorities Charge Dozens with Rioting, Igniting Clashes https://www.wsj.com/articles/hong-kong-authorities-charge-dozens-with-rioting-igniting-clashes-11564510703

12 — Facebook, Google and others come together to set benchmarks for AI https://www.engadget.com/2019/06/26/facebook-google-and-others-come-together-to-set-benchmarks-for/

13 — CB Insights: Artificial Intelligence Deals Tracker https://www.cbinsights.com/research-artificial-intelligence-startup-deals

14 — “How Cars Transformed Policing,” Boston Review, 3 Jun. 2019, http://bostonreview.net/law-justice/sarah-seo-how-cars-transformed-policing. Accessed 22 Jul. 2019.

15 — More related reading:
