Buying AI for X-ray Security? Answer These 4 Questions
Critical Infrastructure and Perimeter Security Systems Will Benefit from AI
Last month, Customs and Border Protection made the largest fentanyl bust in U.S. history when it seized more than 250 pounds of the drug at a port in Arizona. Limiting the flow of drugs into the U.S. is one facet of a broader debate dominating the U.S. news cycle: what is the best approach to border security? We are at a peculiar moment in which walls and fences are being considered at the same time that artificial intelligence (AI) makes it possible to deploy systems that are more powerful, scalable, and cost-effective.
Beyond the discussion of the U.S. southern border, critical infrastructure and perimeter security in general have historically relied on a combination of people, physical structures, and hardware. But advances in AI and machine learning over the past few years have brought many viable commercial products to market.
Given the novelty of using AI to secure critical infrastructure and perimeters, purchasing this technology for the first time can be intimidating or confusing. Here are four questions to ask when assessing and buying AI solutions.
1. How does the AI’s performance compare to human performance?
At Synapse, we sell an AI operator-assist solution that uses computer vision to automatically detect dangerous or illegal items in X-ray images. Our probability of detection can be as high as 98%. The first question some customers ask is, “Why doesn’t the system have a 100% detection rate?” It’s a fair question, as customers want to fully understand what they are purchasing.
However, that question needs to be asked alongside a second, equally important question.
“How do humans perform at this task?”
If humans have a 99% accuracy rate, then the value of the AI may require explaining. But if humans substantially underperform the AI, it’s likely the security system in question can be made more robust with human-AI collaboration.
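The arithmetic behind that claim is worth making explicit. The sketch below uses purely illustrative numbers (not Synapse's published figures) and assumes human and AI errors are independent:

```python
# Illustrative numbers only -- not Synapse's published detection rates.
human_detection = 0.90   # assumed human probability of detection
ai_detection = 0.98      # assumed AI probability of detection

# If human and AI errors are independent, an item is missed only
# when BOTH the operator and the AI miss it.
combined_miss = (1 - human_detection) * (1 - ai_detection)
combined_detection = 1 - combined_miss

print(f"Human alone: {human_detection:.1%}")   # 90.0%
print(f"AI alone:    {ai_detection:.1%}")      # 98.0%
print(f"Human + AI:  {combined_detection:.2%}")  # 99.80%
```

The independence assumption is optimistic: if the operator and the AI tend to miss the same hard items, the combined gain shrinks, which is one reason to train the AI specifically on items humans are likely to miss.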
Basing a purchasing decision in large part on quantitative performance data is helpful because it injects a measure of objectivity into the evaluation of the AI. Examining the AI’s errors in isolation or on an ad hoc basis can bias humans against AI, because the type of error an AI makes may be very different from the type of error a human makes. Our product, Syntech ONE, assists X-ray operators, but the operators still look at every image. As a result, the AI’s training has been optimized to detect difficult items a human has a high probability of missing. Over time, as the AI improves, it will eventually be able to fully automate the detection of certain items.
There are many examples of smart AI being “dumb.” In 2011, IBM Watson famously beat humans at Jeopardy! But along the way, it made what humans would consider to be an embarrassing mistake. In a “Name That Decade” question, one of the human contestants responded incorrectly with “What is the 1920s?” Watson proceeded to give an identical answer, asking, “What is the 1920s?” It’s safe to say most people would not have given the same wrong answer the previous contestant had given.
The temptation to view AI through a human lens is powerful. When possible, stress test humans and AI in controlled experiments to accurately compare performance.
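One hedged sketch of what such a controlled comparison could look like: run human operators and the AI on the same seeded test set, then check whether the difference in detection rates is statistically meaningful. The trial sizes and hit counts below are hypothetical.

```python
import math

def detection_rate_comparison(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-test for comparing detection rates
    (e.g. human operators vs. AI) on the same controlled test set."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical trial: 500 seeded threat images for each condition.
p_human, p_ai, z = detection_rate_comparison(hits_a=430, n_a=500,
                                             hits_b=485, n_b=500)
print(f"human {p_human:.1%}, AI {p_ai:.1%}, z = {z:.2f}")
# |z| > 1.96 -> the difference is significant at the 95% level
```

A test like this keeps the evaluation quantitative rather than anecdotal, which guards against judging the AI by its individual (and human-unlike) mistakes.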
2. Are you measuring success differently for hardware versus software?
Applying the same framework to assess a hardware product and a software product can be difficult or unnatural. Questions like "How powerful is the X-ray machine?" do not translate nicely to digital solutions. Hardware can be tested and evaluated in the lab, and that performance will transfer directly to the field. AI is different. The performance of AI is inextricably linked to the type and quality of the data it receives, and field conditions can never be exactly replicated in the lab.
Further complicating the comparison is that software solutions may solve a problem in a way the industry had not previously considered. X-ray machines are built exclusively as imaging products. The industry standard for measuring security performance evaluates variables like resolution, penetration, and contrast sensitivity. Better image quality, while helpful, will not address the cognitive limitations that contribute to operators missing dangerous or illegal items. AI addresses detection itself.
Ultimately, security solutions should target the core problem in the most direct way possible. This requires hardware and software working together, with software playing a critical role.
Evaluating the efficacy of different vendors’ software products will depend on the answers to two questions: One, how effectively does the solution solve the core problem? At checkpoints, the core problem is catching prohibited items. Two, how fast can the solution be developed? Determining how quickly product iteration can occur is key to deploying resilient and adaptive security solutions.
If properly developed, AI will have a speed advantage over other software, because improving a deployed model is often a matter of collecting new data and retraining rather than redesigning the product.
3. How vulnerable is the software solution to attack or manipulation?
Many traditional infrastructure security solutions, like physical barriers, armed guards, or X-ray machines, carry little to no cybersecurity risk. Naturally, customers are more concerned about cyberattack and manipulation when a solution relies heavily on software, especially software that uses AI.
Unfortunately, some news stories have done nothing to inspire confidence about how AI will perform in the field.
In 2016, Microsoft excitedly released to the world an AI Twitter chatbot named Tay. Users could tweet at Tay and expect human-like responses, with Tay specifically designed to tweet like a teenage girl. What set Tay apart from other chatbots was its ability to use the data from those tweets to improve in real time. With each tweet it received, Tay was supposed to become more intelligent. But in less than 24 hours, Tay was tweeting a range of racist, misogynistic, and homophobic material.
In this example of runaway AI or “drift,” the performance of the AI diverged in a clear and harmful way from the intentions of the software developers.
So how do customers know drift won’t happen to the AI security solutions they purchase?
There are two technical decisions companies can make to prevent this outcome, both of which we’ve done at Synapse:
1. Manually verify and test before updating algorithms
When Microsoft released Tay, the AI used "online learning," meaning it incorporated data from other Twitter users to change and (attempt to) improve its performance in real time. While Synapse could deploy a gun detection algorithm with the capability to improve on its own while deployed on X-ray machines, we do not enable our product to update automatically. Engineers thoroughly test every algorithm to see how it will perform and then release updated versions of the algorithms to infrastructure sites. These released versions cannot change after they are deployed.
Had Microsoft wanted, it could have released a version of Tay that did not change after interacting with Twitter users. That, however, was not the purpose of the project.
2. No Internet connection
Our systems are not connected to the Internet, and we place reasonable limitations on the interfaces that are externally accessible. This is the opposite of the Internet of Things (IoT), where all sorts of devices, from smoothie makers to pacemakers, are constantly connected.
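The first safeguard can be sketched in a few lines. Everything here is hypothetical (the names, thresholds, and metrics are illustrative, not Synapse's actual pipeline); the point is that a candidate model must clear an offline evaluation gate before release, and a released version is immutable.

```python
# Hypothetical release-gate sketch -- names and thresholds are
# illustrative, not Synapse's actual pipeline.
from dataclasses import dataclass

MIN_DETECTION = 0.95    # minimum detection rate on a held-out test set
MAX_FALSE_ALARM = 0.05  # maximum tolerated false-alarm rate

@dataclass(frozen=True)  # frozen: a released version cannot be mutated
class ModelRelease:
    version: str
    detection_rate: float
    false_alarm_rate: float

def gate(candidate: ModelRelease) -> bool:
    """Only candidates that clear both offline-testing thresholds
    are released to infrastructure sites."""
    return (candidate.detection_rate >= MIN_DETECTION
            and candidate.false_alarm_rate <= MAX_FALSE_ALARM)

candidate = ModelRelease("v2.4.0", detection_rate=0.97, false_alarm_rate=0.03)
print("release" if gate(candidate) else "hold for retraining")  # release
```

Contrast this with online learning: Tay's weakness was precisely that user input changed the model directly, with no human review between "new data" and "new behavior."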
4. Are you paying for hardware differently than software?
Businesses paying for hardware differently than they pay for software is nothing new. Software as a Service (SaaS), where businesses pay a monthly or annual subscription fee to receive a particular software service, has existed in various forms for decades. Customers pay a recurring fee because the systems are constantly being updated with new features. Popular examples of SaaS businesses include Salesforce, a Customer Relationship Manager (CRM) solution, and Workday, a human capital management tool.
Since physical infrastructure has traditionally been secured in a hardware-intensive model, customers considering AI may be less familiar with the idea of paying for a solution continuously rather than buying it once. A one-time purchase model makes sense for hardware, where the solution is static: customers expect to own a physical asset, in the form they bought it, for its useful life. An X-ray machine does not function better seven years after purchase than it did on day one.
At their core, AI solutions (specifically, deep learning solutions) are built to improve as they process more data. These improvements can be dramatic and occur over a time period as short as a few weeks or months. Synapse’s commercially deployed algorithms currently detect assembled handguns and sharp objects. Beginning in March, our algorithms will be able to detect handgun components (slides, magazines, cylinders, and receivers) and ammunition. This required just eight weeks of data collection and development time. To benefit from these updates and capture the full value of the solution as it changes over time, customers typically pay for AI on a recurring basis.
Purchasing AI solutions should also be considered differently from purchasing other software solutions, which are not customized for each user. With traditional software, customers may be able to pick which features they want, but a given feature does not differ between users. AI solutions are often localized for each type of customer venue. At Synapse, our detection algorithms are calibrated to the stream-of-commerce data that is unique to each critical infrastructure location. Courthouses may see more keys, while airports may see more tablets and laptops. Our algorithms reflect these nuances by incorporating small amounts of the target site's data into the algorithm that gets deployed.
Security is a cost-benefit analysis. Buyers of security solutions determine their tolerance for risk, and then they pay the appropriate dollar amount to mitigate that risk. Determining where those dollars get allocated can be difficult. By definition, novel technology lacks the performance record of technology that has been used for decades, but exercising too much conservatism carries its own risks. Buyers who over-index on the uncertainty of new technology will find themselves paying more for solutions that tolerate the status quo rather than change it.