Bringing Baggage Screening Into 2019
The baggage screening industry needs to adopt open architectures if it wants to take advantage of technology innovation.
Baggage screening at security checkpoints functions as a hardware-based, closed-network system, two key and interrelated features impeding imaging advancements. It is nearly impossible for outside developers to build applications on top of imaging hardware, and the limited interoperability between hardware manufacturers is often achievable only through cumbersome integrations, such as modifying an X-ray sensor or overlaying elements onto an existing monitor. Baggage screening stakeholders should look to the medical imaging community for a roadmap to open architectures, a paradigm shift that produced massive gains in functionality and competition, and reductions in cost, for hospitals, doctors, patients and medical imaging manufacturers alike.
While this post makes some specific references to airports, the problems and solutions discussed are applicable to any infrastructure that relies on baggage imaging for security (e.g. courthouses, police departments, schools).
The State of Checkpoint Security
Innovation in baggage screening at security checkpoints, such as airports, courthouses and other critical infrastructure, has been primarily hardware-based. X-ray machines have advanced from 2D single-view to 2D dual-view and gained spectral, dual-energy functionality. The most recent development, now being rolled out at airports globally, is computed tomography (CT) for cabin baggage, which produces 3D images. The Transportation Security Administration (TSA) has $71.5 million earmarked for the purchase and deployment of 145 CT units during 2019.
These hardware improvements have strengthened threat detection. Dual-view machines give operators an additional image and perspective from which to look for threats. CT technology, used in checked baggage since 2003, improves the accuracy of pre-existing automatic explosive-detection algorithms and helps operators spot threats that might otherwise go undetected in 2D X-rays.
Reliant on Humans
Despite these improvements in hardware and some limited automation capabilities around explosive detection, checkpoint security remains almost entirely reliant on human operators to view images and identify threats. This is a job that has earned the unfortunate distinction of being both tedious and difficult. The most dangerous threat items, explosives and firearms, are rarely seen by a given operator or at a given checkpoint. Operators may never see a real gun in a bag through an X-ray, which predisposes them to miss this infrequently-occurring threat.
In addition to the difficulty in identifying rare items, operators are responsible for an expanding Prohibited Items List (PIL). Research has shown that increasing the number of items operators are searching for increases their reaction time; given that operators are constrained by needing to process a stream of passengers, this results in increased miss rates.
As long as the structure of checkpoint security requires operators to manually review each passenger scan for threats, it will be difficult for threat detection to meaningfully advance.
Innovation in baggage screening has revolved around hardware because the hardware systems are closed-network: the data underlying the images from the X-ray machines is proprietary and encrypted, so only the machines' manufacturers have access to the raw data. As a result, it is extremely difficult for outside software developers to build applications on top of the hardware.
Without a mandate from the government to standardize data formats and create open systems, there is little incentive for outside software developers to try to build new solutions.
Because evolution on the software side is slow, improvement defaults to the hardware side. On their own, hardware solutions tend to be more expensive, slower and more incremental than software solutions. This imposes an artificial ceiling on innovation in baggage screening.
In many cases, the only way for outside developers to gain access to raw data formats from hardware manufacturers is through government competitions. In a recent challenge to reduce false alarm rates in passenger screening, the TSA released raw data for 1,000 images on which competitors could develop threat-detection algorithms. The TSA identified its problem as follows:
[TSA] purchases updated algorithms exclusively from the manufacturers of the scanning equipment used. These algorithms are proprietary, expensive, and often released in long cycles.
While the TSA’s competition is a step in the right direction, the approach is far too limited.
Checkpoint screening generates millions of images every day, making the industry a perfect candidate to benefit from the artificial intelligence revolution. To take full advantage of the gains in big data and computing over the past few years, the TSA should look beyond releasing a comparatively small dataset and focus on actions that change the status quo as quickly as possible.
For an analogy, consider the mobile phone market. Apple's release of the iPhone revolutionized mobile phones by introducing a global operating system highly accessible to outside developers who wanted to build applications for iOS. This translated into a dramatically improved experience for users, who were able to instantly search for and install apps across a huge variety of categories. The App Store has paid out over $100 billion to developers, and Apple became a $1 trillion company. Prior to the iPhone, improvements in mobile phones were largely hardware-based, and therefore incremental.
Counterintuitively, improvements in hardware remained relevant even after Apple’s app store grew to a size previously unimaginable. The software ecosystem did not commoditize the phone because many apps required better hardware. Hardware updates actually became more impactful because of the millions of apps they improved, which meant phone manufacturers continued to have a path to differentiation.
A revolution in security imaging that results in better hardware through a proliferation of software may seem fanciful, but a compelling precedent exists in medical imaging.
Medical Imaging as a Model: Digital Imaging and Communications in Medicine (DICOM)
History of DICOM
The medical imaging community previously faced challenges similar to checkpoint baggage imaging. Different machines (CT, 2D X-ray, MRI, etc.) from different manufacturers used proprietary data formats. There was no machine interoperability, and outsiders could not build applications on top of medical hardware.
Medical imaging faced the added constraint of a need to archive and centralize different types of scans long after they were taken, which was the forcing function for change in the industry. For example, doctors needed to be able to make side-by-side comparisons of an MRI and an X-ray or of two X-rays from different machines of the same patient over time. This was not possible in a closed network environment.
In 1985, the first version of what would become Digital Imaging and Communications in Medicine (DICOM) was released, and it went on to be adopted as the international standard for storing and exchanging medical imaging data. It was born out of a joint standards committee formed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA). Part of DICOM's success is attributable to its engagement of multiple stakeholders: today, membership of the standards committee includes dozens of machine manufacturers, users and general-interest groups.
Originally created for imaging protocols in radiology, DICOM has expanded over the years to standardize imaging protocols for any medical field using imaging, from dermatology to cardiology.
How DICOM Works
The standard defines both a file format and a network communication protocol. Medical imaging equipment saves patient images in the DICOM file format, and doctors view them with DICOM viewers, software applications that can display DICOM images. Machine interoperability and the free use of DICOM data are primarily responsible for the improvements in the medical imaging ecosystem over the past 25+ years.
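To make the file format concrete, here is a minimal, stdlib-only sketch of the explicit-VR byte layout a DICOM file uses: a 128-byte preamble, the "DICM" magic word, then tagged data elements. This is a teaching sketch, not a conformant implementation; real files also carry File Meta Information and many required elements.

```python
import struct
from io import BytesIO

def write_minimal_dicom(patient_name: bytes) -> bytes:
    """Build a toy DICOM-style byte stream: 128-byte preamble, 'DICM'
    magic word, and one explicit-VR data element (PatientName, tag
    (0010,0010), VR 'PN'). Not a conformant file; illustration only."""
    if len(patient_name) % 2:           # DICOM values are padded to even length
        patient_name += b" "
    buf = BytesIO()
    buf.write(b"\x00" * 128)            # preamble (all zeros)
    buf.write(b"DICM")                  # magic word
    # group, element (little-endian uint16 each), then 2-char VR,
    # then a 16-bit length for short-form VRs like PN
    buf.write(struct.pack("<HH", 0x0010, 0x0010))
    buf.write(b"PN")
    buf.write(struct.pack("<H", len(patient_name)))
    buf.write(patient_name)
    return buf.getvalue()

def read_patient_name(data: bytes) -> bytes:
    """Parse the stream produced above and return the PatientName value."""
    assert data[128:132] == b"DICM", "not a DICOM stream"
    group, element = struct.unpack_from("<HH", data, 132)
    vr = data[136:138]
    (length,) = struct.unpack_from("<H", data, 138)
    assert (group, element, vr) == (0x0010, 0x0010, b"PN")
    return data[140:140 + length].rstrip()
```

Because the layout is published and self-describing, any vendor's viewer can walk these tagged elements without knowing which machine produced the file — the property checkpoint screening lacks today.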
- Machine Interoperability: Machine interoperability allows digital medical imaging to be independent of device manufacturers. When new imaging equipment is installed and plugged into the network, it can immediately query the medical imaging archive, retrieve images created by other systems and display them. Conversely, if the new system produces images, those images can be reviewed on any other vendor's system already on the network. This interoperability requires no changes or modifications to the system software, resulting in a true plug-and-play environment.
- DICOM data is free to download and use: Because the DICOM standard is published and non-proprietary, outsiders, such as algorithm developers, have been able to create applications for medical imaging. DICOMweb enables anyone to retrieve, store and query DICOM images with industry-standard toolsets. Images can be analyzed remotely, stored offline and used to test the most cutting-edge data analysis techniques, like deep learning.
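A sense of how low the barrier is: a DICOMweb study search (the QIDO-RS service) is just an HTTP GET with query parameters, which can be composed with nothing beyond the standard library. The server URL and filter values below are hypothetical.

```python
from urllib.parse import urlencode

def qido_study_query(base_url: str, **filters: str) -> str:
    """Build a QIDO-RS study-search URL (DICOMweb). Filters are standard
    DICOM attribute names, e.g. PatientID, StudyDate, ModalitiesInStudy."""
    return f"{base_url.rstrip('/')}/studies?{urlencode(filters)}"

# Hypothetical archive: find this patient's CT studies.
url = qido_study_query("https://pacs.example.org/dicomweb",
                       PatientID="12345", ModalitiesInStudy="CT")
```

Any HTTP client can then fetch the matching images (via WADO-RS) from any conformant archive, regardless of which vendor's scanner produced them.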
At the same time, the DICOM standard pushes many device manufacturers to expand their software offerings. For example, GE created the GE Health Cloud in 2015, which gathers data from hundreds of thousands of GE imaging machines. This data is made publicly available, allowing doctors to collaborate online and independent software vendors to develop apps in GE’s cloud ecosystem.
Much like Apple's App Store attracted thousands of developers to build capabilities Apple itself was not actively developing, the open-architecture structure in medicine has greatly expedited improvements in software (especially algorithmic) capabilities. For example, the technology for automated CT scan processing already exists in medicine, while it is only just being discussed in the security industry. The driving factor behind this disparity is medicine's enormous software ecosystem, which can be thought of like Apple's App Store: thousands of developers are incentivized to pore over large datasets with the goal of creating tools that will be both profitable and mission-critical.
Such developments have helped hospitals and doctors’ offices get more mileage out of existing hardware because of constant software improvements and backwards-compatibility requirements. The accessible, multivendor environment has increased competition, reduced cost and made it possible for the latest technology trends to be quickly tested and incorporated into medical imaging.
While security X-ray imaging does not face the same forcing function as medicine's need to bring scans together offline (X-rays of scanned bags are traditionally never viewed again), it would be a mistake to conclude that this capability would not be immensely valuable to security imaging. Centralizing and processing image data would let algorithms learn not only from different machines at one airport but also from different models of machines at different airports, greatly increasing the chances that automated-detection algorithms can be successfully developed and deployed at speed.
DICOS (Digital Imaging and Communications in Security)
Given the success of DICOM and the many similarities between medical and security checkpoint imaging, Digital Imaging and Communications in Security (DICOS) is being pushed as a similar solution for checkpoint security. Battelle developed DICOS for the Department of Homeland Security (DHS), and it is currently used only narrowly, as the standard for CT explosive detection systems. Significant change must still occur before baggage imaging becomes a true open-architecture system.
Many vendors across both 2D X-ray and CT do not yet support DICOS on currently deployed cabin baggage systems, and DICOS converters for proprietary data formats are unavailable. Developers who want to create algorithms or apps that assist operators in real time must be able to access imaging data in real time, yet there is currently no viable way to access or export DICOS data from machines during normal operations. That inability is a nonstarter for any competitive app: checkpoint security is already under enormous pressure to process bags as quickly as possible, so an application that takes multiple seconds to access data, slowing screening down even further, will not succeed.
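To make the real-time requirement concrete, here is a hypothetical sketch of the kind of streaming interface an open DICOS architecture could expose to app developers. No vendor offers such an API today; every name here is invented for illustration, and the latency budget is an assumed figure, not a regulatory one.

```python
import queue
import threading
import time

class ScanStream:
    """Hypothetical real-time feed of scan images from an X-ray machine.
    Sketches the publish/subscribe interface an open architecture could
    offer, with a latency budget so slow consumers are flagged."""

    def __init__(self):
        self._q = queue.Queue()

    def publish(self, image_id, pixels):
        """Called by the machine as each bag is scanned (image_id=None stops the stream)."""
        self._q.put((image_id, pixels, time.monotonic()))

    def subscribe(self, handler, budget_s=0.5):
        """Deliver each scan to `handler(image_id, pixels, late)` on a
        background thread; `late` is True if delivery blew the budget."""
        def run():
            while True:
                image_id, pixels, t0 = self._q.get()
                if image_id is None:      # sentinel: end of stream
                    break
                late = (time.monotonic() - t0) > budget_s
                handler(image_id, pixels, late)
        t = threading.Thread(target=run, daemon=True)
        t.start()
        return t
```

The point of the sketch is the contract, not the code: a threat-detection app plugged into such a stream sees each scan within the checkpoint's latency budget, instead of waiting seconds for an export that today does not exist at all.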
In order for baggage screening to reap DICOM-level benefits, the government must mandate not only a standardized data format across vendors but also real-time access to that standardized data for developers.
There is significant value trapped inside baggage screening's closed ecosystem. DICOM demonstrates that the interests of many seemingly disparate stakeholders can be aligned. By combining open systems with an industry standard, the medical community effectively leveraged the talent and interest of the broader technology ecosystem to solve life-threatening problems in the most cost-effective way possible. It's time for checkpoint security to do the same.
Synapse Technology Corporation is an AI company using computer vision to help airport and venue security-checkpoint operators identify weapons and other potential threats.