Inter-application vulnerabilities and HTTP header issues. My summary of 2018 in Bug Bounty programs.
For the last couple of years I have been participating in various bug bounty programmes. These programmes are usually run by security-mature companies who put a lot of effort into making sure their applications are secure. So how is it even possible that they are still vulnerable to well-known issues like XSS or IDOR, which should no longer exist in 2018?
This article shares information about common “inter-application” vulnerabilities encountered during testing and emphasizes the need for appropriate security testing at each stage of the system life cycle.
Unfortunately, I’m not able to share all the technical details due to non-disclosure policies, but hopefully, after reading this article, you will be able to reinforce the vulnerability testing process in your company or identify more vulnerabilities in bug bounty programmes.
I’m a security guy who participates in public and private bug bounty programmes. Among others, I have participated in the programmes of United Airlines, ING, RBS, EU CERT and a number of private programmes run via the Synack platform (0x03 level).
As for my background: bug hunting is not my full-time job. On average, I probably spend no more than 5 hours a week on it. I’m not on Google’s 0x0A list, and I still get easily impressed reading other researchers’ write-ups.
Still, in the last 12 months I managed to report 27 unique web application vulnerabilities. The issues were identified in the web applications of well-known global companies, including e-commerce sites, security vendors, banks and airlines.
It is probably not a world record, but it is enough to share some thoughts on one specific type of issue which I kept finding regularly in all kinds of applications across various companies: “inter-application” vulnerabilities.
Also, since I’m going to provide some statistics on the identified vulnerabilities, this article may be considered an independent summary of 2018 in bug bounty programmes from a security researcher’s perspective.
To the point. I know that “testing philosophy” may sound somewhat far-fetched, but while searching for vulnerabilities in web applications I try to follow one rule, which has worked for me so far:
Security bugs happen where responsibilities for security testing are blurred or not defined.
What do I mean by that? I used to conduct vulnerability assessments and penetration tests for a number of international organizations. Frequently, the testing environment I was provided with did not allow me to test all of the application’s functionalities, as the testing scope included only one system and not the other systems interfacing with it.
I had to take for granted that:
- other systems interfacing with the tested system were also subject to security tests
- integration tests (including security aspects) were conducted
- functionalities which were not available during testing were tested later on.
Unfortunately, but predictably, reality was often different. Different applications were tested by different teams, none of which had a full understanding of all the other systems, interfaces and functionalities. The security aspects of integration tests are often overlooked in the testing process.
I hope you understand my approach now, as it is relatively straightforward:
If someone is not able to test certain application modules, or simply is not asked to test something, the chances of security vulnerabilities appearing in that component increase.
As I already mentioned, I have reported 27 unique vulnerabilities this year. However, to give you the full picture of bug bounty programmes: an additional 12 of my reports were rejected, and another 22 were marked as duplicates.
I quickly learned that reporting obvious vulnerabilities, such as site-wide Cross-Site Request Forgery or scanner-detectable Reflected Cross-Site Scripting, is simply a waste of time, as in 95% of cases they have already been reported by another researcher. Therefore, focusing on “inter-application” vulnerabilities, which cannot be detected so easily, makes much more sense to me.
What do I mean by “inter-application” vulnerabilities? All vulnerabilities which require meddling with at least two different systems (or system components) to be exploited successfully.
The distribution of identified vulnerabilities is presented in the chart below. Most of them belong to classes present in the last three editions of the OWASP Top 10. What is important to mention, though, is that for more than half of them the identification or exploitation process involved some other system, component or application.
Some of my thoughts on specific groups of issues are presented below:
Cross-Site Scripting — I love reporting XSSes in bug bounty programmes, as they are indisputable and fast to report. Maybe that is why I spend most of my time looking specifically for them.
So what do real-life “inter-application” XSS issues look like?
- A group of web pages shared a “Single-Sign-On-like” mechanism, so user details were replicated between the applications. Most of the time, both input filtering and output encoding were implemented. The fun fact was that on one of these web pages input filtering was not enabled, while on another output encoding was not enforced. Long story short: stored XSS.
- Registration in the application did not result in any email being sent to the end user, yet the user was still able to access all application functionalities. Two days after registration, the user received a kind of “late welcome email”, which included several hyperlinks with parameters that had not appeared anywhere before. The result: reflected XSS with the standard <script>alert(1)</script> payload.
In general, vulnerabilities originating from hyperlinks in emails are so common that I have strong doubts whether anyone even tests them.
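The SSO example above failed because each application assumed the other had applied the missing control. A defense-in-depth answer is to encode output at render time, which neutralises a replicated payload regardless of any upstream filtering. A minimal sketch (Python; the helper function and page template are hypothetical):

```python
import html

def render_welcome(display_name: str) -> str:
    # Output encoding at render time: neutralise markup regardless of
    # whether the application that originally captured the input
    # filtered it (the missing control in the SSO example above).
    return "<p>Welcome, {}!</p>".format(html.escape(display_name))

# A payload replicated from a sister application that skipped filtering:
print(render_welcome("<script>alert(1)</script>"))
# The angle brackets arrive in the page as inert &lt; and &gt; entities.
```

Either control alone would have stopped the attack; the finding existed only because each page implemented a different half of the pair.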
Access control issues (IDOR and Unauthorized Access)
Well, another group of vulnerabilities which should not exist in web applications in 2018. Again, most of the time they exist because things like “security integration tests” or “inter-application data analysis” are not performed.
Two more real-life examples identified this year:
- A desktop application allowed product identifiers to be enumerated via an API. At that point it was considered a low-impact vulnerability. However, it was later discovered that another application of the same organization returned an individual’s email address based on these product identifiers. Combined with the lack of a rate-limiting mechanism, this made it possible to dump a large number of customer email addresses in a short time.
- An IDOR vulnerability which was ultimately escalated to account takeover. The exploitation process started with a hyperlink sent to the user in an “invitation email” by one specific system component.
So again, two cases of “inter-application” vulnerabilities which would never be identified during regular web application testing in an isolated environment.
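Both findings boil down to a missing object-level authorization check: the application trusted that whoever presented an identifier was entitled to the record behind it. A minimal sketch of such a check (Python; the data model and field names are hypothetical):

```python
def fetch_invitation(current_user_id: int, invitation_id: int, invitations: dict) -> dict:
    """Object-level authorization: return the record only to its owner."""
    record = invitations.get(invitation_id)
    if record is None or record["owner_id"] != current_user_id:
        # Identical response for "missing" and "not yours", so identifiers
        # cannot be enumerated through differing error messages.
        raise PermissionError("invitation not found")
    return record

# Hypothetical store keyed by a guessable sequential identifier:
invitations = {101: {"owner_id": 1, "token": "abc"}}
print(fetch_invitation(1, 101, invitations))  # the owner succeeds
```

Without the ownership comparison, anyone who increments the identifier in the hyperlink gets someone else’s record, which is exactly how the account takeover above started.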
HTTP Header issues
This is one class of web application vulnerabilities whose existence I can somewhat understand. Although it breaks common web application security principles (such as never trusting user-provided input), it has not been described in publications such as the OWASP Top 10 or the OWASP Testing Guide. The most common example of this class is changing email message content via Host header modification. For more examples of these issues, you can refer to my OWASP Poland presentation or the great Black Hat talk by @albinowax.
Remember what I said about blurred responsibilities and the lack of testing in a proper environment?
The results of testing for these vulnerabilities may differ depending on your infrastructure and system architecture. An application for which no vulnerabilities were identified in your vendor’s dev environment may turn out to be insecure after you deploy it to your own servers. Moreover, adding or removing a load balancer, or an insecure CDN configuration, may also introduce these vulnerabilities.
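The usual root cause is that absolute links, for example in password reset emails, are built from the incoming Host header instead of server-side configuration. A minimal sketch of the safer approach (Python; the configuration value and hostname are illustrative):

```python
# Assumed server-side configuration value; the hostname is illustrative.
CANONICAL_BASE_URL = "https://app.example.com"

def build_password_reset_link(token: str) -> str:
    # Build absolute links from configuration, never from the
    # attacker-influenced Host (or X-Forwarded-Host) request header.
    # Whatever Host value the attacker sends, the link base stays fixed.
    return f"{CANONICAL_BASE_URL}/reset?token={token}"

print(build_password_reset_link("d41d8cd9"))
```

Because the canonical base lives in configuration, the behaviour is the same behind a load balancer, a CDN, or a bare server, which is precisely the property that breaks when the link base is derived from request headers.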
These issues are not well known, relatively easy to identify, indisputable and fast to report. These four things make them perfect bug bounty vulnerability candidates.
Sensitive information disclosure
Extensive Google searching allowed me to identify data and functionalities which should have been restricted to authorized individuals.
The two cases which resulted in accepted vulnerabilities were caused by a combination of two factors:
- Sensitive data passed in GET request parameters
- Incorrectly configured “indexing protection” (such as robots.txt or meta tags).
What do these two cases have in common? Hyperlinks containing sensitive data were included in email messages sent by the application. (Did I mention that I have strong doubts whether these email-sending functionalities are ever tested?)
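Note that robots.txt only discourages crawling; a URL leaked through an email hyperlink can still end up indexed if it is linked from anywhere crawlable. A response-level control such as the X-Robots-Tag header is more robust. A minimal sketch (Python; a hypothetical helper operating on a plain header dict):

```python
def harden_response_headers(headers: dict) -> dict:
    """Return a copy of the response headers with indexing and caching disabled."""
    hardened = dict(headers)
    # X-Robots-Tag stops compliant crawlers from indexing the page even
    # when it is reachable through a leaked hyperlink; robots.txt alone
    # does not prevent indexing of URLs discovered elsewhere.
    hardened["X-Robots-Tag"] = "noindex, nofollow"
    # Keep shared caches and proxies from retaining the sensitive response.
    hardened["Cache-Control"] = "no-store"
    return hardened

print(harden_response_headers({"Content-Type": "text/html"}))
```

The stronger fix, of course, is not to put sensitive data in GET parameters in the first place, since query strings leak into server logs, browser history and Referer headers.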
What can be done to avoid these vulnerabilities?
I decided to split these recommendations into “Basic” and “Advanced” steps, as implementing some of these suggestions may require a significant amount of work.
1. Reinforce your vulnerability testing process:
- During testing, analyze all the application interfaces and any non-standard ways of inputting data
- Make sure that your TEST environment is as similar to the PROD environment as possible
- Make sure that your integration tests cover security aspects such as input validation
- Make sure to follow the defense-in-depth principle (e.g. by implementing both input filtering and output encoding whenever possible)
- Make sure that the scope of your security tests includes all of the system components and functionalities. Pay special attention to:
  - email-sending functionalities and email messages
  - the account creation and password reset processes
  - authentication using 3rd-party services (Google, Facebook, etc.)
- Ensure that your security testing team:
  - understands the business processes supported by the application
  - is aware of all interfaces with the other applications and has sufficient support to perform security scenarios during integration tests
  - has a good understanding of the entire tested ecosystem
2. Regularly perform Google dorking to identify any cases of unintentional public access to sensitive data or functions.
3. Analyse any cross-system data dependencies which may allow unauthorized individuals to obtain sensitive information. Performing such an analysis requires detailed knowledge of the specific functions and data sets processed by each application.
4. Analyse the possibility of implementing cross-system segregation of duties to prevent attacks that exploit cross-system dependencies.