Software/Web Security Testing = Human Trials?

Okta "Oktushka" N.
Software Testing/QA
4 min read · Jun 28, 2020

Software/web security testing is any test activity that attempts to exploit any of the following properties of software/web services:

  1. Limit: A quantifiable boundary, e.g. maximum input length, maximum number of users per minute, etc.
  2. Restriction: A non-quantifiable boundary, e.g. restricting a user’s access to particular features or web pages, restricting access from outside the VPN, etc.
  3. Predictability: Expected behaviours/responses, e.g. getting the correct returned web page, the correct login response, the correct denial-of-login response for an invalid login attempt, the correct URL, etc.
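To make the first property concrete, here is a minimal sketch of probing a limit. The `accepts` function is a hypothetical stand-in for any real interface (a form field, an API parameter); in practice you would replace it with a call against the system under test.

```python
def accepts(data: str) -> bool:
    """Toy validator: pretend the service rejects inputs over 255 chars."""
    return len(data) <= 255

def find_length_limit(upper_bound: int = 1 << 20) -> int:
    """Binary-search for the largest accepted input length."""
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if accepts("A" * mid):
            lo = mid      # still accepted: the limit is at or above mid
        else:
            hi = mid - 1  # rejected: the limit is below mid
    return lo
```

A binary search like this finds the boundary in about 20 probes instead of a million, which matters when each probe is a real network request.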

The Analogy: Software/Web Security Testing = Human Trials?

On Software/Web = On Human

  1. Test the size of input the software services can handle = Test the amount of food a human can eat. You’ve probably heard someone say “I can’t take it anymore!”.
  2. Restrict a normal user from the full features of software/web pages = Restrict a human from entering other regions. This is when the authority tells you “This is private property! You can’t enter these premises!”.
  3. Request a web page = Ask somebody a question. Imagine somebody intimidating asks you “Who do you work for?”.
  4. Enter malformed input = Feed a person poisonous food.

Based on the above properties, security testing is basically the activity of messing around with software services, a.k.a. fuzzing, which is summarized in the figure below.

Figure 1. Fuzzing Activities
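The fuzzing loop in the figure can be sketched in a few lines. The toy `parse_record` parser below is an illustrative assumption; the point is the harness shape: generate junk, feed it in, and record whatever makes the target misbehave.

```python
import random

def parse_record(raw: bytes) -> tuple[str, str]:
    """Toy parser under test: expects b'key=value'."""
    key, value = raw.split(b"=", 1)          # raises if no '=' is present
    return key.decode(), value.decode()      # raises on invalid UTF-8

def fuzz(parser, rounds: int = 500, seed: int = 0) -> list[bytes]:
    """Feed random byte strings to `parser`; collect the inputs that crash it."""
    rng = random.Random(seed)
    crashers = []
    for _ in range(rounds):
        raw = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 20)))
        try:
            parser(raw)
        except Exception:
            crashers.append(raw)
    return crashers
```

Real fuzzers add smarter input generation and coverage feedback, but the generate-run-record skeleton is the same.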

Objectives of Security Testing

Fuzzing activities are meant to achieve the following objectives:

  1. Identify the input capacity of software services
  2. Identify the vulnerabilities that generate unsafe responses from software services
  3. Verify the correctness of security policies implementations
  4. Patch the discovered vulnerabilities

Categories of Vulnerabilities

I like to categorize security vulnerabilities into two easy-to-remember groups, namely ‘Visible’ and ‘Invisible’.

  • Visible Vulnerabilities: Vulnerabilities that can be tested visually through the GUI, e.g. brute-forcing a login page, or entering malformed data into the GUI’s input fields

Figure 2. Fuzzing on GUI
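A visible-vulnerability test like login brute-forcing is easy to sketch. Here `check_login` is a local stand-in for the real login form; in an actual test it would be an HTTP POST to the login endpoint (with permission!).

```python
from itertools import product
import string

def check_login(username: str, password: str) -> bool:
    """Stand-in for a real login endpoint; hypothetical credentials."""
    return (username, password) == ("admin", "abc")

def brute_force(username: str, alphabet=string.ascii_lowercase, max_len=3):
    """Try every candidate password up to max_len characters long."""
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if check_login(username, candidate):
                return candidate
    return None
```

Even this naive loop shows why rate limiting and lockout policies matter: three lowercase characters is only 18,278 attempts.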

  • Invisible Vulnerabilities: Vulnerabilities that cannot be tested directly through the GUI, e.g. testing how the web server handles a request packet’s header options, or entering malformed data into those header options. This can be done using cURL commands or GUI-based request-crafting tools such as Burp Suite, OWASP ZAP, Postman, JMeter, and the like

Figure 3. Fuzzing on Any Request Packet’s URL and Variable
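As a small illustration of the cURL approach, the snippet below generates one curl invocation per malformed header value. The target URL and payloads are hypothetical placeholders; the commands are only built here, not executed.

```python
import shlex

# Hypothetical target and header payloads for illustration.
TARGET = "https://example.test/login"
HEADER_PAYLOADS = [
    ("User-Agent", "A" * 8192),            # oversized value
    ("X-Forwarded-For", "127.0.0.1'--"),   # injection-style value
    ("Referer", "\r\nSet-Cookie: x=1"),    # CRLF-injection attempt
]

def build_curl_commands(url: str, payloads) -> list[str]:
    """Emit one `curl` command per malformed header (quoted for the shell)."""
    cmds = []
    for name, value in payloads:
        header = shlex.quote(f"{name}: {value}")
        cmds.append(f"curl -s -i -H {header} {shlex.quote(url)}")
    return cmds
```

The `-i` flag keeps the response headers in the output, which is usually where the interesting evidence shows up for this class of test.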

Vulnerabilities Checklist

A quick and sufficient checklist of security vulnerabilities may be compiled from two reputable web security bodies: OWASP (https://www.owasp.org) and WASC (http://www.webappsec.org).

Software Security Testing Methods

There are broadly four methods to test the security of a software or web app system:

  1. Active Scan: Automated fuzzing and automated descending crawl of web URLs for one or more vulnerabilities, typically done by security scanners like Burp Suite and OWASP ZAP, which normally ship their own anomaly-detection algorithms. The scan duration is usually lengthy, as it is mainly meant for comprehensive scanning, subject to the number of vulnerabilities checked and the crawl depth we specify. It can be started quickly just by specifying the starting URL to dive from.
  2. Passive Scan: Automated fuzzing with manual crawling of web URLs for one or more vulnerabilities. This is done by a web user manually browsing URLs while running one of the security scanners mentioned above as a web proxy. This type of scan is more flexible, as the user can jump from one parent URL to another, a.k.a. random diving. It is mainly meant for targeted URL scans, so it usually completes faster, subject to the number of vulnerabilities checked and the breadth and depth of the user’s browsing. It is also the best practice for quickly checking the security of a request packet’s header options, as summarized in the figure below.

Figure 4. Example of Poorly Set Security Headers
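A basic audit for poorly set security headers, like those in the figure, can be automated. The expected set below is a common baseline (drawn from OWASP secure-headers guidance); tune it to your own policy.

```python
# Commonly recommended response headers (a baseline, not an exhaustive list).
EXPECTED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
}

def audit_headers(response_headers: dict) -> set:
    """Return the recommended security headers missing from a response.

    Header names are compared case-insensitively, as HTTP requires.
    """
    present = {name.title() for name in response_headers}
    return {h for h in EXPECTED_HEADERS if h.title() not in present}
```

Feeding it the header dict from any HTTP response immediately flags what a scanner’s passive checks would report.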

3. Automated Request Transmission: This is similar to an active scan, except that instead of relying solely on the security scanner’s anomaly-detection algorithm, we improvise by sending our own creative fuzzing payloads. A geeky practice is to build a set of raw web requests containing our payloads using cURL commands, and then automate their transmission with a script, e.g. a shell script, bash script, batch script (on Windows), or the like. We should also record the raw web responses for analysis, for example using the same scripting language we use for transmission.
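One way to sketch this method: generate the shell script of cURL calls programmatically, with each response recorded to a numbered file for later analysis. The target URL and payloads below are illustrative assumptions.

```python
from urllib.parse import quote
import shlex

# Hypothetical target and fuzzing payloads for illustration.
TARGET = "https://example.test/search?q="
PAYLOADS = ["' OR 1=1 --", "<script>alert(1)</script>", "A" * 4096]

def make_fuzz_script(url: str, payloads) -> str:
    """Emit a shell script: one curl call per payload, each raw response
    (headers included, via -i) saved to a numbered file."""
    lines = ["#!/bin/sh", "# auto-generated fuzzing run"]
    for i, p in enumerate(payloads):
        target = shlex.quote(url + quote(p))   # URL-encode, then shell-quote
        lines.append(f"curl -s -i {target} > response_{i}.txt")
    return "\n".join(lines) + "\n"
```

Keeping the generated script and the captured `response_*.txt` files together gives you a reproducible record of exactly what was sent and what came back.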

4. Manual Checklist: Manually verifying the security procedures; suitable for vulnerability tests that cannot be automated, or are hard to automate, e.g. policy reviews, analysis of token-sequencing patterns, analysis of token-decoding patterns, etc.
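Token-sequencing analysis, for instance, often starts with a rough heuristic: collect consecutive tokens and look at the differences between them. The two token schemes below are illustrative; a tiny set of deltas is a red flag for predictability.

```python
import secrets

def sequential_deltas(tokens: list) -> set:
    """Differences between consecutive numeric tokens.

    A very small set of deltas suggests a predictable (e.g. counter-based)
    generation scheme; random tokens should produce widely varying deltas.
    """
    return {b - a for a, b in zip(tokens, tokens[1:])}

# A predictable scheme: counter-based session IDs (hypothetical weak server).
weak_tokens = [1000 + 17 * i for i in range(50)]

# A sound scheme: cryptographically random IDs.
strong_tokens = [secrets.randbelow(2**32) for _ in range(50)]
```

This is only a first-pass check; real token analysis (as done by tools like Burp Sequencer) applies proper statistical randomness tests.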

Choice of Security Testing Tools

The choice of security scanner may depend largely on the attributes below:

  • Accuracy benchmark by Web Application Vulnerability Scanner Evaluation Project (WAVSEP)
  • Usability or ease of use
  • Completeness of fuzzing features

As the saying goes, “Two heads are better than one”, so you may combine multiple scanners if they complement each other.

Conclusion

Scan ethically (ask for and get permission first!), scan safely (use a test environment, never the production environment!), fuzz adventurously, and report concisely (pie charts are pretty).


SW QA, Internet of Things (IOT) Consultant, Solution Lead, TM Forum Associate. Worked at IT firm in Melbourne. Got PhD in IT from Universiti Teknologi PETRONAS