The Tech Year in Review — 2017–18

Arun Raghu
Hive Intelligence
Nov 13, 2018

Over the course of the 2017–18 financial year, Hivint completed 241 technical security assessments for our clients, ranging from hardware testing to whole-of-organisation penetration tests. The purpose of this post is to share statistics from this year's testing activities, both to enrich the public domain data in this area and to allow us (and others) to identify and analyse trends in the evolving nature of assurance activities.

This is the second year that we've provided this reporting (for reference, last year's post is here: https://blog.hivint.com/hivints-2016-17-tech-year-in-review-5229a6f542b4). This post presents findings and observations from the current year and, where possible, comparisons with last year.

Engagements — Industries

During the past two financial years, Hivint delivered security assessments to Australian and overseas clients across a wide range of industries. The following chart provides a breakdown of engagements across industries for the 2017–18 financial year (total of 241 assessments) as well as the 2016–17 financial year (117 assessments).

It's clear that the number of technical security assurance activities has increased across all industries with the exception of the finance sector. This is partly to be expected given Hivint's growth (we approximately doubled the number of staff in our technical security teams across Australia), which naturally allows us to deliver more projects, but it also indicates ongoing industry demand for these services.

As a testament to the dynamic nature of the security assurance industry, our main client sectors changed in 2017–18 from the previous year. In the 2016–17 financial year our main clients were in the Technology, Finance and Government sectors. While engagements in the Technology and Government sectors saw strong growth in 2017–18, the number of engagements in the Finance sector decreased slightly compared to the previous year. Over the past year, we've seen a notable increase in the number and size of internal security testing teams at several of Australia's larger financial institutions, which may indicate that they are taking on more security assurance work internally rather than going to market for these services.

Engagements — Assessment Types

The 241 engagements were completed across a wide range of targets, using a range of assessment methods. To represent the types of engagements performed, we have categorised each engagement into one of eight types, based on the primary assessment activity only (for example, an external network security testing activity may include an element of phishing, but the engagement is classified as external network testing because that was its primary focus).

Additionally, because there are varying definitions across the industry as to what constitutes a vulnerability assessment versus a penetration test versus a red teaming activity, we have not categorised along those lines, instead categorising by assessment type.

Also note that, with the exception of source code reviews, this list only includes activities where hands-on testing was performed. Activities such as design assurance assessments (where no technical vulnerabilities are identified, but design priorities and recommendations are provided), as well as engagements such as incident response and forensics, have been excluded.

We recognise that this is far from a perfect method, but the intent is to provide a general indication as to what assessment types are the most common for our business.

Findings

Through the 241 assessments undertaken in the 2017–18 financial year, a total of 1380 findings were identified. Findings which were deemed to not present a security risk (i.e. informational findings) are not counted in this set.

To assess the risk presented by our security findings, Hivint employs an ISO 31000-aligned risk assessment framework, considering common likelihood, impact and overall risk criteria. The tables below provide a breakdown of the number of findings per severity rating across both financial years, as well as the percentage change from last year to this year.
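As an illustration of how a likelihood-by-impact risk matrix of this kind works, here is a minimal sketch. The scales and cell values below are assumptions for demonstration only, not Hivint's published criteria:

```python
# Illustrative only: the likelihood/impact scales and the matrix cells below
# are assumptions, not Hivint's actual ISO 31000-aligned criteria.
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
IMPACT = ["insignificant", "minor", "moderate", "major", "catastrophic"]

# Severity matrix indexed as [likelihood][impact]. "Extreme" appears only
# where likelihood is high AND the impact is catastrophic.
MATRIX = [
    ["Very Low", "Very Low", "Low",    "Medium", "Medium"],
    ["Very Low", "Low",      "Low",    "Medium", "High"],
    ["Low",      "Low",      "Medium", "High",   "High"],
    ["Low",      "Medium",   "High",   "High",   "Extreme"],
    ["Medium",   "High",     "High",   "High",   "Extreme"],
]

def risk_rating(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair to an overall risk severity."""
    return MATRIX[LIKELIHOOD.index(likelihood)][IMPACT.index(impact)]

print(risk_rating("likely", "catastrophic"))  # Extreme
```

The benefit of a fixed matrix like this is consistency: two assessors rating the same likelihood and impact always arrive at the same overall severity.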

With a 106% increase in the number of assessments (from 117 to 241) and a 92% increase in the number of findings across these assessments (from 720 to 1380) between last year and this year, it would be reasonable to expect an approximate doubling across all risk severity types. However, this is clearly not the case: the volume of Very Low findings increased disproportionately.

From a review of these findings, we consider this is most likely attributable to the significant increase in the number of web application security assessments undertaken in the 2017–18 financial year. The majority of Very Low findings were identified during web application security assessments (typically in areas such as unnecessary information disclosure), so the considerable growth in such assessments this year compared to the past two financial years largely explains the increase.

Across industries, the charts below illustrate the breakdown of findings (from Extreme down to Very Low risk severity) for both years.

Based on the 'raw' data, it's clear that the majority of findings for both years are Low risks, with the number of findings tapering off as the risk severity increases. The data also indicates that the relative increase in findings between the two years has remained consistent across all risk severities, with the exception of Extreme risk findings. This can be attributed to the strict requirements for a finding to be classified as Extreme: a finding is only rated Extreme if it has a very high likelihood of successful exploitation and would, if exploited, lead to catastrophic consequences.

It is acknowledged, however, that these 'raw' numbers may be skewed by the number and type of engagements performed for each industry: if the technology sector underwent the most engagements, it would reasonably have the highest number of findings. To reduce this skew, the following chart shows the average number of findings (for each risk rating) per engagement, across all clients in each industry sector during the 2017–18 financial year.
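The normalisation itself is simple division of each severity's total by the industry's engagement count. A minimal sketch, using made-up counts (the real per-industry data sits behind the chart, not in this post):

```python
# Hypothetical counts for illustration only; not Hivint's actual figures.
findings = {  # industry -> {severity: total findings across all engagements}
    "Technology": {"High": 12, "Low": 90},
    "Government": {"High": 6, "Low": 40},
}
engagements = {"Technology": 60, "Government": 25}

def per_engagement(industry: str) -> dict:
    """Average findings per engagement, per severity, for one industry."""
    n = engagements[industry]
    return {sev: round(total / n, 2) for sev, total in findings[industry].items()}

print(per_engagement("Technology"))  # {'High': 0.2, 'Low': 1.5}
```

Dividing through by engagement count lets a heavily tested sector be compared fairly against a lightly tested one.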

With this normalised data, most industry sectors follow a fairly predictable pattern: a very low number of Extreme issues, gradually increasing to a peak of Low risk issues, before tapering off to a smaller number of Very Low risk issues. More mature industry sectors (such as Financial Services, Government and Technology) showed a much sharper drop-off as risk severity increased; that is, for these sectors, High risk issues presented at a much lower rate as a proportion of all issues. We interpret this as reflecting the focus in recent years on closing off higher risk issues in industries such as Government, Finance and Technology, and the higher frequency with which tests have been completed against these systems.

A few outliers from this pattern are the sports, miscellaneous commercial and education sectors. The miscellaneous commercial sector is abnormal in that it has the highest number of Very Low risk findings and a comparatively low number of Low risk findings; the education sector has a higher number of High risk findings and a comparatively low number of Low risk findings; and the sports sector shows an overall higher volume of findings (per engagement) across most ratings.

Monthly Breakdown

Across the year (and similar to last year) there is a clear set of peaks by way of the number of security assessment engagements, and subsequently number of findings identified each month. The below chart presents the number of engagements and findings per month across the 2017–18 period as well as the 2016–17 period.

The data is in line with our expectations from working in this industry: the peak periods tend to occur in the lead-up to the end of the financial year and the calendar year, which (as noted last year) we attribute primarily to:

(1) the need to complete projects prior to the end of a forecast cycle (which in Australia is largely prior to the Christmas holiday period — “I need the project in by Christmas”), and

(2) the need to expend budget prior to the end of a financial cycle (which in Australia is primarily end of June).

Whilst overall engagements in each month of 2017–18 exceeded those in the equivalent month of the previous year, there are two areas of particular difference:

(1) There was a considerable increase in the number of engagements across approximately October 2017 through to January 2018. Whilst we haven’t done extensive analysis to identify the reason here, we consider that it may be attributed to entities seeking increased security assurance prior to mandatory data breach notification[i] requirements coming into effect in February 2018 (applicable to entities subject to the Commonwealth Privacy Act 1988). October is also cyber-security awareness month, which may serve to raise the profile for cyber-security activities (such as security assurance projects).

(2) The end-of-financial-year peak this year (April to June 2018) saw only a minor increase in overall engagements compared to the same period in 2017. We found that across this period there was an increase in the volume of non-assurance projects (e.g. security design activities), which would have come at the cost of a reduction in the number of security assurance projects.

Common Weaknesses

To categorise our findings, we follow the Web Application Security Consortium (WASC) Threat Classifications[ii] where possible. This allows us to remain consistent between engagements, and provides for a transparent view of categorisation.

Of the 1380 findings categorised this way, the top 10 WASC categories comprised 90% of all findings. The illustration below visually represents these top 10 weakness categories found across the year.
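The tallying behind a figure like "the top categories comprise 90% of findings" is a straightforward frequency count. A sketch using toy data (the real finding-by-finding data is not reproduced in this post):

```python
from collections import Counter

# Toy category assignments for illustration; real engagements would yield
# one WASC category label per finding.
categories = (
    ["Insufficient Authentication"] * 5
    + ["Application Misconfiguration"] * 4
    + ["Information Leakage"] * 1
)

counts = Counter(categories)
top = counts.most_common(2)  # the N most frequent categories
share = sum(n for _, n in top) / sum(counts.values())
print(f"top categories cover {share:.0%} of findings")  # 90%
```

Swapping `most_common(2)` for `most_common(10)` over the full data set gives the cumulative share reported above.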

The most common type of finding was Insufficient Authentication, followed closely by Application Misconfiguration. Insufficient Authentication typically occurs when an application lacks proper authentication controls, and includes issues such as the use of default credentials or susceptibility to username enumeration. Application Misconfiguration is generally encountered when an application is not configured with security in mind, and encompasses issues such as missing security headers or default files that disclose configuration details and application version information.
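To make the "missing security headers" class of misconfiguration concrete, here is a minimal sketch of that kind of check. The header list is a common industry baseline, not Hivint's actual checklist, and the example input is hypothetical:

```python
# A common baseline of security response headers; absence of any of these is
# the kind of issue often reported under "Application Misconfiguration".
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(response_headers: list) -> list:
    """Return the expected security headers absent from an HTTP response."""
    present = {h.lower() for h in response_headers}
    return [h for h in EXPECTED if h.lower() not in present]

# Hypothetical response that only sets Content-Type and X-Frame-Options:
print(missing_security_headers(["Content-Type", "X-Frame-Options"]))
```

In a real assessment the header list would be fed from an actual HTTP response rather than a hard-coded example.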

As with last year, the majority of findings relate to insecure configuration of the target system (application, operating system, network device, etc.), or to a failure to keep the system patched against known security issues. Again in line with last year, in a large number of our assessments the targets were off-the-shelf systems involving no custom development by the implementing organisation, only configuration. While some of these findings are outside the implementing organisation's control (vulnerabilities in the software itself), in the majority of cases, if the organisation followed vendor guidance for secure configuration of the system (and any underlying infrastructure) and kept the system patched, many of these findings would not exist.

The chart below breaks down the top ten finding categories across the 2017–18 and 2016–17 financial years, with the total number of findings shown as each bar and the percentage of the total shown above each bar.

The data above indicates that the top 10 finding categories have remained similar across the 2016–17 and 2017–18 financial years. An interesting change is that Application Misconfiguration was displaced as the most common finding by Insufficient Authentication in 2017–18. Whilst we aren't certain of the reason, we consider it could potentially be due to the introduction and availability of multiple new session management and authentication methods, which make it easier to deliver these functions in a secure manner.

Conclusion

The significant increase in technical security engagements, together with a relatively consistent number of findings per engagement, demonstrates the growth and continuing maturation of the security assurance field.

Anecdotally, we are starting to see the results of industry developments and legislation such as mandatory breach reporting in Australia, as well as the impacts on Australian entities of the EU's General Data Protection Regulation. These macro-level changes are gradually influencing the number and type of technical security activities being executed.

We hope that the data presented here has provided you with some useful insight into our year’s technical assessment activities. And if you would like to see more material that we’ve shared from our engagements — such as security test cases, cheat sheets, common security findings and more — sign up for a free subscription to our collaboration portal at https://portal.securitycolony.com/register.

Feel free to get in contact with us if you have any further questions at info@hivint.com

[i] Introduced through the Privacy Amendment (Notifiable Data Breaches) Act 2017 and defined as the Notifiable Data Breaches scheme. Additional details here: https://www.oaic.gov.au/engage-with-us/consultations/notifiable-data-breaches/

[ii] http://projects.webappsec.org/w/page/13246978/Threat%20Classification
