An Orientation to OSCAL in the DevSecOps Pipeline

Greg Elin
Oct 12, 2022


I’ve recently spoken with DevSecOps teams trying to understand how NIST OSCAL fits into the DevSecOps pipeline. The tooling and tutorials are still being developed, so here’s a quick orientation…

Figure 1 — NIST OSCAL Data Models Diagram

What is OSCAL?

OSCAL, short for Open Security Controls Assessment Language (website), is a vendor-neutral data standard for describing the cybersecurity and compliance posture of IT systems in machine-readable form.

OSCAL was developed over several years by the National Institute of Standards and Technology (NIST) with community input to help automate and accelerate various cybersecurity compliance processes. Version 1.0 was released in mid-2021. FedRAMP, which functions as a one-stop assessment and authorization service for vendors selling certain cloud-based IT services to federal agencies, is a big supporter.

What is the DevSecOps Pipeline?

Beginning around 2010, new tools and practices began transforming the manual work of configuring servers and testing/releasing software into a fully automated process known as Continuous Integration and Continuous Deployment (CI/CD). The practices involved in CI/CD combined elements of software Development with IT Operations and became known as “DevOps”. The term “DevSecOps” indicates that Security is also integrated into the highly automated, multi-stage pipeline of building, testing, and deploying IT systems.

Why OSCAL in the DevSecOps Pipeline?

Unfortunately, while our DevSecOps pipeline automatically builds, tests, and deploys the servers and software artifacts of our IT system in a matter of minutes, we still take months to manually create and update, in Word documents and spreadsheets, the compliance documentation and artifacts needed to make organizational risk management decisions. This slowness of manual compliance leads to either constantly delaying deployment for compliance or making risk management deployment decisions based on outdated compliance documentation.

A DevSecOps team looking to also automate compliance as part of the pipeline might — and should — begin to look into OSCAL as a tool to collect and collate security testing evidence in order to update compliance artifacts.

Which Part of OSCAL Relates to Evidence?

If you’re on a DevSecOps team trying to figure out how to use OSCAL to manage security assessment evidence produced in your CI/CD pipeline, you want to look for the “relevant-evidence” tags buried in the “Result > Observation” section of the Assessment Result model. This is where OSCAL 1.0 wants you to share your assessment evidence.

Figure 2 — NIST OSCAL Data Models Diagram

The “relevant-evidence” structure relies heavily on the “href”, “description”, “prop”, and “link” tags that are found throughout the NIST OSCAL models.

How to use these structures to capture and collate evidence makes more sense if we first understand the relationship among the OSCAL data models.

Understanding the OSCAL Model Relationships

NIST OSCAL 1.0 consists of seven data models, each associated with a compliance artifact intrinsic to the NIST Risk Management Framework (RMF):

  1. Catalog model to represent a controls framework (e.g., 800-53 rev5)
  2. Profile model to represent a selection of controls (e.g., Moderate Impact baseline)
  3. Component model (not shown in Figure 2) to represent reusable, security-providing components of a system (e.g., Single Sign-On service)
  4. System Security Plan model to represent system description and control implementation attestations (e.g., SSP)
  5. Assessment Plan model to represent a formal plan for testing security and compliance attestations (e.g., SAP)
  6. Assessment Result model to represent the outcome of the assessment (e.g., SAR)
  7. Plan of Action & Milestones model (not shown in Figure 2) to represent findings and planned resolutions (e.g., POA&Ms)

It’s critically important to understand that each model towards the right builds upon and references data within the models towards the left. This right-to-left referencing is the meaning of the left-pointing arrows in the diagram: the data builds up from left to right, but the referencing (such as which controls are in play) looks back from right to left.
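
To make those left-pointing arrows concrete, here is a minimal sketch of how each model points back at the artifact to its left via an import element. The file names are hypothetical, and required metadata and UUIDs are omitted for brevity:

    <!-- Profile: selects controls from a Catalog -->
    <profile xmlns="http://csrc.nist.gov/ns/oscal/1.0" uuid="...">
      <import href="NIST_SP-800-53_rev5_catalog.xml">
        <include-all/>
      </import>
    </profile>

    <!-- System Security Plan: implements the controls selected by a Profile -->
    <system-security-plan xmlns="http://csrc.nist.gov/ns/oscal/1.0" uuid="...">
      <import-profile href="moderate_baseline_profile.xml"/>
    </system-security-plan>

    <!-- Assessment Plan: tests the attestations made in a System Security Plan -->
    <assessment-plan xmlns="http://csrc.nist.gov/ns/oscal/1.0" uuid="...">
      <import-ssp href="our_system_ssp.xml"/>
    </assessment-plan>

    <!-- Assessment Results: reports against the tests defined in an Assessment Plan -->
    <assessment-results xmlns="http://csrc.nist.gov/ns/oscal/1.0" uuid="...">
      <import-ap href="generic_continuous_assessment_plan.xml"/>
    </assessment-results>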

OSCAL’s Relevant-Evidence Data Structure

Once the left-side data models have been populated for a given system, it becomes significantly easier to understand how to use OSCAL to manage evidence from the pipeline and include that evidence via the various “relevant-evidence” tags in the Assessment Result model.

I find the NIST OSCAL models page a good place to look at the models. From there you can navigate to the JSON outline and XML outline of the Assessment Results model. Figure 3 (below) is a screenshot of the XML outline of the Assessment Result model with arrows tracing the path to the “<relevant-evidence>” XML tag.

Figure 3 — NIST OSCAL Assessment Results Model v1.0.3 XML Format Outline

The basic idea is to store scan evidence output somewhere retrievable (e.g., an S3 bucket) and then point to the evidence via the “href” and/or “link” tags. Use the “description” tag to provide a human-readable summary of the evidence, the “prop” tag for further categorization, and the “remarks” tag for further explanation. The “prop” structure is intentionally abstract to provide flexibility: you can define a property name and a property value that is relevant to your operations. The other tags under “relevant-evidence” are fairly self-explanatory.
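
Here is a minimal sketch of what a populated entry might look like inside a result’s observation. The evidence URL, prop name and namespace, UUID, and timestamp are placeholders made up for illustration:

    <observation uuid="11111111-2222-4333-8444-555555555555">
      <title>Container Image Vulnerability Scan</title>
      <description>
        <p>Automated image scan run by the CI/CD pipeline during the build stage.</p>
      </description>
      <method>TEST</method>
      <relevant-evidence href="https://evidence-bucket.example.com/builds/1234/image-scan.json">
        <description>
          <p>Raw JSON output of the image vulnerability scan for build 1234.</p>
        </description>
        <!-- prop is intentionally abstract: define names and values meaningful to your operations -->
        <prop name="pipeline-stage" ns="https://example.com/ns/oscal" value="build"/>
        <link href="https://ci.example.com/pipelines/1234" rel="related"/>
        <remarks>
          <p>Scan completed with zero critical findings; see the linked pipeline run for logs.</p>
        </remarks>
      </relevant-evidence>
      <collected>2022-10-12T14:30:00Z</collected>
    </observation>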

Building a Proof of Concept of OSCAL in the DevSecOps Pipeline

DevSecOps teams are finding OSCAL confusing because the standard is big and the ecosystem is still pretty young. Governance, Risk, and Compliance (GRC) vendors are definitely starting to support OSCAL, but these incumbents and newer OSCAL tools have been working their way through the OSCAL models left to right. (The evidence tooling in these tools boils down to their APIs accepting pushed data. This includes GovReady.) Also, I don’t know of any security scanning tools familiar to security teams that support OSCAL in production.

Since we are going to need a bit of custom tooling in the fall of 2022 to integrate OSCAL into the DevSecOps pipeline, let’s wrap up this orientation with a tabletop exercise of building a proof of concept for collecting and collating evidence in the CI/CD pipeline using OSCAL.

Since it is unlikely that our organization currently has a fully operational OSCAL infrastructure in which the other OSCAL data models are nicely populated (which is what makes filling in an Assessment Result easy), it’s going to be necessary to create some scaffolding.

The scaffolding we want is a simple Assessment Plan that our Assessment Result can reference.

First, we create, by whatever means is convenient (probably by hand), the shells of a “Generic Continuous System Assessment Plan” OSCAL document and a “Generic Continuous System Assessment Result” OSCAL document. These are just shells with some initial data; they don’t need to be very detailed. Over time we can flesh out the OSCAL content more and more.
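
Here is a rough sketch of what those two shells might look like in XML, trimmed to required metadata and the import reference that ties them together. Titles, UUIDs, timestamps, and file names are placeholders, and the shells are not guaranteed to validate as-is:

    <!-- generic_continuous_assessment_plan.xml (shell) -->
    <assessment-plan xmlns="http://csrc.nist.gov/ns/oscal/1.0"
                     uuid="aaaaaaaa-0000-4000-8000-000000000001">
      <metadata>
        <title>Generic Continuous System Assessment Plan</title>
        <last-modified>2022-10-12T00:00:00Z</last-modified>
        <version>0.1</version>
        <oscal-version>1.0.3</oscal-version>
      </metadata>
      <import-ssp href="our_system_ssp.xml"/>
      <reviewed-controls>
        <control-selection>
          <include-all/>
        </control-selection>
      </reviewed-controls>
    </assessment-plan>

    <!-- generic_continuous_assessment_result.xml (shell) -->
    <assessment-results xmlns="http://csrc.nist.gov/ns/oscal/1.0"
                        uuid="bbbbbbbb-0000-4000-8000-000000000002">
      <metadata>
        <title>Generic Continuous System Assessment Result</title>
        <last-modified>2022-10-12T00:00:00Z</last-modified>
        <version>0.1</version>
        <oscal-version>1.0.3</oscal-version>
      </metadata>
      <import-ap href="generic_continuous_assessment_plan.xml"/>
      <result uuid="cccccccc-0000-4000-8000-000000000003">
        <title>Continuous Pipeline Assessment Results</title>
        <description><p>Populated automatically by the CI/CD pipeline.</p></description>
        <start>2022-10-12T00:00:00Z</start>
        <reviewed-controls>
          <control-selection>
            <include-all/>
          </control-selection>
        </reviewed-controls>
        <!-- observations with relevant-evidence get added here by the pipeline -->
      </result>
    </assessment-results>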

Second, we decide on one or two simple evidence items we can easily collect and collate from our pipeline for our proof of concept. Here we need to think about the evidence in relation to some specific assessment that would be defined in our Assessment Plan.

Third, we need to tweak our pipeline to store the collected and collated evidence somewhere we can reference by an href. Since this is a proof of concept, the storage of the evidence doesn’t need to be overly sophisticated. One thing we do want to think through at this stage is the extent to which we track iterations of the evidence along with iterations of the build and/or deployment. Are we going to keep only the latest evidence, or are we going to store historic evidence?
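
For example, with a hypothetical evidence bucket, a build-stamped object key preserves history while a rolling “latest” key keeps only the newest run; the href we record in the relevant-evidence entry follows whichever policy we choose:

    <!-- Option A: immutable, build-stamped evidence (full history retained) -->
    <relevant-evidence href="https://evidence-bucket.example.com/my-system/build-1234/image-scan.json">
      <description><p>Image scan output for build 1234.</p></description>
    </relevant-evidence>

    <!-- Option B: rolling "latest" evidence (only the most recent run kept) -->
    <relevant-evidence href="https://evidence-bucket.example.com/my-system/latest/image-scan.json">
      <description><p>Image scan output from the most recent pipeline run.</p></description>
    </relevant-evidence>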

With our scaffolding in place, and our pipeline storing evidence in a reference-able location, we can dive into the heart of the OSCAL content related to evidence.

Fourth, we go to our Assessment Plan OSCAL content and flesh out the associated “assessment subject”, “assessment asset”, and “assessment action” content for which our pipeline is collecting the “relevant-evidence”. We do this because we want risk and business decisions to be defined within the Assessment Plan.
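
As a rough sketch, the fleshed-out Assessment Plan content might look something like the following, where I use the plan’s activity and task structures to carry the “assessment action” idea. The control selection, subject, asset, titles, and UUIDs are all illustrative:

    <!-- Sketch of additions to generic_continuous_assessment_plan.xml -->
    <local-definitions>
      <activity uuid="dddddddd-0000-4000-8000-000000000004">
        <title>Scan Container Images During Build</title>
        <description><p>Run the image vulnerability scanner in the build stage and save its report.</p></description>
      </activity>
    </local-definitions>

    <!-- narrows the shell's include-all to the specific control(s) being tested -->
    <reviewed-controls>
      <control-selection>
        <include-control control-id="ra-5"/> <!-- e.g., RA-5, Vulnerability Monitoring and Scanning -->
      </control-selection>
    </reviewed-controls>

    <assessment-subject type="component">
      <description><p>The container images built by the pipeline.</p></description>
      <include-all/>
    </assessment-subject>

    <assessment-assets>
      <assessment-platform uuid="eeeeeeee-0000-4000-8000-000000000005">
        <title>CI/CD Pipeline Image Scanner</title>
      </assessment-platform>
    </assessment-assets>

    <task uuid="ffffffff-0000-4000-8000-000000000006" type="action">
      <title>Continuously Scan Images in the Pipeline</title>
      <associated-activity activity-uuid="dddddddd-0000-4000-8000-000000000004">
        <subject type="component">
          <include-all/>
        </subject>
      </associated-activity>
    </task>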

Fifth, we create some tooling to generate the “relevant-evidence” OSCAL snippet that represents the collected and collated evidence. One approach is to pass some metadata to a simple script that outputs a snippet of OSCAL, then push that snippet to a second script that updates our Assessment Result. Another approach is to retrieve our shell OSCAL Assessment Result document, treat it like a template, and populate the appropriate tags within it. Either way, we then write the updated Assessment Result to a retrievable location.
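
Whichever approach we take, the end state is the same: the generated observation, with its relevant-evidence entry, lands inside the result of our shell Assessment Result. Trimmed to just the nesting (UUIDs carried over from the earlier sketches):

    <assessment-results xmlns="http://csrc.nist.gov/ns/oscal/1.0" uuid="bbbbbbbb-0000-4000-8000-000000000002">
      <!-- metadata and import-ap as in the shell above -->
      <result uuid="cccccccc-0000-4000-8000-000000000003">
        <!-- title, description, start, reviewed-controls as in the shell above -->
        <observation uuid="11111111-2222-4333-8444-555555555555">
          <!-- the generated observation and its <relevant-evidence> entry, as sketched earlier -->
        </observation>
      </result>
    </assessment-results>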

From here, we run our pipeline, watch the automation create and move the content to the appropriate places, and work out the kinks. Before we know it, we’ve demonstrated the viability of continuously producing security and compliance evidence in our DevSecOps pipeline and how we can collate that data using the vendor-independent NIST OSCAL standard.

Welcome to OSCAL…

Greg Elin

Previously Chief Data Officer at a government agency, now creator of faster, better compliance tools at GovReady PBC.