Security Policies in SoCs

Ninadaekbote
Security Risks in Systems-on-Chip (SoCs)
5 min read · Apr 29, 2021

Modern SoC designs contain a large number of sensitive objects that must be protected from unauthorized access. The mechanisms that control access to such assets are governed by complex security policies. These policies affect multiple design blocks and may involve subtle interactions among hardware, firmware, the OS kernel, and applications. The implementation of security policies in an SoC design, often referred to as its security architecture, coordinates design components distributed across the device. Designing a security architecture involves a complex interplay of requirements for functionality, power, security, and validation.

Need for Security Policies

Modern embedded and mobile computing devices (e.g., smartphones, tablets, wearables, implants, and smart sensors) are increasingly used in a large number of personalized activities, including shopping, banking, providing driving directions, and tracking health and wellness. Consequently, these devices have access to a great deal of sensitive personal data, including bank and credit card information, email contacts, browsing history, location, and even intimate physiological information such as heart rates and sleep patterns. In addition to personalized end-user information, these devices contain highly confidential collateral from architecture, design, and manufacturing, such as cryptographic and digital rights management (DRM) keys, programmable fuses, on-chip debug instrumentation, and defeature bits. Malicious or unauthorized access to the secure assets in a computing device can result in identity theft and leakage of company trade secrets.

Role of Policies

Security policies identify the authentication, access, and protection requirements for the different assets in the design. At a high level, policies are typically instances of confidentiality, integrity, and availability requirements. The role of a policy is to provide a specification for the SoC system architect and designer of the protection and mitigation strategies that need to be implemented.

Types of Policies

  1. Access control: This common class of policies defines which IPs may access which data and functionality at different points of system execution.
  2. Information flow: Information flow policies go a step beyond access control by constraining what can be inferred from accessed data. They are implemented by a collection of access control policies together with additional constraints as necessary.
  3. Liveness: Liveness policies require that the functionality of the system is not compromised by the implementation of protection mechanisms.
  4. Time-of-check vs. time-of-use: These policies ensure that the mechanisms deployed to enforce access control cannot be bypassed, by requiring that the authenticated agent is really the agent accessing the asset it was authenticated for.
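The first policy class above can be sketched as a simple access-control table mapping (master, asset) pairs to permitted operations. This is a minimal illustrative model, not an actual SoC fabric implementation; the IP and asset names (`cpu`, `dma`, `crypto_keys`, etc.) are invented for the example.

```python
# Hedged sketch of an access-control policy: which IP masters may perform
# which operations on which assets. All names here are hypothetical.
POLICY = {
    ("cpu", "dram"): {"read", "write"},
    ("cpu", "crypto_keys"): set(),              # CPU may never touch raw keys
    ("crypto_engine", "crypto_keys"): {"read"},
    ("dma", "dram"): {"read", "write"},
    ("dma", "crypto_keys"): set(),
}

def is_allowed(master: str, asset: str, op: str) -> bool:
    """Return True iff the policy grants `master` the right `op` on `asset`.
    Unknown (master, asset) pairs default to deny."""
    return op in POLICY.get((master, asset), set())

print(is_allowed("cpu", "dram", "read"))                   # True
print(is_allowed("cpu", "crypto_keys", "read"))            # False
print(is_allowed("crypto_engine", "crypto_keys", "read"))  # True
```

Note the default-deny behavior for pairs absent from the table: in hardware, the analogous choice is whether the fabric rejects transactions that match no policy entry, which is the safer default.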

Threat Models

To ensure that an asset is protected, the designer needs, in addition to the security policy governing the protection requirements, a model of the power of the adversary to protect against. The effectiveness of virtually all security mechanisms in SoC designs today depends critically on how realistic the adversary model is. Building this model takes into account considerations such as whether the adversary has physical access to the system, and which components they can observe, control, modify, or reverse-engineer.

The following four categories describe common threat models.

  1. Unprivileged software adversary: This models the most common type of attack on SoC designs. The adversary is assumed to have no access to privileged information about the design or architecture beyond what is available to the end user or made public, but is assumed to be capable of identifying or “reverse-engineering” hardware and software bugs from observed anomalies. The underlying hardware is assumed to be trustworthy, and the adversary is assumed to have no physical access to the underlying IPs. Examples of such attacks include buffer overflow, code injection, BIOS infection, and return-oriented programming attacks.
  2. System software adversary: This is the next level of sophistication in the adversarial model. Here we assume that, in addition to the applications, the operating system itself may be malicious. The line between the system software adversary and the unprivileged software adversary can blur in the presence of operating system bugs that lead to security vulnerabilities: such vulnerabilities can be viewed either as unprivileged software adversaries exploiting an operating system bug, or as a malicious operating system itself.
  3. Naive hardware adversary: This refers to attackers who may gain access to the hardware itself. While such attackers may not have advanced reverse-engineering tools, they may be equipped with basic testing equipment. Common targets include exposed debug interfaces and glitching of control or data lines. Embedded systems are often equipped with multiple debug ports for quick prototype validation, and these ports often lack proper protection mechanisms, mainly because of limited on-board resources.
  4. Hardware reverse-engineering adversary: In this model, the adversary is assumed to be able to reverse-engineer the electronic implementation to identify on-chip secrets. In practice, such reverse-engineering may rely on sniffing interfaces, as discussed for naïve hardware adversaries.
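One common mitigation against the naive hardware adversary described above is gating the debug interface behind a production fuse and a challenge-response unlock. The sketch below models that idea in software; the fuse flag, key, and HMAC-based handshake are illustrative assumptions, not a description of any particular vendor's debug protocol.

```python
# Hedged sketch: a fuse-gated, challenge-response debug port. Invented
# key and protocol details, purely to illustrate the protection concept.
import hmac
import hashlib
import secrets

class DebugPort:
    def __init__(self, unlock_key: bytes, production_fuse_blown: bool):
        self._key = unlock_key
        self._fused = production_fuse_blown   # blown on production parts
        self._unlocked = False

    def challenge(self) -> bytes:
        """Issue a fresh nonce the external debug tool must sign to unlock."""
        self._nonce = secrets.token_bytes(16)
        return self._nonce

    def unlock(self, response: bytes) -> bool:
        """Unlock only if the fuse permits debug and the HMAC response matches."""
        if self._fused:                        # production parts: debug stays off
            return False
        expected = hmac.new(self._key, self._nonce, hashlib.sha256).digest()
        self._unlocked = hmac.compare_digest(expected, response)
        return self._unlocked

# A tool that knows the key can unlock a pre-production part...
key = b"dev-key"
port = DebugPort(key, production_fuse_blown=False)
resp = hmac.new(key, port.challenge(), hashlib.sha256).digest()
assert port.unlock(resp)

# ...but the same handshake fails once the production fuse is blown.
fused = DebugPort(key, production_fuse_blown=True)
resp = hmac.new(key, fused.challenge(), hashlib.sha256).digest()
assert not fused.unlock(resp)
```

The fresh nonce per unlock attempt also illustrates the time-of-check vs. time-of-use concern: a recorded response from an earlier session cannot be replayed against a new challenge.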

Designing a Security Policy

The following steps are typically undertaken when creating a security policy.

  1. Asset Definition: Identify all system assets that require protection. This requires identifying the IPs and the points of system execution where the assets originate.
  2. Policy Specification: Identify the policies that govern each asset, taking into consideration the threat models for the system.
  3. Attack Surface Identification:-For each asset, identify potential adversarial actions that can subvert policies governing the asset. This requires identification, analysis, and documentation of each potential entry point.
  4. Risk Assessment and Analysis: This is composed of five components: (a) damage potential; (b) reproducibility; (c) exploitability, i.e., the skill and resources required by the adversary to perform the attack; (d) affected systems, e.g., whether the attack can affect a single system or tens of millions; and (e) discoverability. In addition to the attack itself, one needs to analyze the likelihood that the attack can occur in the field and the motives of the adversary.
  5. Threat Mitigation: Once the risk is considered substantial given the likelihood of the attack, protection mechanisms are defined, and the analysis must be performed again on the modified system.
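The five risk components listed in step 4 correspond to the well-known DREAD rating scheme, and can be combined into a single comparable score. The plain average and the 1-10 scale below are illustrative choices, not a mandated standard; the sample ratings are hypothetical.

```python
# Hedged sketch: a DREAD-style risk score over the five components from
# step 4. The averaging and 1-10 scale are illustrative assumptions.
def dread_score(damage, reproducibility, exploitability,
                affected, discoverability):
    """Average the five DREAD components, each rated 1 (low) to 10 (high)."""
    components = [damage, reproducibility, exploitability,
                  affected, discoverability]
    if not all(1 <= c <= 10 for c in components):
        raise ValueError("each component must be rated between 1 and 10")
    return sum(components) / len(components)

# A hypothetical debug-port attack: high damage, but hard to mount at scale.
score = dread_score(damage=9, reproducibility=6, exploitability=4,
                    affected=3, discoverability=5)
print(score)  # 5.4
```

Scoring each candidate attack this way gives the threat-mitigation step (step 5) a concrete threshold to act on: attacks whose score exceeds the threshold get protection mechanisms, after which the analysis is repeated on the modified system.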
