Singularity is now SOC 2® certified!

Jeff Burka · Published in Singularity · Apr 24, 2023 · 4 min read

[Image: circular blue AICPA badge, "SOC for Service Organizations"]

Singularity is now SOC 2® certified! But why, and what does it mean? While we’ve always held data and customer security as our highest priority, large organizations with highly sensitive data, like utilities and ISOs, require third-party proof of effective and comprehensive security measures. A few months ago, we engaged a licensed SOC 2 auditor to certify our processes and practices, and we now hold the first of these certifications, which will be refreshed at least annually.

We have third-party approval of our practices in three areas: Security, Availability, and Confidentiality. In short, our services are secure against unauthorized access, available whenever you need them, and handle confidential customer information to the highest standards.

Below, we outline our thinking for each of these three areas. This is not a comprehensive accounting of all our practices, but rather a few highlights of the core principles our infrastructure is designed around. If you’re interested in more detail, please reach out to us for our full SOC 2 report.

Security

The most fundamental concept in security is minimizing the attack surface: you don’t need to secure an entrance that doesn’t exist. To this end, nearly all of our services run in a private network with no exposure to the public internet, and therefore no public interface that must be secured. To name some of the exposure surfaces we locked down: our internal network does not accept ingress traffic from the public internet; none of our servers has a public IP address; and a monitored firewall logs any attempted external access. These are just a few of the ways we make sure there is no way to contact our internal system unless you are already inside it.
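
As a rough illustration of the kind of check this enables (a sketch, not our actual tooling), here is how one might scan an AWS account for security groups that accept ingress from anywhere and for instances that were given public IP addresses; the region name is a placeholder.

```python
# Hypothetical audit sketch: flag internet-facing ingress rules and
# public IPs in an AWS account. Illustrative only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Flag security groups that allow ingress from anywhere.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        open_ranges = [r for r in rule.get("IpRanges", [])
                       if r.get("CidrIp") == "0.0.0.0/0"]
        if open_ranges:
            print(f"OPEN INGRESS: {sg['GroupId']} ({sg['GroupName']})")

# Flag instances that have been assigned a public IP address.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if instance.get("PublicIpAddress"):
            print(f"PUBLIC IP: {instance['InstanceId']} -> {instance['PublicIpAddress']}")
```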

Another essential, and often overlooked, layer is securing employee access to critical systems. We can have the most secure server infrastructure in the world, but if an attacker gains access to an email account, a code repository, or a cloud provider, it’s still game over. Given the power gated behind these logins, a simple username-and-password combo is not enough to guarantee security. Passwords can be stolen or cracked, networks can be compromised, eaves can be dropped. We require multi-factor authentication for all access to any kind of sensitive data, meaning logging in requires both secret knowledge and physical access to an object in the employee’s possession.
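
To make the second factor concrete, here is a minimal sketch of the TOTP scheme behind most authenticator apps, using the pyotp library; it illustrates the concept rather than our specific setup.

```python
# Minimal sketch of the TOTP ("authenticator app") second factor,
# using pyotp. Illustrates the concept, not our actual configuration.
import pyotp

# Enrollment: generate a per-user secret and share it with the
# employee's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the password alone is not enough; the server also checks a
# code that only a device holding the secret can produce right now.
code_from_device = totp.now()            # what the employee's phone shows
assert totp.verify(code_from_device)     # what the server verifies
```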

The final line of defense is encryption at rest. Like many software companies, we rely on our cloud provider (AWS) to keep their data centers secure and correctly implement the services they offer. But what if AWS itself exposes a vulnerability, or a rogue actor gains access to hard drives with our data on them? Fortunately, everything is encrypted, so an attacker would need our encryption keys as well as the data itself to get anything useful. Perfect security is an unattainable goal, but this multi-layered approach gives us the best defense against the broadest range of possible attacks.
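
For a sense of what “encrypted at rest” looks like in practice, here is a hedged sketch of enabling default server-side encryption on AWS; the bucket name, key alias, and region are placeholders, and this is not a description of our exact configuration.

```python
# Sketch: turn on server-side encryption by default so data is
# encrypted at rest without application changes. Names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # placeholder name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-key",  # placeholder alias
            }
        }]
    },
)

# New EBS volumes can likewise be encrypted by default account-wide.
ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
ec2.enable_ebs_encryption_by_default()
```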

Availability

Counterintuitively, achieving high availability is not really about keeping your services online; it’s about assuming they’ve already gone down and acting accordingly. We always have multiple copies of critical servers and databases running, so if a service encounters an error or becomes overloaded, the next one up is ready to take the baton.
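
As a toy illustration of that baton-passing, here is a sketch of a client that tries each replica in turn instead of assuming any single server stays up; the endpoints and the requests-based health check are hypothetical.

```python
# Toy failover sketch: try each replica in turn rather than trusting
# any single server to stay up. Endpoints are hypothetical.
import requests

REPLICAS = [
    "https://api-1.example.internal",
    "https://api-2.example.internal",
    "https://api-3.example.internal",
]

def fetch_with_failover(path: str) -> requests.Response:
    last_error = None
    for base in REPLICAS:
        try:
            response = requests.get(base + path, timeout=2)
            response.raise_for_status()
            return response  # first healthy replica takes the baton
        except requests.RequestException as err:
            last_error = err  # this replica is down; try the next one
    raise RuntimeError("all replicas unavailable") from last_error
```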

Continuing down this enlightened line of thinking, we must assume that even redundant systems will fail. How do we recover from a disaster where whole networks become unreachable or a database is somehow deleted? We’ve never experienced this level of catastrophe, but we have planned for it, and we run regular tests of our processes to ensure we could recover if it ever happened. One essential ingredient of this preparation is ubiquitous backups: all critical data and source code are automatically backed up and stored in a different location from the original copy, ensuring we’ll always be able to recover customer data and functionality even in a crisis.
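
One simple form an off-site backup step can take (a sketch under assumed names, not our actual pipeline) is copying each backup object to a bucket in a different AWS region:

```python
# Sketch of an off-site backup step: copy each new backup object to a
# bucket in another AWS region. Bucket names/regions are placeholders.
import boto3

SOURCE_BUCKET = "example-backups-us-east-1"  # placeholder
DEST_BUCKET = "example-backups-us-west-2"    # placeholder

# The client lives in the destination region; copy_object pulls the
# object across regions server-side (works for objects up to 5 GB).
s3_west = boto3.client("s3", region_name="us-west-2")

def replicate_backup(key: str) -> None:
    s3_west.copy_object(
        Bucket=DEST_BUCKET,
        Key=key,
        CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
    )
```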

Beyond the protection from errors and misfortune described here, we also strive to eliminate even planned downtime. Service and infrastructure updates are designed to happen invisibly, with no interruptions and no customer impact. As a user of any Singularity product, you won’t have to trade reliability for continuous improvement.
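
Conceptually, a zero-downtime rolling update can be as simple as the loop below: update one replica at a time, and advance only after it passes a health check, so capacity never drops to zero. The host names and the deploy_to helper are hypothetical.

```python
# Conceptual sketch of a rolling update with no visible downtime.
# All hosts and helpers here are hypothetical.
import time
import requests

SERVERS = ["app-1.internal", "app-2.internal", "app-3.internal"]  # placeholders

def is_healthy(host: str) -> bool:
    try:
        return requests.get(f"http://{host}/health", timeout=2).ok
    except requests.RequestException:
        return False

def rolling_update(deploy_to) -> None:
    """deploy_to(host) is a hypothetical function that updates one server."""
    for host in SERVERS:
        deploy_to(host)                 # update a single replica...
        while not is_healthy(host):     # ...and wait until it's serving again
            time.sleep(1)
        # only now does the loop advance to the next replica
```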

Confidentiality

The last and arguably most important factor in this discussion is minimizing human error. All this focus on software design and infrastructure hardening is useless if an employee emails a spreadsheet to the wrong person or leaves a critical webpage unsecured. To guard against this, we adhere to two principles: least privilege, meaning personnel who don’t absolutely need a particular piece of data don’t have access to it at all, and classification, meaning all data is assigned a level of sensitivity corresponding to a set of well-defined procedures.

Put simply, we make it as difficult as possible for sensitive information to fall into the wrong hands. Most employees don’t have access to it at all, and those who do are trained on how to handle it and bound to policies that strictly limit where it can be stored.
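
A toy model of these two principles together might look like the following, where every record carries a classification and access defaults to denial; the names and levels are illustrative, not our actual policy.

```python
# Toy model: data carries a classification, and access is denied
# unless clearance meets it. Levels are illustrative placeholders.
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2

def can_access(employee_clearance: Sensitivity, data_level: Sensitivity) -> bool:
    # Least privilege: the default is denial; access requires an
    # explicit, sufficient clearance, not the absence of a restriction.
    return employee_clearance >= data_level

assert can_access(Sensitivity.INTERNAL, Sensitivity.PUBLIC)
assert not can_access(Sensitivity.INTERNAL, Sensitivity.CONFIDENTIAL)
```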

In the same spirit as minimizing the attack surface described above, the most confidential data is the data you don’t even have. When we’re done with a piece of customer data, we destroy it.
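
One way to enforce “destroy it when we’re done” by policy rather than memory is a storage lifecycle rule; here is a hedged boto3 sketch with a placeholder bucket, prefix, and retention window.

```python
# Sketch: an S3 lifecycle rule that automatically expires customer
# data after a retention window. All names/values are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-customer-data",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-customer-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "customer-data/"},  # placeholder prefix
            "Expiration": {"Days": 30},              # placeholder window
        }]
    },
)
```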

That’s a lot of information, but it’s just scratching the surface of what we work on every day to keep your data secure. If you’d like to see the full report or have any questions, please reach out to info@singularity.energy.
