Prepare Your App to Pass AppExchange Security Review

April 2023 update — This post has been an incredibly popular reference for people seeking a single point of reference on all things Security Review related. I’ve created a revised edition that covers the many changes in the Security Review ecosystem since this blog was originally released; it is available at the link below. The revised edition is the one that will be updated with the latest changes going forward. I will keep this original version here for reference. Jonathan

Salesforce Developers Blog: Prepare Your App to Pass AppExchange Security Review

You’ve built your app, it works great, and now you’re ready to release it to the world. But before you can publish your creation on the AppExchange, your app must pass security review. At Salesforce, nothing is more important than the trust of our customers. The security review process is here to validate that your app can be trusted with customer data.

This blog post includes all the information you need to ensure your app passes the AppExchange security review process.

What is this document?

This document is a collection of secure development best practices and a checklist of common pitfalls encountered during the AppExchange Security Review (SR). You can save time ahead of your security review by making sure you avoid these common issues. It is not an exhaustive list, and the field of security is constantly evolving. To avoid duplication, this document will link out to the authoritative source for a given topic.

Who is this document intended for?

It is intended for architects, developers, testers, product owners, and anybody else who is involved in the development or submission of a given package.

Why did my package pass the initial scan and not the deeper security review?

The Security Review is a complex, multi-layered process involving several tools and security specialists. The initial automated code-checking tools provide a high-level assessment of your code, but they cannot find all of the security vulnerabilities that a manual review will, as this is a constantly evolving field that requires human expertise.

I’ve fixed all the findings from the security review, but now I’ve failed again!

The Security Review report is not an exhaustive checklist of things to fix, especially if there are a large number of issues with a submission. It lists the classes of vulnerabilities found in your application, but not every instance where they occur. The role of the Security Review is to validate that your package meets current security best practices, has no known vulnerabilities, and is generally safe to promote to the AppExchange, where trust is our #1 priority. The Security Review is not there to find all the security issues for you; that is something that should be built into your development process and reviewed regularly.

I’ve submitted my package for review, when will it be reviewed?

There are a couple of steps to this — once you submit, the package is subject to some initial checks before it is placed on the bigger SR queue. This catches any major submission mistakes, such as submitting the wrong version of the package, and gives rapid feedback that the package needs to be resubmitted. Once past this stage, it goes onto the main SR queue. Due to the labour-intensive nature of performing the SR, there is a queue time of 6–9 weeks.

We launch tomorrow / next week, we need it reviewed now!!

The security review process takes time, and you need to factor this into your development and release cycle. In exceptional circumstances, priority can be given to a particular review, but please remember that this really does mean exceptional, and it still requires a Security Reviewer to become available, which could take multiple days — our reviews are thorough!

Our package failed Security Review, we’ve resubmitted, can we skip the queue?

When you submit for a retest, you already have a tester assigned, which in a way is skipping the queue time and going straight to your tester’s queue.

Our package has failed multiple times!

There are multiple reasons that this could happen:

  1. Some issues fixed, but not all — remember, we don’t highlight everything, so check your code for all instances of an issue
  2. New code, new issues — has new code been added that introduces additional issues?
  3. Misunderstanding of the issues raised, meaning issues are not remedied correctly
  4. Bad security design — are there instances where your code is insecure by design? It may require re-architecting
  5. Have you thoroughly reviewed the solution yourself? The purpose of the security review is to confirm that the solution was designed with security in mind; the reviewers are not going to secure it for you.

I’d like to speak to the reviewer

This can be arranged, generally with a minimum of three weeks’ notice, as the service is popular. Security Reviewers have office hours, and teams can book a session with them to discuss the findings of a review. If your package has failed a couple of times, it may be worth booking an appointment for just after the next review so you can speak to a security engineer.

False Positives

There are times when you have a legitimate reason for doing something in a certain way and have taken measures to ensure the security of the data. These instances should be clearly marked in code, and comments should be provided to avoid ‘false positives’. However, if you find that you have a lot of exceptions in your code, you may need to consider whether your code is following an anti-pattern and needs re-architecting.

Security is a State of mind, not a tick box

The purpose of the Security Review is to validate that you’ve taken all the necessary precautions. Many partners have their first package fail Security Review the first time round, and they make it their priority to not let this happen again, so they aggressively security review the code of all their packages and dependencies such as external web services. This is a win-win for everybody: it makes the Security Review process faster, and the partner is proactive about security — this is the ultimate goal.

Are there any tools we can use?

There are many tools available, each with its own focus. Use them to aid security analysis, but remember that security is a mindset and an explicit architectural process; while tools can spot particular patterns, anti-patterns, and other issues, they will never have the full understanding of what the solution is trying to do, or the mindset of a human reviewer.

  1. SFDX Scanner — The Salesforce CLI Scanner plug-in is a unified tool for static analysis of source code. It is useful to use this as part of your ongoing development process. Blog: Improve Your Code Quality with the Salesforce CLI Scanner
  2. Chimera — This is a cloud-based runtime scanner service that can be used to scan third-party websites. Note that Chimera is only for websites that you own or can upload a token to
  3. Source Code Scanner (Checkmarx) — Source Code Scanner lets you schedule scans, download scan reports, search all the scans for your org and manage scan credits for your orgs.
  4. Checkmarx FAQ
  5. ZAP — Zed Attack Proxy is an open-source web scanner from OWASP project and can be used to scan third-party websites.
  6. Common Vulnerabilities and Exposures — CVE® is a dictionary of publicly disclosed cybersecurity vulnerabilities and exposures that is free to search
  7. Retire.js — There is a plethora of JavaScript libraries for use on the web and in node.js apps out there. This greatly simplifies development, but we need to stay up to date on security fixes. “Using Components with Known Vulnerabilities” is now a part of the OWASP Top 10, and insecure libraries can pose a huge risk for your webapp. The goal of Retire.js is to help you detect use of versions with known vulnerabilities.
  8. National Vulnerability Database — The NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP)

So what are the common issues we can avoid?

Versioning

  1. Submit a point release (eg: 16.7→16.8 etc), not a patch release. The system is designed to work with major and minor releases; patch releases are not supported.

CRUD, FLS

  1. Object (CRUD) and Field Level Security (FLS) are configured on profiles and permission sets and can be used to restrict access to standard and custom objects and individual fields. Force.com developers should design their applications to enforce the organization’s CRUD and FLS settings on both standard and custom objects, and to degrade gracefully if a user’s access has been restricted. Some use cases where it might be acceptable to bypass CRUD/FLS are: creating roll-up summaries or aggregates that don’t directly expose the data; modifying custom objects or fields, like logs or system metadata, that shouldn’t be directly accessible to the user via CRUD/FLS; and cases where granting direct access to the custom object creates a less secure security model. Make sure to document these use cases as a part of your submission. For more information, please review the documentation for CRUD and FLS on the DeveloperForce Wiki.
  2. Lightning Security — Because Lightning code shares the same origin as Salesforce-authored code, increased restrictions are placed on third-party Lightning code. These restrictions are enforced by Lightning Locker and a special Content Security Policy. There is also additional scrutiny in the AppExchange security review.
  3. External Resources: Everything that your package and its users interact with is a target for the Security Review. It’s important that the Salesforce security team reviews every extension package. Even small packages can introduce security vulnerabilities.
  4. Article: Utilize Apex Security Enhancements to Reduce Development Time
  5. Apex security features WITH SECURITY_ENFORCED and Security.stripInaccessible are now generally available.
  6. Always declare Apex classes as with sharing or without sharing, and always document any intentional use of without sharing in your false-positive documentation (a combined sketch of these Apex features follows this list)
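
To make the points above concrete, here is a minimal Apex sketch (not a drop-in implementation) showing the two generally available approaches side by side: WITH SECURITY_ENFORCED to fail fast, and Security.stripInaccessible to degrade gracefully, both inside a with sharing class. The Contact object and its fields are illustrative placeholders only.

    // Minimal sketch only; Contact, Email, and Phone are illustrative placeholders.
    public with sharing class ContactViewer {

        // Option 1: fail fast. WITH SECURITY_ENFORCED throws a
        // System.QueryException if the running user lacks read access
        // to any queried object or field.
        public static List<Contact> getContactsStrict() {
            return [
                SELECT Name, Email, Phone
                FROM Contact
                WITH SECURITY_ENFORCED
                LIMIT 100
            ];
        }

        // Option 2: degrade gracefully. stripInaccessible removes any
        // fields the running user cannot read before the data is returned.
        public static List<Contact> getContactsStripped() {
            List<Contact> contacts = [SELECT Name, Email, Phone FROM Contact LIMIT 100];
            SObjectAccessDecision decision =
                Security.stripInaccessible(AccessType.READABLE, contacts);
            return (List<Contact>) decision.getRecords();
        }
    }

Pick whichever pattern suits each code path, and remember that neither replaces documenting any deliberate CRUD/FLS bypasses as false positives.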

Insecure Endpoints

  1. Always use HTTPS when connecting to any external endpoints to push or pull data into Salesforce as a part of your application. Data sent over HTTP is accessible in clear text by any network attacker and poses a threat to the user (see the callout sketch after this list)
  2. More info at https://developer.salesforce.com/page/Secure_Coding_Secure_Communications
  3. Every referenced external web service will be penetration tested, so make sure it’s set up correctly
  4. A common issue is where an external service is deployed to production in debug mode, causing it to divulge information (stack traces etc.) if the penetration test manages to crash it by sending malformed data
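
As a minimal sketch of the first point in this list, an Apex callout should always target an https:// endpoint, ideally via a Named Credential so the URL and authentication are managed declaratively and kept out of code. The Named Credential name (My_Service) and the path below are hypothetical placeholders.

    // Minimal sketch: "My_Service" and the /api/v1/records path are hypothetical.
    public with sharing class SecureCalloutExample {

        public static HttpResponse fetchRecords() {
            HttpRequest req = new HttpRequest();
            // "callout:" resolves to the Named Credential's HTTPS URL and
            // injects the stored credentials, keeping secrets out of code.
            req.setEndpoint('callout:My_Service/api/v1/records');
            req.setMethod('GET');

            HttpResponse res = new Http().send(req);
            // Log only non-sensitive diagnostics; never dump the response body.
            System.debug('External service returned status ' + res.getStatusCode());
            return res;
        }
    }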

SOQL

  1. Avoid common performance issues such as SOQL queries in nested FOR loops
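
A hedged sketch of the point above: run a single query filtered on the collected IDs and group the results with a Map, instead of issuing one query per record inside a loop. The Account/Contact relationship is purely illustrative.

    // Minimal sketch: bulkified lookup instead of a SOQL query per record.
    public with sharing class ContactLookupExample {

        public static Map<Id, List<Contact>> contactsByAccount(Set<Id> accountIds) {
            Map<Id, List<Contact>> result = new Map<Id, List<Contact>>();

            // One query for all accounts instead of one query per account.
            for (Contact c : [
                SELECT Id, Name, AccountId
                FROM Contact
                WHERE AccountId IN :accountIds
            ]) {
                if (!result.containsKey(c.AccountId)) {
                    result.put(c.AccountId, new List<Contact>());
                }
                result.get(c.AccountId).add(c);
            }
            return result;
        }
    }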

CSS Styling

  1. CSS for LWCs should live in the component’s own CSS file, not inline
  2. Inline CSS is strongly discouraged and restricted by the Content Security Policy
  3. If your CSS breaks another component, you’ll fail review
  4. Don’t use .THIS, fixed, absolute, or float in CSS. Components are intended to be modular and run on pages alongside others.
  5. You can provide false positive documentation for this, but it needs to be well justified
  6. CSS can be an attack vector too, so it’s important to pay attention to this
  7. https://css-tricks.com/css-security-vulnerabilities/
  8. https://www.netsparker.com/blog/web-security/private-data-stolen-exploiting-css-injection/

JavaScript

  1. There are numerous JavaScript recommendations throughout this document, so we’ll avoid repeating them here. One additional thing to consider is to check for legacy versions of libraries, especially jQuery. Legacy versions with known vulnerabilities will cause you to fail review, and this is an easy one to resolve ahead of time. It’s also worth mentioning Retire.js again, which can be run as a build task or as a browser extension

Content Security Policy

The Content Security Policy Overview is a great resource on how the Lightning Framework uses Content Security Policy (CSP) to impose restrictions on content. The main objective is to help prevent cross-site scripting (XSS) and other code injection attacks.

Web browsers follow CSP rules specified in web page headers to block requests to unknown servers for resources including scripts, images, and other data. CSP directives also apply to client-side JavaScript, for example by restricting inline JavaScript in HTML.

So many issues come back to points raised on that page that it made sense to replicate the main points here:

  1. JavaScript libraries can only be referenced from your org — All external JavaScript libraries must be uploaded to your org as static resources. The script-src ‘self’ directive requires script source be called from the same origin. For more information, see Using External JavaScript Libraries.
  2. Resources must be located in your org by default — The font-src, img-src, media-src, frame-src, style-src, and connect-src directives are set to ‘self’. As a result, resources such as fonts, images, videos, frame content, CSS, and scripts must be located in the org by default. You can change the CSP directives to permit access to third-party resources by adding CSP Trusted Sites. For more information, see Create CSP Trusted Sites to Access Third-Party APIs.
  3. HTTPS connections for resources — All references to external fonts, images, frames, and CSS must use an HTTPS URL. This requirement applies whether the resource is located in your org or accessed through a CSP Trusted Site.
  4. Blob URLs disallowed in iframes — The frame-src directive disallows the blob: schema. This restriction prevents an attacker from injecting arbitrary content into an iframe in a clickjacking attempt. Use a regular link to a blob URL and open the content in a new tab or window instead of using an iframe.
  5. Inline JavaScript disallowed — Script tags can’t be used to load JavaScript, and event handlers can’t use inline JavaScript. The unsafe-inline source for the script-src directive is disallowed. For example, this attempt to use an event handler to run an inline script is prevented: <button onclick="doSomething()"></button>

Common Vulnerabilities

  1. Cross-site scripting (XSS) — Cross-site scripting attacks are a type of injection problem in which malicious scripts are injected into otherwise benign and trusted websites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser-side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user in the output it generates without validating or encoding it. An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site. These scripts can even rewrite the content of the HTML page. Stored XSS attacks are persistent and occur as a result of malicious input being stored by the web application and later presented to users (an output-encoding sketch follows this list). Further info: https://www.owasp.org/index.php/Testing_for_Stored_Cross_site_scripting_(OWASP-DV-002)
  2. Cross-Site Request Forgery (CSRF) — CSRF is an attack which forces an end user to execute unwanted actions on a web application in which he/she is currently authenticated. With a little help from social engineering (like sending a link via email/chat), an attacker may force the users of a web application to execute actions of the attacker’s choosing. A successful CSRF exploit can compromise end user data and perform state-changing actions on this data without the user’s knowledge. If the targeted end user is the administrator account, this can compromise the entire web application. Using custom headers (or methods such as PUT) to protect against CSRF is not a perfect approach; you still need to implement CSRF tokens as a defence-in-depth measure.
  3. Insecure Session Cookie Handling — All session cookies should be set over HTTPS connections with the SECURE flag. These cookies should be invalidated upon logout, and the Session IDs stored in such cookies should be random with sufficient entropy to prevent an attacker from guessing them with any reasonable chance of success. Cookie values should never be reused and should be unique per user, per session. Sensitive user data should not be stored in the cookie.
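
For the XSS point above, the safest habit is to encode untrusted input at the point where it is rendered. Visualforce and Lightning merge-field expressions escape output by default, and Apex also offers explicit helpers such as the String instance method escapeHtml4. The snippet below is a minimal sketch; the comment-rendering scenario is hypothetical.

    // Minimal sketch: encode untrusted input before it is rendered as HTML.
    public with sharing class CommentRenderer {

        public static String toSafeHtml(String untrustedComment) {
            // escapeHtml4 converts characters such as < > & " into HTML entities,
            // so an injected <script> tag renders as inert text.
            String safe = untrustedComment == null ? '' : untrustedComment.escapeHtml4();
            return '<p>' + safe + '</p>';
        }
    }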

Code Considerations

  1. Commented Code — Comments are allowed in small snippets and samples but full functions and classes which are commented out should be removed.
  2. Incomplete test documentation — It’s important that documentation is as complete as possible, including documenting your responses to false positives. This helps the reviewer understand why you may be doing something a particular way that normally wouldn’t be best practice, and understand what actions you’ve taken to mitigate any security concerns.
  3. Insecure Software Versions — When new vulnerabilities are discovered in software, it is important to apply patches and update to a version of the software in which the vulnerability is fixed. Attackers can create attacks for disclosed vulnerabilities very quickly, so security patches should be deployed as soon as they are available. Note: In case you think this is a false positive, please submit a false positive document in the next retest with your reasons.
  4. Storing Sensitive Data — A brilliant resource on how to work securely with sensitive data. If your application copies and stores sensitive data that originated at salesforce.com, you should take extra precautions. Salesforce.com takes threats to data that originated at their site very seriously, and a data breach or loss could jeopardize your relationship with Salesforce if you are a partner. When storing Salesforce credentials like OAuth access tokens or SSO session information off the platform, make sure to follow industry best practices for secure storage on your development platform. Never store Salesforce passwords off the platform. For external secrets stored on Salesforce, make sure to use the secure storage mechanisms provided by the platform, like protected custom settings, named credentials, etc. (a sketch follows this list).
  5. Password Echo — Storing sensitive information in the source code of your application is rarely a good practice; anyone who has access to the source code can view the secrets in clear text.
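
As a minimal sketch of the ‘Storing Sensitive Data’ point, a protected custom setting keeps a secret out of source code and hidden from subscriber orgs when shipped in a managed package. API_Config__c and its API_Key__c field are hypothetical; create your own protected setting (or use a named credential) rather than hard-coding the value.

    // Minimal sketch: API_Config__c is a hypothetical *protected* custom setting
    // with a custom field API_Key__c; neither exists unless you create them.
    public with sharing class SecretReader {

        public class SecretConfigurationException extends Exception {}

        public static String getApiKey() {
            API_Config__c config = API_Config__c.getOrgDefaults();
            if (config == null || String.isBlank(config.API_Key__c)) {
                // Fail closed rather than falling back to a hard-coded value.
                throw new SecretConfigurationException('API key has not been configured.');
            }
            return config.API_Key__c;
        }
    }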

Information Leakage

Information Leakage involves inadvertently revealing system data or debugging information that helps an adversary learn about the system and form a plan of attack. An information leak occurs when system data or debugging information leaves the program through an output stream or logging function.

  1. Sensitive Information in Debug — Revealing information in debug statements can help reveal potential attack vectors to an attacker. Debug statements can be invaluable for diagnosing issues in the functionality of an application, but they should not publicly disclose sensitive or overly detailed information (this includes PII, passwords, keys, and stack traces surfaced as error messages, among other things). See the logging sketch after this list.
  2. Sensitive information in URL — Don’t forget that one of the simplest data transfer mediums is the URL itself. Sensitive information passed via the GET method (the HTTP query string) to the web application may lead to data leakage and exposes the application in various ways: the full URL is often stored as-is in clear-text server logs that may not be stored securely, can be seen by personnel, and may be compromised by a third party; search engines index URLs, inadvertently storing sensitive information; full URL paths are kept in local browser history, the browser cache, bookmarks, and bookmarks synchronized between devices; and URL info is sent to third-party web applications via the Referrer header. Long-term secrets like username/password, long-lasting access tokens, and API tokens must not be sent in URLs.
  3. TLS/SSL Configuration — Due to historic export restrictions on high-grade cryptography, legacy and new web servers are often able and configured to handle weak cryptographic options. Even if high-grade ciphers are normally used and installed, some server misconfiguration could be used to force the use of a weaker cipher to gain access to the supposedly secure communication channel. Protocols such as SSLv2/SSLv3/TLS 1.0 should not be supported by the server, nor should ciphers that utilize a NULL cipher or have weak key lengths. TLS 1.0 has been declared end of life by most systems and should no longer be used. Testing for SSL-TLS. Please disable versions less than TLS v1.2 (correct at time of writing)
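
A minimal sketch of the ‘Sensitive Information in Debug’ point: log the fact that something failed and enough context to diagnose it, but keep secrets, PII, request bodies, and stack traces out of debug output and out of user-facing messages. The callout wrapper below is illustrative only.

    // Minimal sketch: log non-sensitive diagnostics, surface a generic error to users.
    public with sharing class SafeLoggingExample {

        public static HttpResponse callExternalService(HttpRequest req) {
            try {
                return new Http().send(req);
            } catch (CalloutException e) {
                // Avoid: System.debug(e.getStackTraceString() + req.getBody());
                System.debug(LoggingLevel.WARN,
                    'Callout to external service failed: ' + e.getTypeName());
                // Return a generic message to the user, not the raw exception details.
                throw new AuraHandledException('The external service is currently unavailable.');
            }
        }
    }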

Conclusion

The Security Review process is designed to validate that you have made good data hygiene decisions and have considered security by design in your app. The more thought you place into security by design, the easier the submission process will be. This document gives you a head-start in things to do and avoid doing to make your review as smooth a process as possible.

Further Resources:

  1. Secure Coding Guidelines — This guide walks you through the most common security issues Salesforce has identified while auditing applications built on or integrated with the Lightning Platform.
  2. Platform Security FAQs — The following documentation provides answers to common security questions for the App Cloud platform. It also covers common false positive findings from 3rd party Security Assessments against the App Cloud platform.
  3. Developing Secure Code — Part of Lightning Aura Components Developer Guide, this gives a wealth of information on best practices, including deep dives on Lightning Locker & Content Security Policy

  4. Salesforce Partner Community — AppExchange Security Requirements Checklist
  5. 10 Tips to Passing Security Review
  6. Pass the AppExchange Security Review
  7. Security Review Overview
  8. Trailhead: Submit Your Solution for Security Review
  9. Salesforce Security Guide
