How we will select and review plugins that will be part of Planet 4

Kevin Muller
Planet 4
Dec 27, 2016

During the discovery phase, we pointed out that before being used in Planet 4, the WordPress plugins we select will first have to go through an in-depth review process. That process has now been defined, and covers four essential areas: relevance in the context of Planet 4, functional scope definition, technical review, and security audit.
Here’s why and how this will happen:

Why review plugins?

Planet 4 will be built on WordPress, with a set of community plugins that may be customized or extended to match our various requirements. These plugins will have to meet minimum standards: deliver good performance, be robust, be actively maintained, and provide a high level of security. We want to maintain a continuous level of quality throughout releases and avoid side effects such as security vulnerabilities, broken accessibility, bugs or regressions, all of which could occur if unstable features were deployed.

Credits: Ivan Donchev / Greenpeace

For all these reasons, each plugin we consider using will have to be reviewed in depth before it can become part of Planet 4’s official plugin repository. To do so, we have developed a selection and review process that is easily implementable, reusable, and can be executed independently by all members of the Planet 4 technical team.

Whereas an in-depth review will be executed each time we want to install a new plugin, a simplified one will happen for each update, to make sure no regression is introduced.

In-Depth Review full cycle overview

The initial plugin selection will be organized per key feature, some of which are already documented in the discovery document. They can be of multiple types: slideshow, image gallery, video, maps and so on, and for each key feature, we will organize a review following the flow below:

Module’s review cycle — Image by Kevin

What we would like to guide you through in this article, is PART 3 of the cycle: the actual Plugin (or Module) Review.

The Module Review organization

Organization of a module’s review — Image by Kevin

As mentioned before, the review process will go through four main steps:

  1. Relevance assessment: check the plugin information and make sure it is relevant to the context or scope of Planet 4.
  2. Functional review: explore the functional scope of the plugin and determine if the features provided match our requirements.
  3. Technical review: analyze carefully the source code of the plugin, from both back-end (see 3.1 below) and front-end (3.2) perspectives, and make sure it provides a robust implementation.
  4. Security audit: Finally, a security review with automated and manual static code analysis, to detect vulnerabilities to common attacks.

1. Relevance assessment

Who: web folks, content editors, technical team.
When: at the beginning of the review, before the functional review.
How: mainly by retrieving information published online, and on the WordPress module page.
Deliverable: Relevance assessment report (starting from this template, in an ad-hoc folder).

The following aspects will be checked:

  • License: should be open source, modifiable and re-deployable.
  • Cost: should ideally be free.
  • Reputation: the module should be popular, with a substantial number of users and positive reviews.
  • Maintenance: how many maintainers are behind it? How many unsolved bug tickets are in the backlog? Is the module updated frequently?
  • Compatibility: it should be compatible with the latest version of WordPress.
  • Online presence: does the module have its own website, support system, Git repository, etc.?

2. Functional review

Who: web folks, content editors, technical team
When: after the relevance assessment
How: by installing the module on a staging environment and testing it heavily.
Deliverable: Functional review report (starting from this template, in an ad-hoc folder).

The Functional review report will be composed of:

Functional coverage

  • Functional scope: details of the functionalities covered by the plugin.
  • Functional gaps: comparison with Planet 3 existing features and with targeted scope.
  • Manual test: testing the plugin in a staging environment, and determining if it works as advertised and expected.

Browsers compatibility

  • Browsers compatibility test: is the plugin working well with the P4 supported browsers?

Documentation

  • Functional documentation: list of the available end-user documentation.
  • Technical documentation: list of the available developer documentation.

3.1. Technical review: Back-end source code analysis

Who: technical team (back-end developer).
When: once (and only if) the module has successfully passed both relevance assessment and functional review.
How: by analyzing the back-end source code in depth.
Deliverable: back-end source code analysis report (starting from this template, in an ad-hoc folder).

Code quality, standards and best practices

  • Code standards: the source code should follow WordPress’ code standards and best practices. The code will be checked both with a code sniffer and manually.
  • Implementation issues: PHP Mess Detector will be used to check overly complex or error-prone portions of code.

Documentation
Documentation for the following topics will be checked:

  • Functions and classes.
  • Complex blocks.
  • Interactions with the core (or other modules).
  • Files and their usage.

Localization / Globalization
The source code should be fully internationalized (i18n) and localized (l10n) using WordPress methods and standards.
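As an illustration of what we will look for (`myplugin` is a made-up text domain, not an actual Planet 4 plugin), strings internationalized through the standard WordPress functions look like this:

```php
<?php
// Illustrative sketch only — 'myplugin' is a hypothetical text domain.

// Translatable string, returned:
$label = __( 'Donate now', 'myplugin' );

// Translatable string, echoed with HTML escaping:
esc_html_e( 'Thank you for your support!', 'myplugin' );

// Plural-aware translation:
$count   = 3;
$message = sprintf(
	_n( '%d signature collected', '%d signatures collected', $count, 'myplugin' ),
	$count
);
```

Strings wrapped this way can be extracted into a .pot file and translated for each locale; hard-coded strings cannot, which is exactly what this check is meant to catch.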

Automated tests
We have to check if the module contains automated tests. We’ll focus on two types of tests, and try to answer the following questions:

Unit tests:

  • Does the module contain unit tests?
  • Do existing tests pass?
  • What is the code coverage?
  • Are non-regression tests added for reported bugs?
  • Are the tests public (Travis CI, etc.) and executed regularly through a Continuous Integration (CI) server?
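To make this concrete, here is the kind of unit test we hope to find in a plugin (a hypothetical sketch: the `Slideshow` class and its methods are invented for illustration, and running it requires the plugin’s own PHPUnit setup):

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical example: the plugin exposes a Slideshow class whose
// behaviour can be tested in isolation, without a full WordPress stack.
class SlideshowTest extends TestCase {

	public function test_new_slideshow_is_empty() {
		$slideshow = new Slideshow();
		$this->assertSame( 0, $slideshow->count_slides() );
	}

	public function test_added_slides_are_counted() {
		$slideshow = new Slideshow();
		$slideshow->add_slide( 'image-1.jpg' );
		$slideshow->add_slide( 'image-2.jpg' );
		$this->assertSame( 2, $slideshow->count_slides() );
	}
}
```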

Functional tests:

  • Does the module contain functional tests?
  • Do the existing tests pass?
  • How much of the exposed interfaces / scenarios are tested?
  • Are non-regression tests added for reported bugs?
  • Are the tests public (Sauce Labs, BrowserStack, etc.) and executed regularly through a CI?

Extensibility
We’ll check whether the code is designed to be easily extended or customized (code is considered extensible if it contains entry points / hooks that enable a third-party module to easily transform its primary behaviour).
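In WordPress, such entry points are usually exposed through the hook API. A sketch of what we will look for (all hook and variable names here are invented for illustration):

```php
<?php
// Inside the plugin (hypothetical names): let third parties filter
// the generated markup before it is printed.
$html = apply_filters( 'myplugin_gallery_html', $html, $images );

// Signal an event other modules can react to.
do_action( 'myplugin_after_render', $gallery_id );

// In a third-party module: change the output without patching the source.
add_filter( 'myplugin_gallery_html', function ( $html, $images ) {
	return '<div class="p4-wrapper">' . $html . '</div>';
}, 10, 2 );
```

A plugin that exposes no such hooks can only be customized by forking it, which makes every subsequent update painful — hence this criterion.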

3.2. Technical review: Front-end source code analysis

Who: technical team (front-end developer).
When: once the module has passed the relevance assessment and functional review, at the same time as the back-end source code analysis.
How: by analyzing the front-end source code in depth.
Deliverable: front-end source code analysis report (starting from this template, in an ad-hoc folder).

Overall quality and performance
The following criteria will be checked:

  • Default code has good performance.
  • Default images are compressed and small.
  • Plugin can deal with Responsive Images.
  • Plugin uses 3rd party code (e.g. CDN library, jQuery).

Code extensibility
The front end code provided by the plugin can be extended easily:

  • Default markup can be overridden.
  • Default styles can be exchanged.
  • Default scripts can be extended, modified.
  • Default scripts can be exchanged.
  • Default icons/image files can be exchanged.
  • CSS has low specificity.

Accessibility
The front end code produced by the plugin should implement a certain level of accessibility:

  • Default markup is semantic.
  • Default markup is accessible.
  • Images have an alternative text attribute (filled with appropriate text, if applicable).
  • WAI-ARIA attributes are used if no native markup element or attribute exists.
  • Hidden elements are shown/hidden properly for screenreader users.
  • Contrast of default styling matches accessibility requirements.
  • Plugin output is accessible via screen reader.
  • Plugin output is accessible via keyboard.
  • Plugin styles provide large enough font-sizes, click areas, margins between elements.

Separation of concerns
The plugin front end code should implement separation of concerns as much as possible:

  • Separation of Concerns is implemented.
  • Plugin has an easy-to-understand, clear folder structure for the front-end part.
  • Templates are separated from plugin logic.
  • JavaScript is not used for styling purposes.
  • HTML has no inline styles.

Automated tests
Ideally, the plugin front-end code should ship automated tests, in the form of unit tests covering the custom-made JavaScript and sensitive features such as user input validation.

Compatibility
As much as possible, the front-end code produced by the plugin should be compatible with Planet 4 front-end code standards.

  • Default styles are modularized, non-invasive.
  • Default scripts are dependency-free and modularized, non-invasive.
  • Default code is compatible with our browser support matrix.
  • Front-end is progressively enhanced.
  • JavaScript has failover checks.
  • Code is compatible with keyboard, mouse, touch input.

Localization
Finally, as for the back-end part, we will check at the front end source code level whether the strings are properly localized.

4. Security audit

Who: Technical team (security expert).
When: After the technical review, as last step of the module review process.
How: By running automated and manual static source code analysis.
Deliverable: Security audit report (starting from this template, in an ad-hoc folder).

Exploits search
The first step of the security audit will be to search exploit databases (like exploit-db) to see whether the module has already been compromised by known exploits, whether it still is, and how quickly the maintainers fixed the issues.

Automated security audit
An automated static code analysis scanner will be used to determine whether the module is vulnerable to the most common attack types (cross-site scripting, SQL injection, remote file inclusion, etc.), as described in the OWASP Top 10 and SANS Top 25 vulnerability lists.

To perform it, we will use an open source Web Application Security Testing (WAST) solution, OWASP WAP, in addition to commercial tools, which have the advantage of going deeper.

WAP has been selected after a careful review of the different open source solutions available for PHP. You can see the complete open source PHP WAST tools review. Another review of non open source commercial equivalent solutions is currently in progress.

Manual security audit
In addition to the automated security audit, a manual audit will take place to ensure that the automated tests haven’t missed anything, and that there are no hidden vulnerabilities or illegal backdoors present in the source code, as described in this scenario, for example.

The manual security audit will also use the OWASP Top 10 and SANS Top 25 vulnerabilities as a base. Below is a sample of the bad practices we’ll be looking for:

  • Does the plugin include security tests?
  • Are the user inputs validated?
  • Does the plugin use “prepare” methods for all SQL queries, to protect against SQL injections?
  • Is the proper input variable being used (`$_GET`, `$_POST`, …), rather than `$_REQUEST`?
  • Does the plugin change the default PHP or WordPress configuration?
  • Is sensitive data exposed through this plugin?
  • Does the plugin provide upload functionalities?
  • etc.
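To illustrate the SQL injection point (the table and variable names here are invented), this is the difference between an unsafe query and a prepared one using WordPress’ `$wpdb`:

```php
<?php
global $wpdb;

// BAD (hypothetical example): user input concatenated straight into SQL —
// an attacker can inject arbitrary query fragments through the parameter.
$user_id = $_GET['user_id'];
$rows = $wpdb->get_results(
	"SELECT * FROM {$wpdb->prefix}petitions WHERE user_id = $user_id"
);

// GOOD: the value is sanitized, then bound through a placeholder,
// so it can never break out of the query.
$user_id = absint( $_GET['user_id'] );
$rows = $wpdb->get_results(
	$wpdb->prepare(
		"SELECT * FROM {$wpdb->prefix}petitions WHERE user_id = %d",
		$user_id
	)
);
```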

Giving back

After each review, we commit to reporting our findings back to the modules’ maintainers first, and then to the community. We hope that this will help make the WordPress ecosystem better, by identifying existing issues and helping to correct them.

Conclusion

We know that this review process is quite ambitious, and that the list of criteria we have defined may be a bit over-demanding. We should mention, however, that these criteria are not all meant to be followed to the letter, and very few of them are disqualifying. They are mainly defined to serve as a benchmark, so that we can compare plugins using quantifiable and measurable metrics.

Moreover, as we go through reviews and learn from each of them, this process will keep being improved and iterated. You can check the detailed review process if you want to see where it goes.

A big thanks to the whole technical team for their contributions and reviews, mainly Anselm, Tobias, and Remy. Also, this article would not have been possible without the awesome editing of Luca, cheers to him.

We are now waiting for your feedback. You can either get involved in module reviews or help make this process better: just comment below, send an email to the team or tweet at #GPP4!
