Trends in testing (part II)

Published in Elements blog · 7 min read · Jan 19, 2016

Author: Danny E.

Elements was invited to join Testnet's fall event, Trends in Testing, last October. Our test engineer Danny E. went to Nieuwegein to attend the talks and workshops, learned a lot and wrote down his experiences for all of us. This is part 2; you can find the first part here.

Evolution of Testing for Mobile Platforms

One of the other super interesting talks I attended was titled Evolution of Testing for Mobile Platforms and given by Paul Rutter, a delivery lead at BBC Digital’s Mobile Platforms team. He started off with a short introduction about the BBC and their mission and vision, which are:

  • Mission: To enrich people’s lives with programs and services that inform, educate and entertain
  • Vision: To be the most creative organization in the world

BBC iPlayer

The talk continued with the BBC iPlayer, an online streaming service for on-demand and live video and audio. Content is generally available for thirty days after broadcast. The iPlayer app supports over 10,000 devices (9,000+ smartphones and tablets, 1,600+ TVs, game consoles and the responsive web), and some devices support downloading streams for offline playback.

While the iPlayer supports a whole range of devices, the rest of the talk focused on the mobile platforms, as more than half of their 315 million daily requests come from mobile devices.

Testing at BBC Digital

They currently work with a test team of about twenty members supporting three development teams, which have about thirty engineers in total. Around 50% of their testing happens in-house with the test team; for the other half they co-operate with an unnamed test partner. Their in-house testers are “embedded” in the development teams, so they work very closely together. The testers are divided into three roles:

  • Test Engineer: The actual testers and device experts; they explore and question the product.
  • Developer-in-Test: They work on the automated checks/tests and are BDD (Behaviour-Driven Development) experts. They enable the application developers to write automated checks and are also the primary contact for CI (Continuous Integration) and the build pipeline.
  • Test Leader: A role within a team, not an actual job title.
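The BDD approach mentioned for the Developer-in-Test role is usually expressed in Given/When/Then steps. A minimal sketch in plain Python (not BBC's actual framework; the player stub is a hypothetical illustration):

```python
# Given/When/Then sketch of a BDD-style check, the kind a Developer-in-Test
# might set up for application developers to follow.
# The Player class is a hypothetical stub, not BBC code.

class Player:
    def __init__(self):
        self.state = "stopped"

    def play(self):
        self.state = "playing"


def test_playback_starts():
    # Given a player that is stopped
    player = Player()
    assert player.state == "stopped"
    # When the user presses play
    player.play()
    # Then the player is playing
    assert player.state == "playing"


test_playback_starts()
print("ok")
```

Real BDD frameworks map these Given/When/Then phrases to step functions, but the structure of the check is the same.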

Besides these roles, the BBC also has the TITAN team, a whole team of software engineers who build test tools and frameworks for use across all of BBC Digital.

Mobile Compatibility Program

In 2014, BBC Digital started the Mobile Compatibility Program (MCP) to understand which devices are supported across their native mobile apps. Within this program they evaluate upcoming devices against the requirements of BBC mobile products and evaluate unreleased firmware updates for iOS and Android.

In this program they also advise on the classification of devices for products, which categorizes devices by the features they support. This gives better insight into which features will be supported by which devices, even before the devices or firmware are released. All results from the MCP are maintained and published both internally and externally. The MCP ultimately gives OEMs a way to run a first-pass compatibility check before submitting devices.

Principles of software development

After the MCP part of the talk, Paul continued with the BBC's principles of software development. In 2013, another BBC department defined five principles around their way of working. Recently, BBC Digital refined these principles with a focus on Continuous Delivery. There are some strong messages in these principles, which made them rethink their testing processes and practices. The five principles are:

First principle: Radiate a clear visibility of state

By making all their work visible, BBC Digital can expose most delays, obstructions and invisible queues that otherwise would make it more difficult to release quickly. This includes pipeline visibility, statuses of regression tests and the device coverage of users. Another part of having a clear visibility of state is to have a new build available for stakeholders every day.

Second principle: Smooth the flow of work

One of BBC Digital's main problems was a lot of code waiting to be tested, and long lead times, caused by working on too many different features at once. So they have been limiting the amount of work in progress, and it has paid off. By focusing on the cost of delay when deciding what to work on, they can keep the amount of work in progress low.

Third principle: Deliver at a regular and predictable cadence

As Paul said, their products are only useful in the hands of the users, so being able to release regularly, predictably and cheaply is a vital part of the process. By releasing often, ideally every two weeks, there is less functionality to test, regression testing becomes much shorter (but more frequent) and feedback loops with the users become shorter.

Fourth principle: Implement feedback loops

Feedback is very important to Agile software development. Feedback loops happen everywhere in the product development process, but the most important feedback comes from actual users. Building the “wrong” feature is the biggest risk and waste of time you can face, so focusing on validated learning, by testing assumptions against empirical data and actual usage, minimizes this risk and wasted time. BBC Digital has a dedicated UX person for the iPlayer product who is responsible for gathering user feedback and feeding it directly into the work they do.

Fifth principle: Improve collaboratively with experiments

As Paul explained their fifth principle, he noted that BBC Digital works in a complex and unpredictable business: people, processes and technology all interact in (often) unexpected ways. It is difficult, if not impossible, to predict the effect of a big change, so the best they can do is adopt a structured way to test assumptions and ideas. Every product, process and even organizational change should be validated by thoroughly testing hypotheses with experiments and measurements. In practice, they want to do more exploratory testing, bring developers and testers closer together by pairing them, and involve testers already during the planning stages.

Challenges

After explaining their five principles, the talk continued with more details about their way of working, using the iPlayer product as an example. Paul started by explaining that they no longer work in sprints, but instead use challenges.

Stories take a maximum of five days, which makes one challenge five days long. As mentioned before, BBC Digital releases every two weeks, so features are developed separately from each other behind feature flags and tested against the most important devices. There is no “Ready for Testing” lane during challenges: everything stays in the “In Progress” lane until it can be moved to the “Done” lane. They use a Kanban-style board with a limit on the “In Progress” lane. Testing is done by showing developers any bugs found during the challenge right away; bugs are fixed immediately while the feature is still being worked on, as this usually takes little extra effort. Stories moved to the “Done” lane can be forgotten about, as the feature testing is done and the automated tests have been written and passed. The regression test pack is then updated with the newly written automated tests.
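The feature-flag approach described above can be sketched as follows. This is a minimal illustration in Python; the flag names and the flag store are hypothetical, not BBC code:

```python
# Minimal feature-flag sketch: unfinished features ship in the release
# build but stay disabled until their flag is switched on.
# All names here are hypothetical illustrations, not BBC code.

class FeatureFlags:
    def __init__(self, flags=None):
        # Flags default to off, so half-finished work is invisible to users.
        self._flags = dict(flags or {})

    def is_enabled(self, name):
        return self._flags.get(name, False)

    def enable(self, name):
        self._flags[name] = True


def render_home_screen(flags):
    sections = ["live_tv", "categories"]
    if flags.is_enabled("downloads"):  # unreleased feature behind a flag
        sections.append("downloads")
    return sections


flags = FeatureFlags()
print(render_home_screen(flags))   # feature hidden in this release
flags.enable("downloads")
print(render_home_screen(flags))   # same build, feature switched on
```

The point of the pattern is that the two-weekly release cadence never has to wait for a feature: the code ships dark and the flag is flipped when the feature is ready.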

By “test pairing” like this, BBC Digital can actually prevent issues instead of having to detect them. No issues are filed as tickets while a feature is being worked on during a challenge.

PUMA checks

Another initiative Paul talked about is something they call “PUMA checks” (which stands for Prove core functionality — Understood by all — Mandatory — Automated). It is basically a way to make sure that all members of the team understand all automated tests. It is also used for fast feedback and to check the core functionality of the product: it ensures that the most important parts of the app are tested and will work when a new build is released.
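A PUMA-style suite could look something like the following: a small, mandatory set of automated checks, each simple enough for every team member to understand. This is a sketch under those assumptions; the checks and the app stub are invented for illustration, not BBC code:

```python
# Sketch of a PUMA-style check suite: a short, mandatory list of automated
# checks that prove core functionality, readable by everyone on the team.
# The "app" under test is a hypothetical dict stub, not a real build.

def app_starts(app):
    return app.get("status") == "running"


def playback_works(app):
    return app.get("player") == "ready"


# Mandatory and automated: every build must pass all of these.
PUMA_CHECKS = [app_starts, playback_works]


def run_puma_checks(app):
    # Returns the names of failed checks; an empty list means the core
    # functionality is proven for this build.
    return [check.__name__ for check in PUMA_CHECKS if not check(app)]


build = {"status": "running", "player": "ready"}
print(run_puma_checks(build))  # []
```

Keeping the list short and the checks trivial to read is what makes them “understood by all” while still giving fast feedback on every build.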

Visualizing device usage

After discussing PUMA, Paul talked about an internal tool, built and supported by BBC Digital's TITAN team, to visualize device usage. The tool gives dynamically updated stats on device and OS version usage across all users of the product. This helps them in many ways: it reminds them of how many millions of users they have, shows which devices and OS versions are the most popular, and makes it possible to intelligently target a percentage of the users on a particular device.
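The core of such a device-usage view is simple aggregation: count sessions per device/OS pair and rank them. A minimal sketch in Python, with invented sample data (the real TITAN tool is internal and its design is not described in the talk):

```python
# Sketch of device-usage aggregation: count sessions per (device, OS
# version) pair and report the most popular combinations, e.g. to target
# a rollout at users on a particular device. Sample data is invented.
from collections import Counter

sessions = [
    ("iPhone 6", "iOS 9"), ("Nexus 5", "Android 6.0"),
    ("iPhone 6", "iOS 9"), ("iPad Air", "iOS 9"),
    ("Nexus 5", "Android 6.0"), ("iPhone 6", "iOS 8"),
]

usage = Counter(sessions)


def top_devices(usage, n=2):
    # The n most-used device/OS combinations with their session counts.
    return usage.most_common(n)


print(top_devices(usage))  # the two most-used pairs, 2 sessions each
```

A production version would stream events in continuously instead of using a static list, but the ranking logic stays the same.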

Final thoughts

Paul finished his talk with some points they are still working on, as they know this is an ongoing process of improving. Some of the points he mentioned are:

  • Live insights and operational notifications;
  • Addressing automated UI test bloat;
  • Knowing, at all times, exactly what their app is doing and how it's behaving.

This will give them the confidence to do less testing and to listen more to feedback from actual users.

It was very interesting and refreshing to learn how an organization like BBC Digital has implemented software testing, and it definitely gave me some food for thought on how we can improve our own development and testing pipeline.

Follow Elements on Facebook, Twitter and LinkedIn!

Originally published at www.elements.nl on January 19, 2016.
