From constant listener feedback to continuous content improvement

In Spanish here

Tommy Ferraz
Sep 4, 2018

Hundreds of station managers, content directors, programme directors and show hosts are starting a new radio season these days. It’s the time of year for repositioning stations, launching new shows, trying new content, or replacing hosts. For a few weeks, or even months in some cases, there will be no way to assess how those adjustments are working. Some will think that total freedom during this short period is good for creativity. Others would prefer not to go through such a long stretch of uncertainty.

Waiting for the next ratings?

  • The frequency and granularity of both surveys and diaries are insufficient. Insights from these methodologies are not deep enough to conclude with any certainty what is performing well or badly. On top of that, they arrive when it’s already too late.
  • Because they are based (directly or indirectly) on recall, they measure the strength of a brand, not engagement with the content.

Focus groups and auditorium tests are research methodologies aimed more specifically at studying content engagement. But neither solves the problem of insufficient periodicity, for cost reasons.

Continuous and granular insights

Surprisingly, the effects of achieving that continuity in audience insights, widely demanded by radio makers and broadcasters, have not been as beneficial as expected.

Fred Jacobs, a highly experienced radio consultant, explains in the article PPM Turns 10 — Celebration Or Regret? the negative impact that overreaction to those “EKG-like lines that dipped for commercials, new music, and DJ talk” had on radio. A friend who was a programme director in Los Angeles when PPM was introduced told me how scary radio programming suddenly became. Professionals were not ready to see audience drops and gains, quarter hour by quarter hour, every day. Especially in those formats or stations where losing a few panelists implied a life-threatening audience loss.

A framework for continuous content improvement in radio

Technology

Methodology

Let’s pause for a moment on a simple example. A music programmer or music director finds out through the dashboard that a specific song is performing worse than average. What should she or he do? Remove the song from the playlist, or decrease its rotation because the song doesn’t test well? Wouldn’t that be exactly the type of overreaction we mentioned above? It would also be an oversimplification.

There are multiple factors affecting the performance of a song: the genre, the tempo, the mood, the era, the language, the sequence (position in the clock), the duration, etc. And those are only the internal variables; there are external ones as well: season, holidays, weather, prime-time TV, events, etc. The same happens with spoken content, like the segments of a morning show, where internal factors include the topic, the host, the tone, the storytelling, the sound quality of a phone call, and so on. You get it, right?
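To make this concrete, here is a minimal, purely illustrative sketch (not from the original article, and not Voizzup’s implementation) of how each airing of a song could be recorded together with those internal and external factors, so that a dip can be examined in context before any rotation decision is made. All field names, the retention metric and the sample numbers are hypothetical.

```python
# Illustrative sketch only: hypothetical data model, not an actual product feature.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean


@dataclass
class SongPlay:
    """One airing of a song, annotated with internal and external factors."""
    song_id: str
    retention: float        # hypothetical engagement metric (share of listeners kept)
    genre: str              # internal factors
    tempo_bpm: int
    era: str
    clock_position: int     # position in the hour's clock
    weekday: str            # external factors
    holiday: bool
    weather: str


def retention_by_factor(plays, factor):
    """Average retention grouped by one factor, to check whether a 'bad' song
    simply aired in unfavourable contexts (e.g. always late in the clock)."""
    groups = defaultdict(list)
    for play in plays:
        groups[getattr(play, factor)].append(play.retention)
    return {value: round(mean(scores), 2) for value, scores in groups.items()}


plays = [
    SongPlay("song-42", 0.81, "pop", 118, "2010s", 1, "Mon", False, "sunny"),
    SongPlay("song-42", 0.62, "pop", 118, "2010s", 4, "Mon", False, "rain"),
    SongPlay("song-42", 0.64, "pop", 118, "2010s", 4, "Tue", False, "rain"),
]
print(retention_by_factor(plays, "clock_position"))  # {1: 0.81, 4: 0.63}
```

In this toy example the song only underperforms late in the clock, which would suggest testing a different position before cutting its rotation.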

Our approach to continuous content improvement encourages radio professionals to build assumptions first rather than rush into decisions.

Data science

Mindset

Through disciplines like Lean, Agile or Design Thinking, teams of developers, UX designers and product managers in some radio organisations are already using similar frameworks for continuous improvement, also based on user-centricity and iterative learning. You can read more about this in my article Agile radio, continuous improvement ready.


If you believe your morning show and the content professionals in your radio organisation would benefit from a framework for continuous content improvement, please contact us.

Originally published at tommyferraz.com on September 4, 2018.

Tommy Ferraz

Founder of Voizzup and formerly a radio Programme Director. Introducing continuous improvement in radio, both for on-air content and talent. www.tommyferraz.com