Smarter targeting or smarter content? Is a new standard of expected ethics forcing us to change how we think about digital marketing?

NSPCC Digital Team
Published in NSPCC Digital Dunk
Aug 1, 2018 · 9 min read

I’m Sam, Senior Digital Fundraising Officer at NSPCC. With trust in social media in decline, and Facebook bearing the continuing fallout from the Cambridge Analytica scandal, today we’re looking at the questions now facing us as digital marketers around how we use technology to engage audiences in ways that are relevant whilst maintaining trust. It is worth noting that this is an opinion piece: these are my own views as a digital marketer, not those of the NSPCC.

The era of hyper targeted communication

Any digital marketing professional will tell you that the relevance of a message, taking into account its timing, tone, environment and content, has repeatedly been shown to improve the impact and efficiency of digital marketing activity. Whether it’s a display advert showing you the pair of shoes you looked at 10 minutes ago (and that will follow you around for the next month) or a paid social post pointing you towards your nearest fast food establishment after a heavy night, the use of technology to establish relevance surrounds every internet user and is a core part of most, if not all, digital marketing strategies.

I often recall a meeting I had with a client, back when I worked agency-side, about an upcoming marketing campaign targeting a specific audience for their new product. As a social media specialist at the time, I prided myself on knowing inside out all of the targeting options we could use to reach audiences, and how to use them in a way that saved my client time and money and, most importantly, reached exactly who they wanted to speak to. Facebook was the logical answer.

Facebook is unquestionably one of the market leaders in self-service ad technology, and alongside Google was forecast to take over half of US digital advertising spend. They offer a platform that people can figure out how to use in the space of a day, and anyone with a credit card can start advertising on it. As anyone who has used the Facebook advertising platform will know, the options for targeting its users are both wide-ranging and very specific. You can target based on the obvious stuff, like age, gender and interests, but you can also target based on the less obvious. Are they an expat? What level of education do they have? Are they in a long-distance relationship? And until recently, when it was removed by Facebook following the Cambridge Analytica scandal, businesses were able to target users via third-party data sources through Partner Categories. This allowed the targeting of individuals based on income, co-habitation information and ownership of specific brands.
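To make this concrete, here is a rough sketch of what one of these targeting specifications looks like when built programmatically. The field names loosely follow the shape of Facebook’s public Marketing API targeting spec, but treat the exact keys, IDs and values below as illustrative assumptions rather than a working campaign:

```python
# Illustrative sketch of an ad-set targeting specification.
# Field names loosely follow Facebook's Marketing API targeting spec;
# the interest ID and value codes are invented for illustration.

targeting_spec = {
    "geo_locations": {"countries": ["GB"]},  # where the audience is located
    "age_min": 25,
    "age_max": 40,
    "genders": [2],                          # platform code, e.g. women only
    "interests": [
        {"id": "6003000000000", "name": "Running"}  # hypothetical interest ID
    ],
    "relationship_statuses": [4],            # hypothetical code, e.g. long-distance
}

def describe(spec):
    """Summarise a targeting spec in plain English for a campaign review."""
    parts = [f"{spec['age_min']}-{spec['age_max']} year olds"]
    parts.append("in " + ", ".join(spec["geo_locations"]["countries"]))
    if spec.get("interests"):
        parts.append("interested in " + ", ".join(i["name"] for i in spec["interests"]))
    return ", ".join(parts)

print(describe(targeting_spec))  # a human-readable summary of who gets targeted
```

Even this toy version shows how quickly a handful of innocuous-looking fields combine into a very precise description of a person.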

So, going back to the meeting: how would we find the people most likely to engage with the value proposition of that new product? Say, for example: do people with an affinity for that product show certain interests, or fall into a specific demographic or level of household income? What about targeting based on ethnicity? What about targeting based on sexual preference?

If those questions made you cringe, then you are probably starting to understand the challenge digital planners face. They are tough questions even when you genuinely want to help the person consuming your content, believing it will benefit them. But what about when you’re chasing a financial or lead generation target? There are countless examples of targeting helping audiences in need of that product or service. But at what point does it overstep a line, infringing the rights of an individual who doesn’t know why or how they’re being targeted and who, ultimately, gets no benefit from it whatsoever?

What can the Cambridge Analytica scandal teach us?

The wake-up call. The Cambridge Analytica scandal showed the British public the degree to which personal data shared online could be abused. At the height of US investigations into Russian involvement in the US elections, this was another hit on the already questionable policies of social media platforms and their ability to uphold their users’ data privacy. Although the data capture was carried out by a third party in clear breach of Facebook’s data policies, the knives were out for Facebook to take ultimate responsibility for the breach.

Facebook are now actively addressing the privacy concerns of users, both by tightening their data protection policies and giving users more control, and publicly through a major UK ad campaign that seeks to regain their users’ trust. However, the fact remains that both the data they capture and the communication opportunities they offer are open to unexpected and often harmful exploits targeted at their users. Last week, the UK select committee set up to investigate fake news and the use of data and “dark ads” in elections stated that Facebook had a “clear legal liability” and had failed to act.

As well as the legal implications, whatever Facebook’s great aspirations of allowing users to build and maintain connections across the world, this has come at the cost of their public perception too. The Edelman Trust Barometer released in January showed UK trust in social media at a low across all types of media.

Edelman Trust Barometer

The concerns of the public were laid out in the report, with regulation, transparency and data protection sitting at the top of the concerns listed by those surveyed.

Edelman Trust Barometer

The trouble is, the tactics which are legitimate and commonplace, like lookalike targeting, geo-targeting, interest targeting and demographic targeting, are employed to some extent by most digital marketing departments. New forms of audience targeting that were not accessible before excite us, because we know they could be an amazing new way to engage our audiences more relevantly.

But do people genuinely find the outputs of these tactics useful? Are we better off engaging people through means beyond clever use of tracking, such as influencer marketing or content marketing?

What about social listening?

Social listening platforms, which facilitate the process of crawling the web and searching for information useful to your business, were touted as the solution us marketers have been waiting for after years of walking in the customer insight wilderness of focus groups and customer surveys.

An example of a social listening tool dashboard

I’ve used social listening tools in previous roles for clients across a range of sectors, including higher education, entertainment, travel and even waste management. If the information was on a publicly accessible website, I could tell them what people were saying about them, or had said about them dating back several years. You can find their customers, their advocates, their detractors. Where are they writing online, and what are they saying? But what a few clients really wanted to know, and where they got the most value, was understanding who their most influential advocates and detractors were, and when they were seeing the most engagement or agreement.

There is nothing inherently wrong with this. It’s useful information that helps marketers decide how to organise their communications and where a message will be most relevant. It serves their customers better to base insight on research rather than on the whims of a senior marketing executive or a statistically insignificant study. However, right now most of these platforms are accessible to anyone with a credit card. You can set up a social listening account and monitor any public website or forum you like.
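The core of what these tools do is conceptually simple. The sketch below (every post, author and keyword list is invented for illustration; real platforms use large-scale crawling and proper sentiment models rather than keyword matching) scans public posts for brand mentions and buckets the authors into rough advocates and detractors:

```python
# Toy social listening sketch: find public posts mentioning a brand and
# bucket authors into advocates/detractors via naive keyword matching.
# All posts, authors and keyword sets are invented for illustration.

BRAND = "acme"
POSITIVE = {"love", "great", "recommend"}
NEGATIVE = {"terrible", "avoid", "broken"}

posts = [
    {"author": "alice", "text": "I love my new Acme kettle, would recommend"},
    {"author": "bob", "text": "Acme support is terrible, avoid"},
    {"author": "carol", "text": "Anyone tried the Acme range?"},
]

def classify(posts, brand):
    """Split mention authors into advocates, detractors and neutral voices."""
    advocates, detractors, neutral = [], [], []
    for post in posts:
        text = post["text"].lower()
        if brand not in text:
            continue  # not a mention of the brand at all
        words = set(text.split())
        if words & POSITIVE:
            advocates.append(post["author"])
        elif words & NEGATIVE:
            detractors.append(post["author"])
        else:
            neutral.append(post["author"])
    return advocates, detractors, neutral

advocates, detractors, neutral = classify(posts, BRAND)
print(advocates, detractors, neutral)
```

Swap the hand-written post list for a crawler feed and the keyword sets for a trained classifier, and you have the skeleton of a commercial listening tool: which is exactly why the question of who gets to point it at whom matters.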

What we potentially have is another ticking time-bomb of misuse, open for an organisation, or even an individual, to use these platforms in an unexpected way. It’s worth pointing out that social listening tools monitor how their platforms are used and have policies in place to manage that, but this is typically no more than what Facebook provides as part of its policies to self-manage and protect against misuse.

Facebook has clearly recognised this; only a couple of weeks ago it suspended the accounts of Crimson Hexagon over allegations that their use of Facebook data broke its policies. So where does that leave marketers who rely on that data to make key decisions? Do they stop using these tools for fear of association? Should organisations start being more open about their research and the impact of its outcomes?

Does the future offer more to a more targeting savvy digital marketer, or a more ethical, content driven one?

With GDPR now in full force, businesses in the EU have a clear set of rules governing how they collect and process their customers’ data. Where it gets cloudy is when businesses target and collect information about users through data acquired by platforms, in most cases based in the US, relying on the consent those platforms have obtained to target people with their marketing.

You could argue people know what they’re signing up for when they hand over their information. After all, they agreed to a policy. You could even say that, as an advertiser, it isn’t a business’s responsibility: it’s down to the platform to take responsibility for the data it collects. Both of these points are valid. But you could look at it another way. How would the people you’re trying to engage react if they knew why they had been targeted by an organisation they believe is legitimate? Would they still say it was legitimate, and would they feel they had consented to it?

Going back to the Edelman Trust Barometer, its recent study looked at brands and social media. Some conclusions from the 9,000 respondents surveyed across nine countries show how consumers feel about brands’ responsibilities:

  • 48 percent say it’s a brand’s own fault if its advertising appears next to hate speech, violent or sexually inappropriate content;
  • 47 percent believe that points of view that appear near a brand’s advertising and marketing are an indication of that brand’s values.

Taking that even further, common retargeting practices also didn’t get a particularly positive response:

  • 54 percent are uncomfortable with marketers tracking in-store purchases for targeting purposes;
  • 39 percent say it should be illegal for a brand to buy personal information from another company the consumer does business with;
  • 49 percent say they are not willing to sacrifice some of their data privacy in return for a more personalized shopping experience.

It’s also worth mentioning that earlier this year we launched our Wild Wild West campaign to petition the UK government for independent regulation of social media networks. After sexual communication with a child became a criminal offence, following our Flaw in the Law campaign, police reports found that social media was used in over half of reported cases in which a child had been contacted. Since launching the campaign we have seen an enormous response from the public, and as a result the Government has committed to creating new online safety laws. It is encouraging that there is finally movement on this long-standing issue, which will genuinely help protect young people online.

So not only is there precedent for the public to push the organisations they share information with to be more transparent about how they use data; based on the responses to our campaigns alone, there is also an opportunity for businesses to lead by example and benefit from the trust they build. It will be interesting to see how marketers tackle this new challenge, and whether the story of trust in digital can be turned in favour of organisations willing to support the data protection values their customers and supporters want from them.

What do you think? Where does the responsibility for data protection end in advertising and research? How can Facebook regain public trust? Comment below!



We're the NSPCC Digital team writing and reflecting about what we're up to and what we're learning from. Follow us on here and on Twitter @theDigitalDunk