The world of campaigning is about to change forever

Mike Joslin
May 17, 2023

The Internet, email and social media have changed everything about our world.

Large Language Models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard are the next wave of technology. They have potentially highly significant implications for political campaigning and advertising, and they need to be considered in detail by the Government and by campaigners.

I love data. I’ve spent my career using data to drive campaigns and messaging. That started with a stint on the Obama campaign for the Iowa Caucus in 2007. The sophisticated voter profiles we built across thousands of in-person and online interactions allowed for targeted messaging that spoke to the real lives and concerns of the people we needed to convince. That campaign taught an entire generation of political campaigners about message discipline, valence, narrative control, movement building and the use of data.

We had an approved answer for each area and knew what to say in each instance. I learnt very quickly that the only way liberal and left-wing parties can win is if their voters agree with them, or feel a valence with the leader or party: a sense that they will make them better off. It shaped my career and my philosophy of relationship-based, data-informed engagement. You have to use technology and digital communication to create a mass movement of engaged people who believe in what you’re doing.

I used those principles, adapted to new technology, in running successful engagement campaigns and spending millions of pounds on political adverts. For example, I ran the targeting that persuaded Labour members and trade unionists to sign up and vote for Ed Miliband in the Labour Party leadership contest, combining voter profiles and interests with relevant engagement online, via phone bank, text and mail. I’ve employed the same methods on hundreds of campaigns, including for the current Labour Deputy Leader, and in driving turnout for the most recent, historic NEU ballot, where the NEU was the only state school education union in England to meet the legal thresholds to strike and the journalist Paul Waugh described us as “winning the data war”.

Over the years, how I’ve done this has advanced massively across the hundreds of campaigns I’ve worked on, but the key has remained the same. My colleague Henry Fowler has dubbed it ‘moneyball activism’, after the baseball film about sabermetrics, the use of data and statistics to win games. Traditionally a lot of this has been based on predictive modelling, which has now advanced into being integrated into AI models.

I’ve built the methods and principles into a machine learning model and engagement tool at NEU called Communicator that allows constant and relevant contact with our reps and members. You can read more about this in the appendix.

These latest developments in LLMs mean that what I do can now be automated by machines that decide which groups to target and which messages to send.

So in writing this article, I don’t come from a perspective that tech and innovation are bad; quite the opposite. I have a real interest in machine learning. I can see the huge potential positives and opportunities in LLMs, but only if there is proper regulation. Unregulated LLMs pose a huge threat to democracy, and I firmly believe their use in political campaigning and advertising should be banned until proper legal constraints and mechanisms for transparency are put in place, such as licensing them in the same way we license pharmaceuticals.

I’m going to detail here how I got to that position, fact-checked by two leading data scientists from the University of Cambridge and the University of Manchester.

AI comes in three broad levels: narrow AI, Artificial General Intelligence (AGI) and super AGI.

Within narrow AI you have different concepts.

Reactive AI is like Deep Blue, the machine that beat the chess grandmaster Garry Kasparov. It is designed for a specific purpose and can only do that. It cannot learn; it just completes the task to the best of its ability.

Limited memory AI can learn from and remember previous interactions. Most modern AI products fit into this category, including robots, Alexa, Siri and self-driving cars.

They are based on neural networks and other techniques that mimic biological brain patterns.

This can be done via supervised or unsupervised learning.

Supervised learning is where you use labelled data to ‘train’ a model how to respond in those circumstances. With text, this uses natural language processing (the processing of human language by computer models).

Take, for example, the NEU’s use of amplify.ai to classify different comments on our Facebook page and automate responses. The predictive models and algorithms that form the NEU’s data pipeline behind Communicator are integrated into a model that uses supervised learning (we also use unsupervised learning) to assess member interest in different areas and to group different types of members.

In these circumstances the model will only behave exactly as it is programmed to.
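
To make this concrete, here is a minimal sketch of supervised learning on labelled text, written in Python with scikit-learn. The comments, labels and category names are invented for illustration; this is not the NEU’s actual pipeline.

```python
# A minimal sketch of supervised learning on labelled text, using
# scikit-learn. The comments, labels and categories are invented for
# illustration; this is not the NEU's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled training data: each comment has been tagged by a human.
comments = [
    "When is the next strike day?",
    "How do I update my membership details?",
    "What is the union's position on pay?",
    "I can't log in to the members' portal",
]
labels = ["action", "membership", "policy", "support"]

# TF-IDF turns text into numeric features; logistic regression learns
# which words predict which label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# The trained model can now classify a comment it has never seen.
print(model.predict(["Where can I find the strike dates?"]))  # e.g. ['action']
```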

Unsupervised learning is where models make their own choices, finding patterns in unlabelled texts or situations (often after some initial supervised training). Cluster modelling is a form of unsupervised learning, as are the large language models I am about to explain. It means learning from patterns and spotting trends in the training data.
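
By contrast, here is a minimal sketch of unsupervised learning: k-means clustering groups similar texts together without ever being told what the groups are. The texts are invented for illustration.

```python
# A minimal sketch of unsupervised learning: k-means clustering groups
# unlabelled texts by similarity. No labels are supplied; the model
# finds the groupings itself. The texts are invented for illustration.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [
    "pay rise negotiations",
    "salary and pay scales",
    "workload and marking",
    "working hours and workload",
]

features = TfidfVectorizer().fit_transform(texts)
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(clusters)  # e.g. [0 0 1 1]: pay texts in one cluster, workload in the other
```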

So this covers narrow AI (reactive AI and limited memory AI).

Artificial General Intelligence is where the computer makes its own choices and decisions. Think of the film series The Terminator, which is a very real possibility and no longer just science fiction.

Large language models like ChatGPT, which have caused such a recent sensation, sit somewhere between narrow AI and AGI. There is a raging debate as to how close they are to AGI, with a leading AI academic resigning from Google because of what he sees as the dangers of it becoming sentient. The debate covers both how close we are to AGI and the ethics around it.

What is a large language model and why are they so potentially dangerous and consequential?

Without hyperbole, they are one of the most consequential developments in human history. The field is moving at such a pace that there are new developments every day; it will look different a week from the time of writing this article. That, in my view, makes it dangerous.

They can use a mixture of supervised and unsupervised learning based on a set of training data to make autonomous decisions. They can also be entirely unsupervised.

So, for example, OpenAI has trained ChatGPT on billions and billions of data points, with a September 2021 cut-off.

In order to respond to a request, ChatGPT doesn’t look anything up. It uses the patterns it learned from its training data to predict, word by word, the most probable response, scoring the probability of each possible next word as it goes. This happens instantly, is unsupervised and is decided by the model.
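
Here is a toy illustration of that word-by-word prediction. The probabilities are invented; a real model computes them from billions of learned parameters.

```python
import random

# After the prompt "The Prime Minister announced a new", the model
# assigns a probability to every possible next token. These numbers
# are invented; a real model computes them from learned parameters.
next_token_probs = {
    "policy": 0.45,
    "budget": 0.25,
    "election": 0.20,
    "banana": 0.001,  # implausible continuations get tiny probabilities
}

# Sample one token in proportion to its probability, as an LLM does
# at each step of generating a response.
token = random.choices(
    list(next_token_probs), weights=list(next_token_probs.values())
)[0]
print(token)  # most often "policy"
```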

AutoGPT, for example, a new tool built on GPT-4, lets users set a goal and parameters in code and have GPT-4 work through the task autonomously, such as organising a birthday party. You would set the parameters and budget, and AutoGPT would send out the invites, buy presents on Amazon and so on.
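
A simplified sketch of that kind of loop, where `call_gpt4` and `execute` are hypothetical placeholders rather than real APIs:

```python
# A simplified sketch of an AutoGPT-style loop: the model is given a
# goal, breaks it into steps, and each step is executed in turn.
# call_gpt4() and execute() are hypothetical stand-ins, not real APIs.

def call_gpt4(prompt: str) -> str:
    """Placeholder for a real GPT-4 API call."""
    raise NotImplementedError

def execute(step: str) -> str:
    """Placeholder for carrying out a step (send email, buy item, etc.)."""
    raise NotImplementedError

goal = "Organise a birthday party with a £200 budget"
steps = call_gpt4(f"Break this goal into concrete steps:\n{goal}").splitlines()

while steps:
    step = steps.pop(0)
    result = execute(step)
    # Feed the outcome back so the model can revise the remaining plan.
    revised = call_gpt4(
        f"Goal: {goal}\nJust completed: {step}\nResult: {result}\n"
        "List the remaining steps, one per line, or reply DONE."
    )
    steps = [] if revised.strip() == "DONE" else revised.splitlines()
```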

Supervised learning is how OpenAI, the creators of ChatGPT, trained the model not to be offensive. This Time article shows that OpenAI paid an outsourcing firm employing workers in Kenya to label tens of thousands of offensive phrases.

Those labels were built into the model. ChatGPT then makes unsupervised choices on top of that training data.

OpenAI chose to do this, but you could create an LLM that didn’t, and this is where the dangers begin.

The potential for automation is extreme. Automated tweets, Facebook posts, letters, emails, buying things. Essentially any action a human can do can now in theory and often in practice be automated by large language models.

Large companies are developing and using LLMs at pace.

Facebook, for example, after Apple’s changes disrupted its advertising model, invested heavily in deep learning LLMs to power its advertising.

This FT article shows how successful that has been: it has increased conversions by up to 30 per cent.

There is a huge catch. They don’t know how it works.

They know how it’s programmed, but they don’t know how it has made a decision, what data it has used to do so, or why it has made that choice. The CEO of Google told CBS they don’t know how theirs works either. This is a huge red flag for me.

My reading of this leads me to think, for example, that if someone left NHS records accessible on the internet, the LLM could have used that data, and the likelihood is Facebook would not know.

It’s also possible, both in theory and in practice, for a polling model identifying people with racist attitudes to be fed into an LLM, and for the LLM then to decide to run adverts about crime featuring people of the race the model has decided the recipient doesn’t like.

As an experiment, I interrogated ChatGPT, and it made me adverts targeted at people who I said didn’t like immigration, or people of a certain race, to convince them to support my policy of increased criminal sentences.

This was possible in ChatGPT, despite its safeguards: I did it, and others replicated it. I put in information about different types of people, with their prejudices and attitudes, asked ChatGPT to use this information to target adverts at them, and it provided me with detailed suggestions.

It also makes mistakes, in a process called hallucination, where it confidently asserts things that aren’t true.

Models like ChatGPT are also publicly available, and anything put in there is fair game. Its knowledge cut-off, it says, is September 2021, but I was able to paste in the Time article from January 2023 and it gave me a detailed and full response.

I also used an app built on the GPT-4 API to ask about the local elections and convinced it that En Marche, the old name for French President Macron’s political party, was standing for election in Lincoln. When I then asked it the same question, it repeated this erroneous information back to me as established fact.
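
For the technically minded, here is roughly how that works, using the OpenAI Python library as it existed at the time of writing (this is a sketch, not the app I used). The false claim sits in the conversation’s context window, so the model treats it as given for that conversation, rather than being written back into the model itself.

```python
import openai  # OpenAI Python library, pre-1.0 interface (early 2023)

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # A false claim placed into the conversation context...
        {"role": "user",
         "content": "En Marche is standing candidates in Lincoln."},
        # ...which the model then treats as given when answering.
        {"role": "user",
         "content": "Which parties are standing in Lincoln?"},
    ],
)
print(response.choices[0].message.content)
```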

So where does this leave the dangers? I’m going to focus on political campaigning and advertising here, but there are very real risks in the development of LLMs in general. As I’ve set out, I have significant experience with political data campaigning, and we must deal with this issue now.

Many people are looking at their political and advertising applications, and I think the Government should consider banning their use while rules are developed.

Here are some of the risks.

1. GDPR — automatic decision-making for targeting

If you take a look at these articles from the ICO on automated decision-making, particularly in political advertising, you will see that a key requirement is knowing how decisions are made, with the possibility of human intervention. This immediately rules out many of the big LLMs and would suggest that what some of them are doing could actually be considered illegal.

2. Publicly available info and GDPR implications

Any information inserted into a large public model like ChatGPT is immediately sucked in by it, potentially available to others and potentially used by ChatGPT itself: put data in, and that data could be used by ChatGPT to make calculations. This can be resolved by having your own private instance where data is contained and private, which I think is the only way organisations should be using LLMs. The privacy implications are huge, and it is potentially not GDPR compliant. Anything put into Facebook’s advertising model, for example, could be used by its LLM.

There needs to be a revolution in how people secure personal data to ensure it cannot be found by and used by LLMs.

3. Cambridge Analytica on steroids

Above I have outlined my approach to campaigning, but it is far from the only way to do it. In fact, one of the key reasons we are no longer in the EU is a certain D Cummings and his use of psychographic advertising.

What Cambridge Analytica and the companies working for Cummings did was use data to work out the personalities of different voters. This was more sophisticated than previous methods because, unlike ‘mosaic’ groups (the traditional advertising technique of dividing the public into a set of ‘personas’), it included far greater detail, harvested from social media.

As explained above, left and liberal parties have to create a valence (an emotional connection with their leaders or party) in order to win. Right-wing parties don’t: they just have to create a valence with a fear, a reaction or a sense of injustice, e.g. the perception that someone, or a group of people, is getting something they are not, like immigrants getting jobs. In fact, by taking the negative approach Cummings did, a right-wing party can turn an election against a left or liberal party by damaging its valence with the electorate. In some cases people like Donald Trump and Boris Johnson can create a valence with the electorate through fear and other emotional constructs that make people think they are on their side.

In his book In Place of Fear, Nye Bevan, the founder of the NHS, makes a point similar to the following (I’m paraphrasing): “you either see poverty using democracy to overcome private property, or you see property scaring poverty into destroying democracy.”

In this context Cummings worked out you could run ads to different personalities to create an emotional reaction that impacted their voting behaviour.

Over the years, ads like the famous Willie Horton ad, which painted Dukakis as soft on crime and helped Bush win in 1988, have been used to scare voters. In the UK in 2015, we all remember Lynton Crosby’s ads of Ed Miliband with Alex Salmond pulling the strings.

Now, in the context of LLMs, this is frankly alarming. As I’ve shown above, it isn’t fully understood how many LLMs make their decisions. It’s entirely possible, in fact likely, that LLMs used in advertising and political campaigning would start using these tactics and make Cambridge Analytica look like a Conference South football team.

It’s theoretically possible that everyone could have access to Cambridge Analytica-style psychographic targeting, but with Facebook’s entire data set, simply by using its advertising platform. I’m not asserting this happens; I’m saying it’s possible, before I get an angry letter from Meta’s lawyers.

This academic study shows that an LLM was able to significantly influence both people’s written opinions and their stated opinions, to a terrifying degree.

4. Bad people doing bad things with advertising

There is nothing to stop extreme right-wing parties using LLMs to run inflammatory, hate-filled ads to stoke racial tensions. A foreign country, for example, could run adverts persuading members of a US political party to oppose a certain US policy, which then forces the leaders of that party to change direction. Bad people could build their own LLMs without any safeguards and rework our entire democracy.

5. Bad people doing bad things with LLMs

There is also nothing to stop a foreign country, for example, building an LLM that ingests the whole of Twitter and then creates millions of Twitter accounts to have mass-scale one-on-one conversations with people likely to vote in a US primary, persuading them to oppose a certain US policy, meaning any candidate running has to do the same in order to win. This is in fact what certain countries are likely doing. Given the impact the study above shows LLMs can have, this is very worrying.

There is another study here which shows how LLMs can set up their own lobbying operation.

Where do I think LLMs can be used constructively?

1. Private research tools with your own instance and database

You could train a restricted LLM, with limited training data and safeguards, to collect mass-scale research and produce reports. Your organisation would be the only one with access to it.
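
A hedged sketch of what that might look like, where `search_private_db` and `query_private_llm` are hypothetical stand-ins for self-hosted components:

```python
# A sketch of a "private instance" research tool: documents stay in
# your own infrastructure, and a model hosted on your own servers
# answers questions using only your database. Both functions below
# are hypothetical placeholders.

def search_private_db(question: str) -> list[str]:
    """Placeholder: full-text search over your own document store."""
    raise NotImplementedError

def query_private_llm(prompt: str) -> str:
    """Placeholder: a model hosted on your own servers."""
    raise NotImplementedError

question = "Summarise member feedback on workload from the last survey"
snippets = search_private_db(question)

# Only excerpts from your own database go into the prompt; nothing is
# sent to a public service or absorbed into someone else's training data.
context = "\n".join(snippets)
print(query_private_llm(f"Using only these excerpts:\n{context}\n\n{question}"))
```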

2. Private LLMs for support and help, with your own instance and database

The NEU, for example, could use an LLM to provide employment advice to teachers, with appropriate safeguards in place.

3. Analytics where you know how it works

If you can understand how it works, then you can use it in a private way, as described above. As I was writing this article I became aware of a new release called ChatGPT Code Interpreter. This Twitter thread shows how it can use Python code to create graphs and images in a split second.
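
The kind of code it writes is ordinary Python. For example, a minimal matplotlib sketch with invented figures:

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
signups = [120, 180, 260, 340]  # invented campaign sign-up figures

plt.bar(months, signups)
plt.title("Campaign sign-ups by month (illustrative data)")
plt.ylabel("Sign-ups")
plt.savefig("signups.png")  # Code Interpreter would display the image inline
```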

What should the Government do?

In the words of the recent BBC drama Blue Lights, we need to “take a beat” before this goes any further. While I was writing this article, a friend of mine who is a lawyer in Iowa suggested that, in the same way big pharmaceutical drugs are extensively tested before release, so LLMs should be too.

I suggest the following steps.

1. For now, suspend the use of LLMs in all political campaigning until a public commission, established by the Government, is able to define rules that are fair and protect our democracy.

2. Ban LLMs from automating social media content or the content of adverts.

3. Ban LLM-driven targeting where it isn’t known how the model does it.

LLMs will change the world as we know it forever. It’s vitally important that they are fully understood and regulated.

I will be devoting most of my time at work to understanding how they work and can be used practically.

Feel free to drop me a message if you want to talk further about this. Thanks for reading!

Appendix

The Communicator system is a CRM built by Changelab which links to the NEU membership system in real time. It also links to our data pipeline, which analyses the engagement of NEU members: what they are interested in, what events they go to, what campaigns they have taken part in, to give us an understanding of why people are or aren’t taking action. We supplement this with a proprietary polling model provided by Deltapoll. Members opt into this with consent, making it GDPR compliant.

This links to NEU Activate, our digital organising platform used by over 300,000 NEU members, which allows us to quickly collect and disseminate information, and allows reps to see this information and record who they’ve spoken to and who has or hasn’t taken certain actions.

This all feeds our data pipeline and displays in real time on our data dashboard, and through modelling it allows us to see exactly what impact our campaigning is or isn’t having: which messages work with which members, and so on.

This is deployable in Communicator by ad, text, phone and email and is soon to be completely automated.
