If you’re anything like me, your main point of reference for Artificial Intelligence is a string of sci-fi movies; your immediate mental image, when trying to grasp AI, is therefore likely a slew of malfunctioning, evil robots bent on erasing humankind while adopting ever more human traits themselves. But for Casper Wilstrup, CTO at Blackwood Seven, AI is a far more benign and profitable part of everyday life. So, if not evil robots, what’s it all about?
Turns out, I wasn’t far off in terms of machines adopting human traits …
ALS: So … how did your interest in AI begin?
CW: I have a background in physics and previously spent many years in IT building adaptive systems, among other things. As for AI, it’s an interest I’ve developed over the past five years.
ALS: And when did the general notion of AI begin?
CW: During the Industrial Revolution, we became able to build machines capable of acting in a predictable environment. A very early example was the railway system, which enabled a train to travel from A to B because that is a completely predictable task. That’s all the machine had to do. Automation brought us to where we are today: an increasing number of machines perform in a very advanced manner, but still within a completely deterministic frame. So, when we speak of a factory robot left to its own devices, or a spaceship flying to the moon entirely without human interaction, these are just pre-programmed systems designed to perform certain tasks in line with the prerequisites the programmer or system designer knew of when building them. That works in certain scenarios, and then there are others where that sort of approach doesn’t work. In that respect, we’re still mostly where we were when the Industrial Revolution began: we’re still struggling to create systems capable of solving tasks that can’t be solved with a pre-programmed system.
ALS: What sort of task could that be?
CW: A typical example is driving a car. Sending a rocket to the moon is, perhaps, technically more complicated, but it’s a pre-programmed task: once the rocket is launched, you can predict both what will happen and the correct response. Those of us who work with AI are only now learning to build machines that can act in an unpredictable environment.
ALS: Which is how you define AI?
CW: Artificial Intelligence means building machines capable of navigating an environment and acting sensibly in situations they have not previously encountered. There is no acknowledged definition of AI, or of intelligence for that matter, but for me it’s about agency. There’s an old debate in Philosophy of Mind: what is the difference between consciousness, free will, and agency? Of those three, I connect intelligence with agency, the ability to act. So, when I speak of Artificial Intelligence, I speak of something with agency. My definition of AI aligns with what is commonly known as Intelligent Agents. To some, AI is about building a simulated brain, something capable of learning from data; ‘deep learning’ is the leading technology here. While deep learning is groundbreaking, I don’t see it that narrowly. I view intelligence as something that, by definition, is embodied. And that’s an important term to me: Artificial Intelligence is an embodied intelligence. A computer program capable of acting in a virtual world may not have a physical body, but it does have a virtual body, and it has the ability to act within the environment in which it exists. So, when I speak of AI, I’m always referring to Intelligent Agents. I don’t view anything as intelligent if it doesn’t have the ability to act and learn from the outcome. In an unpredictable world, and any environment where AI is relevant is unpredictable, the ability to act is key.
ALS: And that uncertainty must make it virtually impossible to take everything into consideration when trying to predict what might happen?
CW: You can’t rely on any rules, so you have to make a machine with the ability to observe its environment and learn from those observations. In machine learning, we sometimes speak of ‘unsupervised learning’. This means there is no other intelligence present, guiding the machine when the situation occurs. Just as I’m not present to guide my youngest daughter when she’s crossing the street and encounters an obstacle; she finds a solution on her own. When making Artificial Intelligence, you have to make a machine capable of abstracting from a specific situation to a general pattern, of deciding what type of strategy to draw on, and of adjusting accordingly to solve the problem. So, you can’t tell the machine how to react. It has to do it on its own. The machine changes its own behavior according to what it experiences and senses.
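The observe-act-learn loop Wilstrup describes can be sketched in a few lines. This is a minimal, hypothetical illustration (not Blackwood Seven’s system): the agent is never told the right action; it adjusts its own behavior purely from the outcomes it observes.

```python
import random

class AdaptiveAgent:
    """An agent that learns which action pays off by acting and observing."""

    def __init__(self, actions, explore_rate=0.1):
        self.actions = list(actions)
        self.explore_rate = explore_rate              # how often to try something new
        self.value = {a: 0.0 for a in self.actions}   # learned estimate per action
        self.count = {a: 0 for a in self.actions}

    def act(self):
        # Occasionally explore; otherwise exploit the best-known action.
        if random.random() < self.explore_rate:
            return random.choice(self.actions)
        return max(self.actions, key=self.value.get)

    def learn(self, action, outcome):
        # Update a running average for the action actually taken.
        self.count[action] += 1
        self.value[action] += (outcome - self.value[action]) / self.count[action]

# Toy environment (an assumption for illustration): action "b" pays off
# more on average than action "a", but the agent is never told this.
def environment(action):
    return random.gauss(1.0 if action == "b" else 0.2, 0.1)

random.seed(0)
agent = AdaptiveAgent(["a", "b"])
for _ in range(500):
    chosen = agent.act()
    agent.learn(chosen, environment(chosen))

print(agent.value["b"] > agent.value["a"])  # the agent discovers "b" on its own
```

No rule ever says “prefer b”; the preference emerges from the loop itself, which is the point Wilstrup is making about behavior changing with experience.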
ALS: Which is what separates the machine from us — the fact that Man is adaptive by design.
CW: Yes. Humans are a primary example of an adaptive system. When making Artificial Intelligence, you try to replicate some of the methods that enable a system to act sensibly in an unknown situation.
ALS: And how do we ensure that the machine acts sensibly, thereby preventing a negative outcome?
CW: The keyword is ‘purpose’. To program an AI to do anything sensible, you need to program it according to a purpose, because you can’t program the method. So, what is the machine meant to find out on its own? When I send my daughter to the supermarket, that’s the task she is meant to solve. She’ll find a way to cross the street even if there are obstacles, and may even end up choosing a different supermarket. She can abstract from the procedural element of going to the shops, because she knows the purpose of the trip: to get butter.
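The “program the purpose, not the method” idea can be made concrete with a classic search sketch. In this hypothetical example, the programmer specifies only the goal (reach the shop); the route, including the detour around obstacles, is worked out by the machine itself, much like the daughter sent for butter.

```python
from collections import deque

# Hypothetical grid: S = start, G = goal (the shop), # = obstacle.
GRID = [
    "S.#..",
    ".#...",
    "...#.",
    ".#..G",
]

def find(ch):
    """Locate a character in the grid."""
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

def shortest_path(start, goal):
    """Breadth-first search: only the goal is given; the route is discovered."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # the purpose is unreachable from here

path = shortest_path(find("S"), find("G"))
print(len(path) - 1)  # prints 7: steps taken, routed around the obstacles
```

Nothing in the code describes the route; change the obstacles and the machine finds a different one, because only the purpose is fixed.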
ALS: Speaking of purpose … what is the exact purpose of AI at Blackwood Seven?
CW: We produce a virtual robot capable of observing key data, from which it forms an opinion and a strategy. It runs in a loop, observing its environment and analyzing those observations, which allows it to improve its strategy over time and observe its own actions. It can form a strategy for how sales change according to, for instance, a client’s TV spend. At Blackwood Seven, we model an ‘interdependent response curve’. The curve has a generic S-shape: you spend a certain amount of money, and at some stage it begins to work; then you reach a slope where things are quite linear, the more I spend, the more it works; and then, at some point, it begins to bend. And it’s interdependent, because the curve itself is linked to how much we spend on display advertising, among other things. We think there’s something called synergy: the more you spend on TV, the more efficient the money you spend on display becomes. The two are quite simply connected. We assume a certain shape and then basically ask the machine to find out what that shape looks like for a particular client, thereby drawing a relative response curve for TV, given a specific display spend.
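The interdependent S-shaped response curve can be sketched with a standard Hill-type saturation curve plus a synergy term. The functional form and every parameter below are illustrative assumptions, not Blackwood Seven’s actual model; in their system, the machine fits the shape per client.

```python
def tv_response(tv_spend, display_spend,
                ceiling=100.0,     # saturation level of the curve (assumed)
                half_point=50.0,   # spend at which half the effect is reached (assumed)
                steepness=3.0,     # how sharp the S-bend is (assumed)
                synergy=0.004):    # lift per unit of display spend (assumed)
    # Hill curve: near zero at low spend, roughly linear on the mid-slope,
    # then bending toward saturation -- the generic S-shape described above.
    base = ceiling * tv_spend**steepness / (tv_spend**steepness + half_point**steepness)
    # Synergy term: the same TV money works harder when display spend is higher.
    return base * (1.0 + synergy * display_spend)

# Drawing the relative response curve for TV, given a fixed display spend:
for tv in (10, 50, 100, 200):
    print(tv, round(tv_response(tv, display_spend=100.0), 1))
```

The interdependence shows up in the last argument: re-running the loop with a different `display_spend` shifts the whole TV curve, which is exactly the coupling between the two channels that the synergy claim describes.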
ALS: Do you experience any resistance towards the use of AI for marketing spend? And if so, is it perhaps well-founded? One might argue that a certain reluctance towards a machine making important decisions is justified.
CW: Letting our robot, our AI, make decisions outside familiar territory is groundbreaking. Here’s the thing: clients have grown accustomed to a 21-year-old media planner doing the work and attributing it to so-called “experience”. It’s harder to accept, at least more controversial, that a machine is doing the work. But consider this: the machine has seen thousands of campaigns; it has seen the effects of plenty of advertisers’ TV ads; it can observe the result. How many 21-year-old media planners can tell whether the client’s TV spend actually works? Our machine can. And as for how bold the machine is allowed to be, that’s all a matter of calibration within the AI. The same goes for a human being.
ALS: What does the future hold for AI? Will AI be an increasingly present part of marketing in 10 years?
CW: We haven’t really covered the subject of creativity, and in that respect, it will take much longer than 10 years until any AI can do the creative work in advertising. When I retire, it will still be humans figuring out how to create something that resonates with people. Our models can’t handle that sort of thing; they can’t detect whether adjusting the creative input slightly will make the ad go viral. But I do think that anything data-driven, such as increasing and decreasing media spend, will be controlled by AI in 10 years. The machines can survey thousands of data series and learn from thousands of previous campaigns, and the 21-year-old media planner simply can’t. The machines are quite simply much better at it.