We’re living in a new age of discovery and enlightenment. It’s hard to believe given the political climate in the world, but I’m an optimist. One reason is that technological innovation is driving significant change around the world. According to World Bank statistics, the number of global internet users per 100 people grew from 15.789 in 2005 to 43.998 in 2015. That’s nearly triple the number of global internet users in ten years! The World Bank’s country breakdown shows that most of this growth comes from countries in the Global South. A considerable number of these countries are politically fragile, with many of their citizens emerging from extreme poverty, but these emerging markets are full of people who will interact with brands in different and novel ways in the years to come. In the Global North, most of us are privileged enough that our struggles are not with the necessities of life. Instead, we have the ability and capital to focus on developments with the potential to change the whole world for the better. Innovations such as artificial intelligence (AI) and machine learning are developing at rates that were previously unthinkable.

In 1950, British computer scientist Alan Turing, now made famous by Benedict Cumberbatch’s portrayal in The Imitation Game, devised a concept known as the Turing Test: we will have achieved true AI when a machine’s side of a conversation is indistinguishable from a human’s. We have not yet reached that point, though we have been trying for a while. The term artificial intelligence was coined by John McCarthy in 1956 to describe the ability of machines to think. Machines have been capable of processing logic for quite some time: we can program them to understand “if this then that” statements, and they will happily process away within the parameters given to them.
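The “if this then that” processing described above can be sketched as a few lines of code. This is a generic illustration, not any specific system; the rules and thresholds below are made up for the example.

```python
# A minimal rule-based system: the machine evaluates fixed
# "if this then that" rules, but has no understanding beyond
# the parameters we hand it.

def classify_temperature(celsius: float) -> str:
    """Label a temperature using hand-written rules (thresholds are arbitrary)."""
    if celsius < 0:
        return "freezing"
    elif celsius < 15:
        return "cold"
    elif celsius < 25:
        return "mild"
    else:
        return "hot"

for t in (-5, 10, 20, 30):
    print(t, classify_temperature(t))
```

The machine follows these rules flawlessly and tirelessly, but it never decides that the thresholds themselves should change; that gap is what the rest of this piece is about.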

Source: Allianz Global Investors

Logic and thinking are not the same thing, though. In the United States, artificial intelligence research has gone through a couple of “winters”, and these appear to coincide with economic winters in the US. The first AI winter, from 1974 to 1980, coincides directly with the first and second oil crises (page 6). The second, from 1988 to 1993, came during a recession and a U.S. presidential election. In times of economic hardship, governments often cut funding to scientific endeavours and funnel it into areas of immediate growth. Since 1993, to use an Olympic metaphor, private companies have picked up the baton where government handed it off.

Sci-fi has been toying with the idea of AI for a long time. In I, Robot, Isaac Asimov devised the Three Laws of Robotics to govern how robots interact with humans, which shows that in Asimov’s future, robots have transcended mere logic and can think. We have not reached that point yet, but science fiction is becoming reality. For AI developers, roboethics, the ethics of AI and machine ethics are all real considerations because of the reality of AI in today’s world. AI is having its time in the spotlight: hardly a day goes by when you can visit Wired’s homepage and not read an article about it (not that this is a bad thing)! Right now, several AI technologies and players are profoundly shaping the field. IBM’s Watson was introduced to the world in 2011 as a contestant on the quiz show Jeopardy!, where it handily beat the two best human contestants the game had ever seen. Watson is growing up, and growing up fast: since 2011 it has been learning as much as it can, and most recently worked with a producer to help write a song. Watson isn’t the only AI in the game, though. Google-owned DeepMind has a few tricks up its sleeve too. In March 2016, AlphaGo, built on DeepMind technology, became the first machine to beat a top professional at the complicated board game Go, a game that is all about intuition and intellect. Most of us know Elon Musk for his sleek, electric-powered Tesla vehicles, but he is also in the AI game with OpenAI, whose mission is to build safe AI and ensure AI’s benefits are as widely and evenly distributed as possible. To achieve this, OpenAI has partnered with companies like Microsoft to bring more robust AI research into the world.

So, that’s where we are with AI in general. Within advertising, campaigns are becoming more specific and targeted as advertisers use technology to develop a better understanding of people’s preferences and habits. Previously, advertisers relied largely on television to convey their messages. Under frequency theory, engagement was not counted until a person had seen an advertisement three times; it was later supplanted by recency theory, which held that the most recent advertisement is where engagement and conversion begin. In an increasingly digital and digitally fragmented world, advertisers need to be more creative to reach their targets. And even once they have found them, people are becoming savvy about advertising and do not want to engage with the ads presented to them. According to a recent eMarketer survey, over 70% of US adults try to skip an ad as soon as they can.

Additionally, consumers appear tired of the ads bombarding them on every platform; they find ads disruptive and actively tune them out. Yet as the success of the Gatorade Super Bowl Dunk Snapchat filter shows, consumers are willing to interact with a brand when it is done properly: the content is relevant, and the brand is where the consumer is, with what they need, when they need it. To remain relevant and successful, brands need to understand exactly that.

How did we get here, and what’s keeping us where we are? For one, there’s Moore’s Law, the observation that processing power doubles roughly every two years, and it is reaching the end of its lifetime. Soon there will be nothing left to double, and we will need new ways of processing: more complex, more human ways. The human mind can make immensely intricate decisions; machines can calculate faster and never tire, but their decision-making ability is still a work in progress. There are also many legal obligations and ramifications to the power of AI. We are currently struggling with aspects of privacy, especially where our phones are concerned, and governments and private interests are in a tussle with each other over consumer data. Some countries, like Germany, have strong, strict privacy laws that favour consumers, while in many countries of the Global South privacy protections are lax. And after years of Hollywood movies depicting robots as terminators here to kill us all, there is a lingering fear of the unknown and of the robots taking over.

One of the areas in which AI is excelling is healthcare. In the UK, DeepMind recently partnered with the National Health Service (NHS), the UK-wide public health system, and received access to healthcare data for over 1.6 million patients. The news was met with concern from some who feared for the data privacy of those patients and of the millions of others in the NHS system. The data shared with DeepMind is specifically for patients with kidney disease, as part of a pilot project; DeepMind will have access to it through a five-year trial. Technological progress is giving healthcare professionals an edge in treating their patients: according to news sources, this partnership will significantly lessen their paperwork load. The BBC reports that as early as next year, doctors at selected hospitals will be “using a mobile app called Streams, which will initially alert clinicians to patients with signs of acute kidney injury at its earliest stages”. AI is giving doctors the ability to understand their patients’ symptoms and begin diagnosing potential kidney issues significantly earlier than has ever been possible. With the data it has access to, DeepMind can help improve health outcomes by analyzing it, finding patterns and informing doctors of those patterns, leading to better medical treatments not just for kidney patients but for all patients. This matters all the more because, due to demographic change, especially in the countries of the Global North, populations are aging quickly and diseases of aging will become more prevalent.

As mentioned before, one of the most recent ways the Watson AI got to work was helping Grammy-winning producer Alex Da Kid write a song. It is important to note that Watson did not pen the song by itself. IBM says Alex Da Kid was looking for a deeper connection with his audience; he wanted to create something emotionally appealing, and that’s where Watson came in. For the project, Watson analyzed over “five years of natural language texts including New York Times front pages, Supreme Court rulings, Getty Museum statements, the most edited Wikipedia articles, popular movie synopses and more”. By analyzing these texts, Watson began to learn about the most important cultural themes. Two Watson technologies were used for the songwriting process: AlchemyLanguage and Tone Analyzer. IBM describes Tone Analyzer as a technology that “uses linguistic analysis to detect three types of tones from text: emotion, social tendencies, and language style”. AlchemyLanguage is a series of Watson APIs that conduct text analysis through natural language processing; IBM states that it “can analyze text and help you to understand its sentiment, keywords, entities, high-level concepts and more”. Language is a crucial aspect of culture: culture is the way a society transmits its beliefs about itself, and language is integral to framing those beliefs. Globalization is contributing to greater cultural flows, leading to increased deculturation, or loss of culture. AI can already analyze language and find patterns that describe and capture the emotions, desires and mindset of a culture at a particular time. Alex Da Kid wanted to create an emotional song, and the texts Watson analyzed and chose evoke a certain sadness in the production of Not Easy. Fast Company went as far as describing the song as “emo”, a term usually reserved for the songs angsty pre-teens and teenagers listen to when they’re upset at their parents.
What this indicates, perhaps, is Watson’s current ability to detect emotional patterns, and a future ability to properly comprehend what those patterns mean.
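Watson’s actual tone-analysis models are not shown here, but the general technique of scoring a text for emotional tone can be sketched with a toy, keyword-based scorer. The lexicon below is an assumption invented for illustration; real systems like Tone Analyzer use trained linguistic models, not word lists.

```python
# Toy tone scorer: rates a text by the fraction of emotionally charged
# words it contains. Purely illustrative; the word lists are made up.

SADNESS_WORDS = {"loss", "alone", "grief", "broken", "dark"}
JOY_WORDS = {"hope", "light", "love", "together", "bright"}

def tone_scores(text: str) -> dict:
    """Return per-tone scores between 0 and 1 for a piece of text."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    total = len(words) or 1  # avoid dividing by zero on empty input
    return {
        "sadness": sum(w in SADNESS_WORDS for w in words) / total,
        "joy": sum(w in JOY_WORDS for w in words) / total,
    }

print(tone_scores("Alone in the dark, still holding on to hope."))
```

A lyric line that scores high on “sadness” and low on “joy” is, in miniature, the kind of signal that could steer a song toward the melancholy feel Fast Company heard in Not Easy.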

We get a lot of sci-fi material about how the robots are coming to kill us. So far, we have done far more to decimate our planet than robots ever have. Our world is in a serious state of environmental decline. National Geographic estimates “a rate of 100 to 1,000 species [is] lost per million per year, mostly due to human-caused habitat destruction and climate change”. Let that sink in: it may not seem like a lot, but every year the human population grows, we lose more of the biodiversity that makes our planet unique in our solar system. It’s also crucial to note that the reason behind those extinctions is twofold: us, and climate change. Not to get all doom and gloom, but although climate goes through natural cycles, NASA says “Most climate scientists agree the main cause of the current global warming trend is human expansion of the “greenhouse effect””. So humans are hitting planet Earth with a double whammy, wiping out biodiversity and accelerating climate change. There is also room for AI in learning more about the planet and potentially combating environmental decline. In the U.S., the National Science Foundation is creating “a 3-D living model of the entire planet. Called EarthCube, the digital representation will combine data sets provided by scientists across a whole slew of disciplines — measurements of the atmosphere and hydrosphere or the geochemistry of the oceans, for example — to mimic conditions on, above and below the surface. Because of the vast amounts of data the cube will encompass, it will be able to model different conditions and predict how the planet’s systems will respond. And with that information, scientists will be able to suggest ways to avoid catastrophic events or simply plan for those that can’t be avoided (such as flooding or rough weather) before they happen”.
EarthCube is going to change the way scientists understand the Earth’s various systems and give them an opportunity to share data with each other and act on predictions to help our world.

AI in advertising is already starting to take shape with the introduction of IBM’s Watson Ads, which is currently rolling out with some beta partners. The first partner was Campbell’s, which offered personalized recipes based on the user’s location, the weather and various ingredients; the Watson AI being used, Chef Watson, analyzes all the variables and makes suggestions the user may find useful. The weather aspect is key for Campbell’s since, as a soup company, the weather strongly dictates when people buy soup. It is no coincidence: IBM bought The Weather Channel, which is why weather is a critical component of the Watson Ads API and platform. Another partner is GlaxoSmithKline, which will use Watson Ads to promote its Theraflu brand of flu medicine, and in 2017 Toyota will become the next partner and the first automaker to join.

Beyond what is already happening in the space, companies will need to stop pushing so hard and think about how to converse with the user. Campbell’s recipes are a good estimation of what the user may be looking for at the time, but still a guess. Users are willing to invite brands in and connect with them if the brands provide useful information and understand what the user is looking for at that moment. In the future, companies will need to consider how AI can help them create a seamless experience from waking to sleeping. Currently, advertisers define integrated campaigns as campaigns that reach the user on platforms like mobile, social media and laptop. With convergence, “integrated” will look different: the touchpoints between brands and users should include alarm clocks, mirrors, watches, cars and windows. The user will be hyper-connected, so brands and advertisers should be too. This does not mean reducing the consumer to mere data and screens, though. AI will give brands and advertisers the power to analyze information and understand who the consumer is, what they are like, what they like and what they want to see; it gives brands the ability to learn about their users from a personal perspective and be part of a mutual conversation. Relevancy and timeliness will be key to delivering the messages brands and advertisers want to communicate. For example, if AI analysis knows that a user wakes up at 6am, enjoys drinking lattes and passes a Starbucks on the way to work, Starbucks could offer the user a coupon on their device for their favourite latte on the way in. The benefit is that the user has the option to accept (an option, not a push) a coupon for Starbucks (a useful advert) on their way to work (it is not disruptive; in fact, it fits seamlessly into their daily life).
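The Starbucks scenario above can be sketched as a simple decision rule. Everything here is hypothetical: the commute window, the user attributes and the function name are all assumptions for illustration, not a real ad platform’s API.

```python
from datetime import time

def should_offer_coupon(wake_time: time, likes_lattes: bool,
                        passes_store: bool, now: time) -> bool:
    """Offer a latte coupon only when it is relevant and timely:
    the user is awake, on a plausible morning commute, likes lattes,
    and actually passes the store."""
    commuting = wake_time <= now <= time(9, 0)  # assumed commute window
    return likes_lattes and passes_store and commuting

# Hypothetical user: wakes at 6am, likes lattes, passes a store at 7:30am.
print(should_offer_coupon(time(6, 0), True, True, time(7, 30)))   # relevant
print(should_offer_coupon(time(6, 0), True, True, time(13, 0)))   # too late
```

The point of the sketch is the shape of the logic: every condition is about relevance and timing to the user, and failing any one of them means no ad at all rather than an irrelevant one.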

The integration of AI into everyday items could be key to public acceptance of the technology. For example, if what we think of as a TV were a multi-touch, multi-use screen with built-in cameras and AI, the question of what to watch with friends could be settled more easily: using the camera, the AI could “read” the room, analyze facial expressions and emotions, then make suggestions based on the consensus. From there, since the group is already in the mood to watch something, providers like Amazon could use knowledge of what each user likes to snack on to ask whether they would like to place personal orders and have them delivered to the door. Another opportunity to integrate AI and advertising is through smart home technologies. AI could set a geofence around a user’s home so that it learns their entry and exit patterns and begins to anticipate their needs; when they are on their way home, depending on certain factors, they could receive branded recipes or restaurant suggestions. The synthesis of AI into advertising could create an experience that gives the user the option to seamlessly integrate brands into their lives in a meaningful way.
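The geofence idea comes down to a distance check against the home coordinates. A minimal sketch, assuming a circular fence and using the standard haversine great-circle formula (the coordinates and radius below are made up):

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(home: tuple, position: tuple, radius_m: float = 150) -> bool:
    """True when the position is within radius_m metres of home."""
    return haversine_m(*home, *position) <= radius_m

home = (51.5014, -0.1419)    # hypothetical home coordinates
nearby = (51.5015, -0.1420)  # a few metres away
far = (51.5550, -0.2795)     # across town

print(inside_geofence(home, nearby), inside_geofence(home, far))  # True False
```

Crossing the fence boundary, in or out, is the event that would trigger the anticipation logic described above, such as surfacing a recipe when the user is heading home.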

One of the biggest problems and ethical issues with AI in advertising is privacy. How much are users willing to give up to have a machine understand, learn and anticipate their needs so that timely, branded content can be served to them? There is also the potential for the AI to be easily hacked if it is not kept up to date; a workable solution could be over-the-air updates, similar to the way Tesla updates its car software. Another issue is pushback from users over how intrusive they find brand involvement in their lives. The pendulum could swing back, with users wanting even less brand involvement, in which case advertising at such a personal level could come across as a desperate effort to seek conversation and engagement.

In a very optimistic view, these technologies could be integrated within 10 years, if users continue to accept the involvement of brands in their lives and accept privacy terms and EULAs without scrutiny. In a pessimistic view, another deep recession will occur, and as history has shown, AI development tends to stall during these economic winters; in that scenario it could be 12 to 15 years before AI can truly develop into a properly intelligent machine. In a disaster scenario, the current experiments that DeepMind and Watson are undertaking will fail, or there will be massive data breaches during them, causing the public to lose faith in the private sector’s ability to achieve artificial intelligence and bringing the current public interest, drive and support behind AI development to a halt. Done well, AI will allow brands to understand users and build conversational interaction with them, instead of bombarding them with often irrelevant ads as happens today.