Who will we talk to tomorrow?
“Hello, I’m Michel. I’d like to book an appointment for a men’s haircut tomorrow, around 10 am.” What if Michel wasn’t Michel, but a Google Assistant algorithm? The person on the other end of the line obviously has no way of knowing… So who will we talk to tomorrow?
The assistants today
Today, voice assistants can perform countless tasks: basic functions such as giving the weather, news, or traffic; organizational features such as setting an alarm or timer, adding a reminder, or scheduling an event; and music management such as launching a playlist or a song, or raising and lowering the volume.
The first versions, however, were far from these capabilities: in the 1960s, IBM’s Shoebox could recognize only 16 words and the digits 0 to 9. But development was rapid, and in the early 2010s assistants arrived in our smartphones. Nowadays they are increasingly integrated into connected speakers, which are themselves linked to other devices in the home, giving users total voice control over their households.
There are also initiatives like Amazon Alexa’s Skills, which let anyone create their own request and the answer that goes with it: “Who is the best dad in the world?”, “It’s you”. All of this stays local on your own assistant, for strictly personal use. In general, though, we can see that the uses keep multiplying.
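To make the idea concrete, the request-and-answer logic behind such a custom skill can be sketched in a few lines of plain Python. This is a simplified illustration only, not Amazon’s actual Skills API (which requires the Alexa developer console and the ASK SDK); the names here are hypothetical:

```python
# Simplified sketch of a personalized "skill": a mapping from
# spoken requests to the custom answers the user defined.
custom_skill = {
    "who is the best dad in the world?": "It's you",
}

def handle_request(utterance: str) -> str:
    """Return the personalized answer for an utterance, or a fallback."""
    key = utterance.strip().lower()
    return custom_skill.get(key, "Sorry, I don't know that one.")

print(handle_request("Who is the best dad in the world?"))  # prints: It's you
```

The real Alexa platform adds intent matching and slot parsing on top of this, but the core idea is the same: the user pairs a trigger phrase with a canned response, and it only exists on their own device and account.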
However, according to the Smart Speaker Consumer Adoption Report, users only ask assistants basic things: managing music and getting information account for more than 50% of requests, and even these occur rarely, a few times a month on average.
So why do voice assistants offer ever more possibilities, while users seem to make only measured use of them? Partly for the “early adopters” who do everything by voice, even toasting bread. But above all for another reason: to collect information about us.
The loss of freedom
Because yes, by analyzing and learning about us, assistants come to know us better and better. They can then adapt to us: propose and suggest new things more likely to suit us, and directly or indirectly lead us to consume, to buy. Serge Tisseron says: “We are so used to machines simplifying our lives that we are ready to give them a part of our freedom.” Even if today most humans have not yet tamed voice assistants, it is true that technology in general makes our lives ever easier, sometimes in very extreme ways.
The best example is Amazon, which in 2013 filed a patent for delivering products to consumers before they even order them. More specifically, an algorithm analyzes the user’s browsing data, and based on the items viewed, the items most likely to be purchased are placed in warehouses or relay points as close to the home as possible. If the user then buys one of these items, it can be delivered in record time: a few hours, or even minutes.
So, as Serge Tisseron says, we can lose our freedom, and in two ways. First, if the assistants know more and more about us, the companies behind them do too, and they sell that data at a high price; and if information about us is sold, a part of us is sold with it. Second, if everything we do on the Internet, and through our connected objects, is scrutinized and analyzed, and an algorithm then proposes predefined choices to us, we are trapped in a kind of digital destiny.
Are we really ready to enjoy a happy life, pretending not to see that behind it lies a false freedom, where we are openly spied on and analyzed?
Replacing the human being
Another reason the assistants want to know more and more about us is that, in the long run, they could replace us completely. The idea may sound laughable, but many technological projects already aim to replace humans. Their purpose, for companies, is often to earn more money: for a specific task that does not require much thought, a robot will do it better and, above all, faster. On the other hand, a job is obviously destroyed, so it remains a delicate subject (though not ours here). This happens in every field, even unexpected ones: in 2015, clients of an adultery site discovered that the people they were having erotic conversations with were not women but female chatbots with predefined answers.
But large technology companies may push this trend even further. In May 2018, during its I/O 2018 keynote, Google surprised everyone by playing two phone recordings in which a man books an appointment at a hairdresser’s, then a table at a restaurant. Nothing extraordinary, until we learn just afterward that it was in fact Google Assistant, the company’s voice assistant, that made both calls. The speech was highly developed but punctuated by hesitations, to make the answers credible. The AI even managed to handle complex situations, such as scheduling conflicts or the number of seats. The conversations were so convincing that if Google had not said it was the assistant, no one would have guessed. And the problem is precisely there: the people on the other end of the line were almost certainly not aware they were talking to a robot.
Are we, therefore, ready to delegate more and more tasks to our assistants, even if, in the long run, we no longer know who we are talking to?
Knowing the nature of the interlocutor
This question matters because the nature of our exchanges depends on knowing the identity of our interlocutor. We don’t talk the same way to a friend, a teacher, a parent… and therefore not to a robot either. In 2016, a San Francisco blogger wrote an article literally titled “Amazon Echo Is Magical. It’s Also Turning My Kid Into an Asshole.” He denounced the fact that, in general, we speak to voice assistants as if they were slaves: “Give me the weather forecast”, “Fill the fridge”, “Tell Dad to come get some bread”. At the very least, the constant use of the imperative, usually associated with orders, gives this impression. But he raises something else interesting: when each of us first faced a voice assistant, whoever we were, we inevitably insulted it or mistreated it in some way; let’s not be ashamed, even professional journalists did it. This is natural: faced with something unknown, we tend to test its limits in order to understand it better and make it our own. Coming back to the blogger, he tells us he is afraid his child will get into the habit of talking that way and will eventually no longer distinguish between exchanges with the Echo and exchanges with his parents, ending up giving them orders too. Amazon (which has an answer for everything) had thought of that, offering the Amazon Echo Dot Kids Edition which, in addition to its far too long name, thanks the child for politely asking for something with a “please”.
This is obviously something to smile about, and it is not Serge Tisseron (again) who will tell you otherwise. In a 2018 article for the Huffington Post, he completely counterbalances the American blogger’s theory. He notes that most children already do not talk the same way with a friend as with their parents: we are not going to say “I wanna puke” or “It sucks” to our father. That is because children have been taught to differentiate between the two and to show respect when speaking to someone older, hierarchically superior, or unknown. According to Tisseron, we must do the same with voice assistants: teach our children that we don’t give orders to our parents the way we do to a robot. For him, there is no need to teach them to be polite with the machines; that would only create confusion, since a child sees the difference between a connected speaker and his mother. To counter the trend of replacing humans with voice assistants, we should instead claim the right to know whom we are addressing, so that we can adapt our speech; our exchanges will only be better for it.
So let’s teach our children to be assholes to the assistants! More seriously, we must learn to live with them, knowing that they are different and that our relationships with them will therefore be different. This requires knowledge. Once that is acquired, we can live with them serenely, and even the most skeptical (skepticism being a normal phenomenon in the early days of any technology) can attest that they make our lives much easier.
But this utopia could be challenged sooner than expected: today, in addition to knowing more and more about us and replacing us, robots are beginning to resemble us, both physically and mentally, thanks in particular to artificial intelligence. Sophia, an intelligent robot, even obtained Saudi citizenship in 2017. So how will we address beings who, apart from not being alive, are totally similar to us?