Chapter 15: The Cave

This is the fifteenth and final chapter of my book “Hippo — The Human Focused Digital Book”, which is available to buy now on all Amazon platforms.

‘Man — a being in search of meaning.’ Plato

Behaviour is the range of actions and mannerisms made by people in conjunction with themselves or their environment, which includes the other people around them as well as the inanimate physical environment. It is the response of a person to various stimuli or inputs, whether internal or external, conscious or unconscious, overt or covert, voluntary or involuntary. From a human focused perspective, behaviour consists of four key elements: a person, operations (actions, activities), interactions (relationships), and their properties. A single behaviour can be represented as a behaviour vector, and all the behaviours of a person or group of people can be represented as behaviour sequences and, together, as a multi-dimensional behaviour matrix.
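The idea of a behaviour vector and matrix can be sketched in a few lines of Python. The field names, vocabulary and encoding below are my own illustrative assumptions, not a formal scheme from the text:

```python
from dataclasses import dataclass, field

@dataclass
class Behaviour:
    """One behaviour: a person, an operation, an interaction, and properties."""
    person: str
    operation: str           # an action or activity, e.g. "open_app"
    interaction: str         # a relationship or target, e.g. "fitness_app"
    properties: dict = field(default_factory=dict)

    def as_vector(self, vocab):
        """Encode the behaviour as a 0/1 vector over a shared vocabulary."""
        return [1 if term in (self.operation, self.interaction) else 0
                for term in vocab]

# A behaviour sequence for one person...
vocab = ["open_app", "close_app", "fitness_app", "shopping_app"]
sequence = [
    Behaviour("alice", "open_app", "fitness_app"),
    Behaviour("alice", "open_app", "shopping_app"),
]
# ...stacked into a behaviour matrix: one row per behaviour in the sequence.
matrix = [b.as_vector(vocab) for b in sequence]
```

Each row of `matrix` is one behaviour vector; stacking many people's sequences gives the multi-dimensional matrix the chapter describes.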

From clocking our steps each day to shopping online and finding recipes for cupcakes, we all use apps. Some of those apps can track our behaviour, predict our purchasing habits and prioritise things for us, such as results. As it stands, this is quite hit or miss: it relies on tracking cookies rather than genuinely anticipating requirements from preferences that have been mined and understood, from who you are and what you want. Although the cost of development here is considerable, we may soon be buying phones whose apps employ this kind of behavioural analytics to deliver a personalised experience, built on technology that actually understands a person rather than on whatever an ad campaign is pushing. Behaviours will be stored and shared across platforms until the experience of living in a city becomes as breezy as walking along a beach: free we will be again to walk, no longer caught in the web of work we need to do moment to moment while it drowns us. Apps communicating with one another, coupled with a lack of dedicated interfaces to process the data, seems to rule out app-level, Cloud-based manipulation of that data. It is more likely that operating systems themselves will extract the behavioural information and store it in what has come to be known as the Cloud, where it will be kept and made available as a profile of the person. That means this: when a person adds a new fitness app, it will have access to their profile, updating preferences and trends and personalising the product experience in a way that is completely immediate.
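That last step, a newly installed app personalising itself from a shared profile, can be sketched as a toy in Python. The store, keys and settings here are invented for illustration; a real system would sit behind an operating-system API with consent controls:

```python
# A toy model of a shared behavioural profile kept "in the Cloud".
cloud_profiles = {}

def update_profile(person, key, value):
    """The operating system writes an observed preference to the shared profile."""
    cloud_profiles.setdefault(person, {})[key] = value

def personalise(person, app_defaults):
    """A newly installed app merges the shared profile over its own defaults."""
    profile = cloud_profiles.get(person, {})
    return {**app_defaults, **profile}

# Behaviour observed over time by other apps and the OS...
update_profile("alice", "preferred_units", "km")
update_profile("alice", "morning_person", True)

# ...so a brand-new fitness app is personalised the moment it is installed.
settings = personalise("alice", {"preferred_units": "miles", "reminders": "off"})
```

The point of the sketch is the direction of flow: the profile outlives any one app, so personalisation is immediate rather than re-learned from scratch.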

As anyone with an Apple Watch will tell you, though, it’s all about the notifications, not the apps. The taps on my wrist make it abundantly clear just how dumb most notifications are. Fiona Spruill, VP Product, Meetup

Most notifications use our location, our schedules and our reading habits only to upgrade themselves, and so the ‘now’ mentality spreads. But once behavioural analytics and machine learning marry human focused principles, the now mentality will fade, and a world where notifications make us better people has space to develop.

What was the last new app you installed where you actually appreciated the notifications enough to keep them on? Mike Davidson, VP Design, Twitter

The next step in the evolution of conversational interfaces, powered by Ai and triggered by emotions in your data, will be embedding functionality into conversations, allowing people to call a taxi or order flowers right from inside the messaging interface. Tasks like this have traditionally fallen to apps, most of which were shoved into some dusty folder in the corner of your phone for the one time a year you actually needed to launch them.

The use of such apps is labour-intensive as things stand. Sending flowers, for example, means running through a process that demands things from us: by the time we have downloaded the app, made a selection and placed the order, we face the next obstacle, because the flowers are in fact sold out, or the shop is closed. The next generation of interfaces, such as Siri and Google Now, will replace this untidy vehicle with a smoother ride as spontaneous interfaces using Ai emerge, drawing on people’s profiles, built and analysed in advance, for even the most difficult of nightmare tasks. Imagine, for example, the work that could be lifted from our shoulders when buying a house, trying to move abroad or, in time, managing an entire business in the most effective way. The worry that we will then have nothing to do, because everything is being managed for us, is misleading. Technology can in fact free us to realise who we are and what we actually want to do. Facing this is the root of the concern: dealing with the reality of our own dharma, our own life missions, is too terrifying for us, since then we would have to make that film, write that book, learn the piano or open that restaurant. So, in an attempt to bury our callings, we repress technology’s potential to free us.

If your product had a voice and it was talking to your customers twenty-four hours a day, what voice would you want it to have? One that is polite? Strict? Funny? Would you want it to come off as paranoid or trusting? Modest or likeable? View your product as a person, but unlike with friends and family, here we get to choose the voice it has. The personality you build in turn guides the copywriting, the interface, and the feature set. Whenever you make a change, ask yourself if that change fits your app’s personality. As we move away from screens, the personality of your product will have to become more automatic, anticipatory, and predictive.

The use of virtual assistants such as Siri, Cortana and Google Now has come about through advances in machine learning and natural language processing, which means we can frame our enquiries in the sort of language we use in daily living. We no longer find it strange to seek answers from inanimate helpers, and many of our devices, including wearable technology, incorporate this facility; we don’t need to issue standardised commands to get the responses we need. We do need to initiate the help, however, and the futuristic, omnipresent invisible assistant remains the creation of science fiction for now, while the question of allowing machines a form of autonomy carries weighty considerations: whether they will act in our best interests, and whether their actions will always be attributable to us, to who we are and our collective psychological condition. In order to answer these questions of power balance and trust, the designer needs extensive knowledge of human behaviour, and this is where the psychologist’s perspective on natural language processing is increasingly vital to design. Now is the right time for human focused solutions: whether it’s wearable or not is beside the point, we now talk with computers, and these things measure us.

The conversation surrounding human focused digital products, for me, is about a technically charged world designed deliberately to nourish and understand us rather than distract and replace us. The brilliance of where we have just arrived is that we can now do that. The past twenty years weren’t about getting it wrong; they were about getting us ready. We just had to make a lot of mistakes on the way to being ready. It’s been trial and error, and errors are good: they help us learn a lot. My big lesson learnt was that ninety-five percent of the digital vapour we created has been unneeded, and thanks to the power of Artificial Intelligence and complex machine learning algorithms, we can now look to a digital world with more value to society.

When digital products start to make use of complex sentiment analysis and natural language processing — mood tracking — we turn a corner into a land that is truly human focused. Natural language processing (NLP), in essence, refers to the way technology can derive meaning from a human sentence. The process correlates keywords, at a near-contextual level, against a reference database in order to produce human-like responses. This is the progression, and it has been experimental until now. But in 2016 we’ve just turned a corner.
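The keyword-against-a-database idea can be shown with a deliberately minimal sketch. The “database” here is just a dictionary, and the intents, keywords and replies are invented examples, nothing like a production NLP pipeline:

```python
# A minimal keyword-matching "intent database": score each intent by how
# many of its keywords appear in the sentence, then answer with the best.
intent_db = {
    "order_flowers": {"keywords": {"send", "order", "flowers", "bouquet"},
                      "reply": "Which bouquet would you like to send?"},
    "call_taxi":     {"keywords": {"taxi", "cab", "ride", "pickup"},
                      "reply": "Booking a taxi. Where are you going?"},
}

def respond(sentence):
    """Pick the intent with the largest keyword overlap and reply to it."""
    words = set(sentence.lower().split())
    best = max(intent_db, key=lambda name: len(intent_db[name]["keywords"] & words))
    if not intent_db[best]["keywords"] & words:
        return "Sorry, I didn't understand that."
    return intent_db[best]["reply"]

reply = respond("Please order some flowers for my mother")
```

Real systems replace the bag-of-keywords with statistical or neural models, but the shape is the same: map a free-form sentence onto a known intent, then act on it.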

When you couple NLP with sentiment analysis you start to understand the polarity of someone’s personality and self. To do this, a system often uses a scaling mechanism that rates words on a happy-to-sad scale; advanced methodologies include subjectivity/objectivity differentiation and feature-based sentiment analysis. It’s a brilliant thought: technology that understands me, rather than gets in my way. For this to work effectively, one of two things must hold in the experience: either the person interacts with the technology as honestly as possible, with no agenda, or the design is capable of monitoring the phone, messenger and organically generated data to learn meaning. There are obviously ethical concerns with the latter, but let us imagine a world where, if even for a moment, we trust things enough to allow them to get to know us. Real-time tracking would help any system become better.
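The happy-to-sad scaling mechanism can be sketched as a tiny lexicon-based scorer. The word scores below are illustrative values I have made up, not a real sentiment lexicon:

```python
# A toy happy-to-sad scale: each known word carries a score in [-1.0, +1.0],
# and a message's polarity is the average score of its known words.
lexicon = {"great": 0.8, "happy": 0.9, "fine": 0.2,
           "tired": -0.4, "awful": -0.8, "sad": -0.9}

def polarity(message):
    """Average the scores of recognised words; 0.0 means neutral or unknown."""
    scores = [lexicon[w] for w in message.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

upbeat = polarity("what a great day I am happy")   # positive value
gloomy = polarity("awful and sad")                 # negative value
```

Feature-based and subjectivity-aware methods refine this considerably, but the core move is the same: project language onto a numeric mood scale that software can act on.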

What if your operating system could calculate the time you’ve spent online chatting? Monitor those chats and deduce for what percentage of the time you were happy… or annoyed? Let’s take things a step further, wherein the operating system could even deduce with whom or what you are most comfortable interacting, and with whom or what you aren’t so eager to engage, and then explain the whys to you. For some, entering a field of study like psychology and reading two hundred books in the dark of the night beside ink-well and paraffin lamp is simply too time-consuming. What if all of that material could live within a body of intelligence that was then able to work with you, on your level of understanding, to teach you… about yourself, and this new and interesting field of study?
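The first two questions reduce to simple arithmetic once each chat already carries a mood label from upstream sentiment analysis. The data, labels and contact names below are invented for illustration:

```python
# Toy chat log: each entry has a contact, a duration in minutes, and a mood
# label assumed to come from a sentiment-analysis step like the one above.
chats = [
    {"contact": "ben", "minutes": 10, "mood": "happy"},
    {"contact": "ben", "minutes": 5,  "mood": "annoyed"},
    {"contact": "zoe", "minutes": 15, "mood": "happy"},
]

# Total time chatting, and the share of it spent happy.
total_minutes = sum(c["minutes"] for c in chats)
happy_minutes = sum(c["minutes"] for c in chats if c["mood"] == "happy")
happy_share = happy_minutes / total_minutes

# With whom do we spend the most happy time?
per_contact = {}
for c in chats:
    if c["mood"] == "happy":
        per_contact[c["contact"]] = per_contact.get(c["contact"], 0) + c["minutes"]
favourite = max(per_contact, key=per_contact.get)
```

On this toy log the person spent five sixths of their chat time happy, mostly with one contact; the interesting design problem is not the sum but explaining the whys, as the paragraph says.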

When I started all those years ago I never once thought that machines would ever replace humans. There will be no ‘deus ex machina’ — god from the machine — because, frankly, when asked to recall a good experience, most people will remember something that another person did for them, not how a machine worked, and that will never change. My theories of human focused digital tell us that being human trumps all technological augmentation. But designing technology from the point of view of the human, studying what it means to be human and adding that into everything we design, from physical interfaces to invisible ones powered by words, thoughts and new ways of attaching ourselves to knowledge, is the only sure way of ensuring that as the machines continue to evolve, so in turn does our humanity. These days, the latest Ai devices can not only gauge our mood from what we say and how we say it: they can also work out the best way to respond and cheer you up. All people seek happiness. This is without exception. Whatever different means they employ, they all tend to this end.

The need to uncover moments of happiness in our daily lives has led us to actively seek small pleasures, ones that stimulate an experience rippling into the fleeting moments we call happy. Perhaps if we replaced happiness with the word meaningfulness, we would find ourselves cultivating experiences able to influence not only our actual well-being but also that of our wider communities. Observation suggests that the more we interact with screens, the more we are trying to generate the fleeting moment. It seems unlikely we’ll keep discovering these pleasures indefinitely, be they fleeting or long term, by interacting with computer screens.

Truly harmonious interaction with future technologies will begin once the design of the physical interface falls back. Through the medium of voice control and visual interaction, we will be free to be ourselves while technology actually supports the way we live, having been designed, this time around, with us as its focus.

If you look at the history of computing, starting with the Jacquard loom in 1801, humans have always had to interact with machines in a really abstract, complex way. Over time, these methods have become less complex: the punch card gave way to machine code, machine code to the command line, the command line to the GUI. But machines still force us to come to them on their terms, speaking their language. The next step is for machines to finally understand us on our own terms, in our own natural words, behaviours, and gestures. Andy Goodman, Fjord

The personalised experience we can expect for our future selves will depend on an interface with visual and auditory capacity that we can interact with, ask questions of and receive responses from. It will be aware of us as individuals, matching our voice patterns and appearance to a profile it has built, and will respond to our requests by carrying out a multitude of online tasks. In a major step forward, it will accomplish these things while adapting to the moods it recognises through tone of voice and facial expression, using what it knows of our personality to keep us happy while itself remaining in a constant state of upgrade. Its relationship with us will be proactive, in as much as it will refrain from judgement, if that is what you require. By understanding and caring for our emotional pulse, it is highly adaptive, with one mission: service. This dynamic helps it reach further into our knowledge basin to further understand what makes us happy or unhappy, wealthy or miserable, trapped or free. The drive to develop such an intelligence is now necessary because, without an emotional dimension to the construct, technology’s ability to serve us further will soon reach its limit. Etiquette is what it will master next, as well as timing, and things we are perhaps not yet able to comprehend, perhaps even the nature of the self. This emotional intelligence is the result of storing and analysing human behaviour over sustained periods, pinned to the individual it serves. In this way it is genius, in a constant state of learning and re-learning. Next, it will know how to make predictions relative to your interests. For it to become emotionally aware is a great leap: acting, eventually, on emotional cues themselves and becoming self-aware, in time experiencing its own fears, angers and pleasures.
This is an interaction that will serve the person, as it experiences those states of being that are relevant to its progress as a piece of constantly upgrading Ai, with each and every facet of its blueprint built to serve you.

So the answer to our current digital dilemmas is, in fact, all too human. While the past favoured those who could retain and process information efficiently, the future belongs to those who can imagine a better world and work with others to make it happen, and as I stare with an almost panoramic view of my industry and how it reaches into the panorama of our world, I am seeing this future being imagined. There is a lot to look forward to, there is a lot to solve…

My optimistic view of the future is strange to some but when I look back over the last twenty years of digital ups and digital downs, it is in fact a wonderful revolution we started and all those people I’ve observed glued to monitors on desks and in their hands have played a part in advancing not only our industry, but civilisation beyond recognition. I mentioned earlier in the book this idea that perhaps we’ve all unknowingly played our part in a huge experiment on a scale never attempted and I stand by that. The gift we were really given as humans was in the ability to guess what might lie in front and plot courses toward it. To take gambles intentionally and unintentionally. As a species, we’ve pursued quite relentlessly as many altruistic goals as evil wrongs. It’s the eternal balance and I can’t imagine a world where that ever changes.

So during this vast experiment of the past two decades, and the new behaviours and exaggerated mental states we encouraged, we also did something else: we collected enough observational behavioural information to start to learn, and therefore stave off the silent killers that lurk amongst society. Hundreds of millions are spent around the world every day to find cures for the diseases that steal our loved ones. Cancer, Alzheimer’s, strokes, heart attacks, obesity and smoking. These are the grim reapers. But only a fraction of the money is spent trying to prevent the invisible killers: depression, bipolar disorder, suicide. It’s okay — they were never forgotten; we just couldn’t engage with the abstract. But we used the power technology gave us to build behavioural analytics platforms that track browsing behaviour to sell people better products and learn about consumerism. By chance, we also created the technology we needed to teach the silent sufferers and the lonely about themselves. To know thyself. To help thyself. What an exciting opportunity we created.

Here’s an interesting example of something we could stop in its tracks if we used technology better: suicide. Suicide is a global problem, particularly among men, because a large portion of men never talk to anyone about their problems: they feel ashamed, do not want to discuss feelings or simply ‘don’t want to make a fuss’. Now we have the technology to give these men something they are happy to talk with. There’s a generation of men whose adult lives have been marked by major social changes affecting the workplace and family, and they’re in pain. They don’t have a way of offloading all the stress and inner turmoil that speaking so often releases. Congratulations: the technology we created to smooth out checkout processes and guide people through financial trouble is also able to help these men talk freely. Ai could be the key we need to turn to unlock the cure for the silent killers.

In the technological world, we recently hit an inflection point that’s going to give us a huge opportunity to do what I always dreamed we could: help vulnerable people, even before they know they need help. Artificial Intelligence, long the subject of science fiction, is now mature enough to handle some of the most complex challenges, perhaps even the kinds of human challenges that would traditionally require a trained professional. Now, that statement might sound controversial, but I just want to point out that we have to train a machine to learn in much the same way that a human needs to train to become a fully qualified professional. So we really can get a machine on a par with, if not smarter than, a person on any given subject. The real challenge comes with empathy, because that’s a trait only humans can learn — right? To a degree, yes, but because of the sophisticated emotional and sentiment analytics software at our disposal, we can generate dialogues between a human and a machine that are so intelligent, and delivered in such an elegant way, that they become almost as good as, if not better than, the real thing. It’s also worth keeping in mind that Artificial Intelligence doesn’t sleep, doesn’t eat, can’t make mistakes or have a bad day, and can serve tens, hundreds, thousands, even unlimited numbers of people simultaneously. We can learn incredible amounts. What a wonderful world.

As computers come to understand those they serve, the need for a physical interface diminishes. Suddenly, it’s not about designing the next technological advance any more. Instead, we’re interacting in a way that’s not tied to a single device or location; we are building a relationship with a machine that learns to understand us and helps us understand ourselves. To know thyself has been the mission of every culture, bible, philosophical school, existential dream, impressionist’s great journey. From Van Gogh and Dalí to Nietzsche and Hemingway, Jesus and Buddha to Mohammed and Yoda, people have studied the self from all angles, upside down and inside out. Who am I, what am I? We have asked again and again and again. All tech is going to do is contribute to the discussion, and in doing so it will help us realise more about who we are, what we are and what we are not. It will help us access the greater reality that Plato showed we can all reach for, if only we are prepared to stand… and leave the cave.

In Plato’s Allegory of the Cave, we learn of the human condition: prisoners are born into a cave, chained to a wall with their necks fixed, and so they lack all knowledge of the world outside the cave. The shadows before them are the only reality they can see. In the departure from the cave, one prisoner becomes free of his chains and experiences the bright light and the immensity of the real world, but he is so overwhelmed that he in fact considers a return to the cave. The allegory is presented by Plato in The Republic, with Socrates asking the questions, to discuss the effect of the lack of education on our nature, on our human condition, on who we are. The prisoners who are born into bondage believe that the shadows they see on the cave wall are real, and that the sounds they hear come from the moving shadows, as they are unable to observe any other reality. All that they know is all that they can see, taste, hear, smell and touch, which is limited in their relative world. Once the freed prisoner is taken out of the cave, where the sun beams and the discomfort of his new reality seems unbearable, true questions emerge about our condition and the relationships we have within changing environments. Soon the prisoner adapts and is able to see things for what they are: the water, the sun, the stars and the moon. Then:

“Wouldn’t he remember his first home, what passed for wisdom there, and his fellow prisoners… Moreover, were he to return there, wouldn’t he be rather bad at their game, no longer being accustomed to the darkness?”

Then: if the prisoner returns to the cave, and tries to free his old friends to show them the light — would they not try to kill him?

“And if they were somehow able to get their hands on and kill the man who attempts to release and lead them up, wouldn’t they kill him?”

The darkness of the cave after the freed one has returned signifies ignorance of the reality the freed one has come to know. In the allegory, Socrates asks whether the remaining prisoners would not insist to the freed one that the upward journey had damaged his eyes and other senses, that what he recalls experiencing was not real, and that they therefore should not undertake such a journey, for their own safety.

Despite the danger of returning to the cave and facing the prisoners who may kill him, it is argued that the freed one should nonetheless return, knowing what may become of him, because the information he has must be told, spoken and communicated, whatever the risk.

The freed prisoner is like the philosopher, freed from the cave and stepping between worlds: his new reality of adventure and his old home, the great outdoors and the cave. Which direction he goes and when is up to him, and how much time he spends in his different worlds is for no one else to decide. Only him. Only her. Only you. Deep down, when we clear our minds of all of the messages, this is something we already know, because it is in the silence that we can feel the answers. From within that silence we reach into the future; it is like a tiny gap that lives between the now and the next bit, and it is there that pure imagination lives and creations manifest — and it is there that we must bravely step.

The advertising world and consumer society is a symptom of the noise in our minds. If we clear that, the fog in the world will also clear, leaving only the balance and flow we try to find from buying things. How we clear that noise in our own minds, I leave to you to explore. I have my ways. You will have yours.

I have tried to explain that understanding the human condition and designing for it is not only the right move forward, it is the trajectory of all emergent technologies anyway, because this is the direction nature requires. Telling you what the human condition is, and what we are and who we are, is too much to ask of one person, one designer, who just likes to ask why. And so I put it back to you, because it is in fact the study of the self, of who we are and what is good and bad, that we all need to embrace: not necessarily the answers to the questions, but simply the journey of the questions. This is our joint responsibility as we build into the future, for the benefit of all beings.

Think. Feel. Build stuff. Explore. See you in the future. That’s why.

Copyright © 2017 Nexus CX Ltd

Paperback ISBN 978-0-9955361-3-5
E-Book ISBN 978-0-9955361-2-8

First published in Great Britain in 2016 by Nexus CX Ltd

All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the authors.

This publication is designed to provide accurate and authoritative information for business owners. It is sold under the express understanding that any decisions or actions you take as a result of reading this book must be based on your commercial judgement and will be at your sole risk. The author will not be held responsible for the consequences of any actions and/or decisions taken as a result of any information given or recommendations made.