AI in the world of work

Mark Williams · Published in Predict · Sep 14, 2018 · 3 min read

Artificial Intelligence is an increasingly insistent force in the world of work. Opportunities around big data and machine learning have expanded the capabilities of AI, and articles about AI in the world of work are popping up like Whack-a-Moles on a daily basis. But while automation and robots are a cracking addition to HR Technology (our own Chatbot is one of the finest examples, of course), what do we really need to think about when we’re talking about AI in the world of work?

I will add to these thoughts in future blogs, but the first and most fundamental point that springs to mind is this:

AI will only be successful in the world of work if it is powered by true human thinking.

Technology is fundamentally neutral — it has no preconceptions, no innate conditioning, and no emotions. It is a clean slate — a blank book. That is, until humans get involved and the ‘author’ imprints their world view onto the technology.

“Behind every application of data or AI sits a human being with bias.”

Bias may sound like an ailment of sorts here, but it is not as scary as it seems. Often, humans create technology to solve problems — their hearts are in the right place and, provided the right thinking and planning has been done, the technology will complement human existence.

Where things can come unstuck is when the influencing aspects of our human, physical world are not fully taken into account.

Many of us will have experienced buying a piece of technology that promises to fix all of our problems, only to find that the people and processes sitting behind it weren’t ready for it. It is the same with AI in the world of work — if you are looking at forward-thinking systems, it’s important to put forward-thinking people and processes in place too, so that the technology complements people rather than the other way around.

But it goes deeper than this…

Let’s take this consideration of AI in a human context and look at it in a little more detail.

There has been recent talk of replacing human recruiters with AI that detects the slightest of expressions on applicants’ faces and draws conclusions about a person’s feelings and personality from the data collected. The assumption here is that AI can evaluate a job candidate better than a human can. And from a candidate’s perspective, there are ways of reacting to questions that improve the chances of being hired, and these can be learned.

The technology itself would not be flawless — it could be manipulated by the aforementioned learned responses, and questions may arise around the training of the machine: what faces has it been trained on? Is the data it is able to collect solid enough to rule people out of employment? On this basis, could it even be discriminatory?

I would suggest that this is not good AI practice in the world of work. In cases like these, technology should help humans do their jobs rather than replace them altogether — particularly when the recruitment process should be facilitating human-to-human interaction.

Hopefully this example shows that humans are still needed for many of the people aspects within the world of work, and that when it comes to AI, there are bad choices nestling amongst the many excellent ones. It is up to us, as humans, to make sure that our choices when it comes to AI in the world of work amplify rather than reduce human value, fairness and happiness.

In the words of Apple CEO Tim Cook: “I’m not worried about artificial intelligence giving computers the ability to think like humans. I’m more concerned about people thinking like computers, without values or compassion, without concern for consequences.”

Originally published at www.people-first.com.
